Another semester over – what have I learned?
Posted: October 29, 2012 Filed under: Education | Tags: community, curriculum, education, educational research, grand challenge, higher education, in the student's head, learning, measurement, student perspective, teaching, teaching approaches, thinking, work/life balance

Monday the 29th marks the last of the official teaching activities, barring the exam and associated marking, for my grand challenges in Computer Science course. It’s been a very busy time and I’ve worked very hard on it, but my students have worked even harder. Their final projects are certainly up where I wanted them to be and I believe that the majority of the course has gone well.
However, I’m running some feedback activities this week and I’ll find out how I can make it better for next year. At this stage we look like we’re going to have a reasonably large group for next year’s intake – somewhere in the region of 10-20 – and this is going to change how I run the course. Certain things just won’t work at that scale unless I start to take better advantage of group structure. I’ve already learnt a lot about how hard it is to connect students and data and, in our last meeting, I commented that I was thinking about making more data available in advance. The reply from the students was, roughly, “well, maybe, but we learned so much about how the data in the world is actually stored and treated”.
Hmm. Back to the drawing board maybe – but also I’m going to wait for all of the final feedback.
Do I have students who I would happily put out in front of a class to run it for a while, doubly so for a community involvement project, with the confidence that they’ll communicate confidently, competently and with passion? Well, yes, actually – although there’ll be a small range. (And now I’ve just made at least three people paranoid – that’s what you get for reading my blog.)
There is so much going on that the next two months are going to be pretty frantic. Next year is already shaping up to be a real make-or-break year for my career and that means I need to sit down with a list of things that I want to achieve and a list of things that I am and am not prepared to do in order to achieve things. The achievement list is going to be a while coming, as goal lists always are, but the will/won’t/want list is forming. Here’s a rough draft.
- I still want to teach and be pretty involved in teaching. That’s easy, as I’m not senior or research-loaded enough to get out of teaching. (I don’t really have a choice.)
- I need to have more time to work on my non-work projects. I’ve just spent all of a Sunday working and the only reason I stopped was that I couldn’t spell constructivist reliably any more. (Yes, that just took three tries.)
- I want to have enough time to spend with my students and not look rushed or feel guilty about the time.
- I want to have the time to be able to help out any colleagues who could use my assistance AND I want to have the time to be able to seek help from my colleagues!
- I don’t want to take on anything that I have to give up on, or push to the sidelines for next year.
So, obviously, it all boils down to time, planning and allocation of priorities. With that in mind, I’ll wish you a happy Monday or good weekend. I’m going to have some dinner.
I am a potato – heading towards caramelisation. (Programming Language Threshold Concepts Part II)
Posted: October 28, 2012 Filed under: Education | Tags: curriculum, design, education, educational problem, educational research, feedback, Generation Why, higher education, in the student's head, learning, measurement, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, threshold concepts, tools

Following up on yesterday’s discussion of some of the chapters in “Threshold Concepts Within the Disciplines”, I finished by talking about Flanagan and Smith’s thoughts on the linguistic issues in learning computer programming. This led me to the theory of markedness, a useful way to think about some of the syntactic structures that we see in computer programs. Let me introduce the concept of markedness with an example. Consider the pair of opposing concepts big/small. If you ask how ‘big’ something is, then you’re not actually assuming that the thing you’re asking about is ‘big’, you’re asking about its size. However, ask someone how ‘small’ something is and there’s a presumption that it’s actually small (most of the time). The same thing happens for old/young. Asking someone how old they are, bad jokes aside, is not implying that they are old – the word “old” here is standing in for the concept of age. This is an example of markedness in the relationship between lexical opposites: the assumed meaning (the default) is referred to as the unmarked form, while the marked form is more restrictive (in that it doesn’t subsume both concepts) and is generally not the default. You see this in gender and plural forms too. In Lions/Lionesses, Lions is the unmarked form because it’s the default and it doesn’t exclude the Lionesses, whereas Lionesses would not be the general form used (for whatever reasons, good or bad) and excludes the male lions.
Why is this important for programming languages? Because we often have syntactic elements (the structures and the tokens that we type) that take the form of opposing concepts where one is the default, and hence unmarked, form. Many modern languages employ object-oriented programming practices (themselves a threshold concept) that allow programmers to specify how the data that they define inside their programs is going to be used, even within that program. These practices include the ability to set access controls, which strictly define how you can use your code, how other pieces of code that you write can use your code, and how other people’s code can use it as well. The fundamental access control pair is public and private: one says that anyone can use this piece of code to calculate things or change this value; the other restricts such use or change to the owner. In the Java programming language, public dominates, by far, and can be considered unmarked. Private, however, changes the way that you can work with your own code and it’s easy for students to get this wrong. (To make it more confusing, there is another type of access control that sits effectively between public and private, which is an even more cognitively complex concept and is probably the least well understood of the lot!) One of the issues with any programming language is that deviating from the default requires you to understand what you are doing, because you are having to type more, think more and understand more of the implications of your actions.
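To make the public/private asymmetry concrete, here is a minimal sketch (the class and member names are mine, purely for illustration) of how the two modifiers change what the rest of the program is allowed to touch:

class BankAccount {
    private double balance;                  // only code inside BankAccount may read or change this

    public void deposit(double amount) {     // any code holding a BankAccount may call this
        balance = balance + amount;
    }
}

class SomewhereElse {
    void tryIt(BankAccount account) {
        account.deposit(10.0);               // fine: deposit is public
        // account.balance = 1000000;        // would not compile: balance is private to BankAccount
    }
}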
However, it gets harder, because we sometimes have marked/unmarked pairs where the unmarked element is completely invisible. If we didn’t have the need to describe how people could use our code then we wouldn’t need the access modifiers – the absence of public, private or protected wouldn’t signify anything. There are some implicit modes of operation in programming languages that can be overridden with keywords, but the introduction of these keywords doesn’t just illustrate a positive/negative asymmetry (as with big/small or private/public); it illustrates an asymmetry between “something” and “nothing”. Now, the presence of a specific and marked keyword makes it glaringly obvious that there has been an invisible assumption sitting in that spot the whole time.
One of these troublesome word/nothing pairs is found in several languages and consists of the keyword static, with no matching keyword. What do you think the opposite (and pair) of static is? If you’re like most humans, you’d think dynamic. However, not only is this not what this keyword actually means but there is no dynamic keyword that balances it. Let’s look at this in Java:
public static void main(String [] args) {...}
public static int numberOfObjects(int theFirst) {...}
public int getValues() {...}
You’ll see the static keyword twice. Where static isn’t used, however, there’s nothing at all, and this absence also has a definite meaning: it defines the default expectation of behaviour in the Java programming language. From a teaching perspective, this means that we now have a default context, with a separation between those tokens and concepts that are marked and unmarked, and it becomes easier to see why students will struggle with instance methods and fields (which is what we call things without static) if we start with static, and struggle with the concept of static if we start the other way around! What further complicates this is that every single program we write must contain at least one static method, because it is the starting point for the program’s execution. Even if you don’t want to talk about static yet, you must use it anyway (unless you want to provide the students with some skeleton code or a harness that removes this – but now we’ve put the wizard behind the curtain even more).
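As a rough illustration of the contrast (the class and its members are hypothetical), the marked static member belongs to the class itself and exists before any object does, while the unmarked instance members only exist once an object has been created:

class Student {
    static int numberOfStudents = 0;    // marked: one shared copy, owned by the class itself
    String name;                        // unmarked: one copy per Student object

    Student(String name) {
        this.name = name;
        numberOfStudents = numberOfStudents + 1;
    }

    static int howMany() {              // callable as Student.howMany(), no object required
        return numberOfStudents;
    }

    String getName() {                  // instance method: needs a particular Student to ask
        return name;
    }
}

// Student.howMany() works before any Student exists; new Student("Kim").getName() needs an object first.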
One other point I found very interesting in Flanagan and Smith’s chapter was the discussion of barriers and traps in programming languages, from Thimbleby’s critique of Java (1999). Barriers are the limitations on expressiveness that mean that what you want to say in a programming language can only be said in a certain way or in a certain place – which limits how we can explain the language and therefore affects learnability. As students tend to write their lines of code as and when they think of them, at least initially, these barriers will lead the students to make errors because they haven’t developed the locally valid computational idiom. I could ask for food in German as “please two pieces ham thick tasty” and, while I’ll get some looks, I’ll also get ham. Students hitting a barrier get confusing error messages that are given back to them at a time when they barely have enough framework to understand what these messages mean, let alone how to fix them. No ham for them!
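A small, hypothetical example of the kind of barrier I mean: Java insists that a statement can only live inside a method, inside a class, with a very particular signature, so the novice’s natural “write the line where I thought of it” approach is rejected with a message that explains none of this:

class Greeting {
    // A novice's first instinct is often to put System.out.println("Hello"); straight into the
    // class body, where they thought of it. The compiler rejects that with something cryptic
    // along the lines of "<identifier> expected", which says nothing about where statements belong.
    public static void main(String[] args) {
        System.out.println("Hello");    // the only place this line is allowed to live
    }
}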
Traps are unknown and unexpected problems, such as those caused by not using the right way to compare two things in a program. In short, it is possible in many programming languages to ask “does this equal that” and get back an answer of true or false that does not depend upon the values of this or that, but upon where they are being stored in memory. This is a trap. It is confusing for the novice to try to work out why the program is telling her that two containers that have the value “3” in them are not the same, because they are duplicates rather than aliases for the same entity. These traps can seriously trip someone up as they attempt to form a correct mental model and, in the worst case, can lead to magical or cargo-cult thinking once again. (This is not helped by languages that, despite saying that they will take such-and-such an action, take actions that further undermine consistent mental models without being obvious about it. Sekrit Java String munging, I’m looking at you.)
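A minimal sketch of the trap in Java (the values are arbitrary): == compares where two containers live in memory, while .equals() compares what is inside them, and the quiet sharing of String literals muddies the water further:

class EqualityTrap {
    public static void main(String[] args) {
        Integer a = new Integer(3);        // two separate boxes,
        Integer b = new Integer(3);        // each holding the value 3
        System.out.println(a == b);        // false: different objects in different places in memory
        System.out.println(a.equals(b));   // true: the values inside are the same

        String s = "cat";
        String t = new String("cat");
        System.out.println(s == t);        // false: t is a distinct object, even though the text matches
        System.out.println(s.equals(t));   // true
        System.out.println(s == "cat");    // true: literals are quietly shared (interned) behind the scenes
    }
}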
This way of thinking about languages is of great interest to me because, instead of talking about usability in an abstract sense, we are now discussing concrete benefits and deficiencies in the language. Is it heavily restrictive on what goes where, such as Pascal’s pre-declaration of variables or Java’s package import restrictions? Does the language have a large number of unbalanced marked/unmarked pairs where one of them is invisible and possibly counterintuitive, such as static? Is it easy to turn a simple English statement into a programmatic equivalent that does not do what was expected?
The authors suggested ways of dealing with this, including teaching students about formal grammars for programming languages – effectively treating this as learning a new language, because the grammar, syntax and semantics are very, very different from English. (Suggestions included Wittgenstein’s Sprachspiel, the language game, which will be a post for another time.) Another approach is to start from logic and then work forwards, turning this into forms that will then match the programming languages and giving us a Rosetta stone between English speakers and program speakers.
I have found the whole book very interesting so far and, obviously, so too this chapter. Identifying the problems and their locations, regrettably, is only the starting point. Now I have to think about ways to overcome this, building on what these and other authors have already written.
A Difficult Argument: Can We Accept “Academic Freedom” In Defence of Poor Teaching?
Posted: October 26, 2012 Filed under: Education | Tags: advocacy, authenticity, community, curriculum, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, measurement, principles of design, reflection, student perspective, teaching, teaching approaches, thinking, tools, vygotsky

Let me frame this very carefully, because I realise that I am on very, very volatile ground with any discussion that raises the spectre of a right or a wrong way of teaching. The educational literature is equally careful about this and, very sensibly, you read about rates of transfer, load issues, qualitative aspects and quantitative outcomes, without any hard and fast statements such as “You must never lecture again!” or “You must use formative assessment or bees will consume your people!”
I am aware, however, that we are seeing a split between those people who accept that educational research has something to tell them, which may possibly override personal experience or industry requirement, and those who don’t. But, and let me tread very carefully indeed, while those of us who accept that the traditional lecture is not always the right approach realise that the odd lecture (or even an entire course of lectures) won’t hurt our students, there is a far more damaging and fundamental disagreement.
Does education transform in the majority of cases or are most students ‘set’ by the time that they come to us?
This is a key question because it affects how we deal with our students. If there are ‘good’ and ‘bad’ students, ‘smart’ and ‘dumb’ or ‘hardworking’ and ‘lazy’, and this is an immutable characteristic, then a lot of what we are doing in order to engage students, to assist them in constructing knowledge and to place them into collaborative environments, is a waste of their time. They will either get it (if they’re smart and hardworking) or they won’t. Putting a brick next to a bee doesn’t double your honey-making capacity or your ability to build houses. Except, of course, that students are not bees or bricks. In fact, there appears to be a vast amount of evidence that says that such collaborative activities, if set up correctly in accordance with the established work in social constructivism and cognitive apprenticeship, will actually have the desired effect and you will see positive transformations in students who take part.
However, there are still many activities and teachers who continue to treat students as if they are always going to be bricks or bees. Why does this matter? Let me digress for a moment.
I don’t care whether vampires, werewolves or zombies actually exist and, for the majority of my life, it is unlikely to make any difference to me. However, if someone else is convinced that she is a vampire and she attacks me and drains my blood, I am just as dead as if she were not a vampire – of course, I now will not rise from the dead, but this is of little import to me. What matters is the impact upon me of someone else’s practice of their beliefs.
If someone strongly believes that students are either ‘smart enough’ to take their courses or not, doesn’t care who fails or how many fail, and believes that it is purely the role of the student to have or to spontaneously develop this characteristic, then their practice will most likely have a negative impact on at least some students. We know about stereotype threat. We’re aware of inherent bias. In this case, we’re no longer talking about right or wrong teaching (thank goodness), we’re talking about a fundamentally self-fulfilling prophecy as a teaching philosophy. This will have as great an impact on those who fail or withdraw as the transformation pathway does on those who become better students and develop.
It is, I believe, almost never about the bright light of our most stellar successes. Perhaps we should always be held to answer (or at least explain) for the number and nature of those who fall away. I have been looking for statements of student rights across Australia and the Higher Education sites all seem to talk about ‘fair assessment’ and ‘right of appeal’, as well as all of the student responsibilities. The ACARA (Australian Curriculum and Reporting Authority) website talks a lot about opportunities and student needs in schools. What I haven’t yet found is something that I would like to see, along these lines:
“Education is transformational. Students are entitled to be assessed on their own performance, in the context of their opportunities.”
Curve grading, which I’ve discussed before, immediately forces a false division of students into good and bad, merely by ‘better’ students existing. It is hard to think of something that is fundamentally less fair or appropriate to the task if we accept that our goal is improvement to a higher standard, regardless of where people start. In a curve graded system, the ‘best’ person can coast because all they have to do is stay one step ahead of their competition and natural alignment and inflation will do the rest. This is not the motivational framework that we wish to establish, especially when the lowest realise that all is lost.
I am a long distance runner and my performances will never set the world on fire. To come first in a race, I would have to be in a small race with very unfit people. But no-one can take away my actual times for my marathons and it is those times that have been used to allow me to enter other events. You’ll note that in the Olympics, too. Qualifying times are what are used because relative performance does not actually establish any set level of quality. The final race? Yes, we’ve established competitiveness and ranking becomes more important – but then again, entering the final heat of an Olympic race is an Olympian achievement. Let’s not quibble on this, because this is the equivalent of Nobel and Turing awards.
And here is the problem again. If I believe that education is transformative and set up all of my classes with collaborative work, intrinsic motivation and activities to develop self-regulation, then that’s great but what if it’s in third-year? If the ‘students were too dumb to get it’ people stand between me and my students for the first two years then I will have lost a great number of possibly good students by this stage – not to mention the fact that the ones who get through may need some serious de-programming.
Is it an acceptable excuse that another academic should be free to do what they want, if what they want to do is having an excluding and detrimental effect on students? Can we accept that, if it means that we have to swallow that philosophy? If I do, does it make me complicit? I would like nothing more than to let people do what they want (hey, I like that as much as the next person) but, in thinking about the effect of some of the decisions being made, is the notion of personal freedom in what is ultimately a public service role still a sufficiently good argument for not changing practice?
Students and Programming: A stroll through the archives in the contemplation of self-regulation.
Posted: October 23, 2012 Filed under: Education | Tags: community, education, educational problem, educational research, higher education, in the student's head, measurement, resources, sigcse, teaching, teaching approaches, thinking, time banking, universal principles of design

I’ve been digging back into the foundations of Computer Science Education to develop some more breadth in the area and trying to fill in some of the reading holes that have developed as I’ve chased certain ideas forward. I’ve been looking at Mayer’s “The Psychology of How Novices Learn Computer Programming” from 1981, following it forward to a number of papers including McCracken (Chair) et al’s “A multi-national, multi-institutional study of assessment of programming skills of first-year CS students”. Among the many interesting items presented in this paper was a measure of Degree of Closeness (DoC): a quantification of how close the student had come to providing a correct solution, assessed on their source code. The DoC is rated on a five-point scale, with 1 being the furthest from a correct solution. These “DoC 1” students are of a great deal of interest to me because they include those students who submitted nothing – possible evidence of disengagement, or just of the student being overwhelmed. In fact, the DoC 1 students were classified into three types:
- Type 1: The student handed up an empty file.
- Type 2: The student’s work showed no evidence of a plan.
- Type 3: The student appeared to have a plan but didn’t carry it out.
Why did the students do something without a plan? The authors hypothesise that the student may have been following a heuristic approach, doing what they could, until they could go no further. Type 3 was further subdivided into 3a (the student had a good plan or structure) and 3b (the student had a poor plan or structure). All of these, however, have one thing in common and that is that they can indicate a lack of resource organisation, which may be identified as a shortfall in metacognition. On reflection, however, many of these students blamed external factors for their problems. The Type 1 students blamed the time that they had to undertake the task, the lab machines, their lack of familiarity with the language. The DoC 5 students (from the same school) described their difficulties in terms of the process of creating a solution. Other comments from DoC 1 and 2 students included information such as insufficient time, students “not being good” at whatever this question was asking and, in one case, “Too cold environment, problem was too hard.” The most frequent complaint among the low performing students was that they had not had enough time, the presumption being that, had enough time been available, a solution was possible. Combine this with the students who handed up nothing or had no plan and we must start to question this assertion. (It is worth noting that some low-performing students had taken this test as their first ever solo lab-based examination so we cannot just dismiss all of these comments!)
The paper discusses a lot more and is rather critical of its own procedure (perhaps the time pressure was too high, the specifications a little cluttered, highly procedural rather than OO) and I would not argue with the authors on any of this but, from my perspective, I am zooming in on the issue of time because, if you’ve read any of my stuff before, you’ll know that I am working in self-regulation and time management. I look at the Types of DoC 1 students and I can see exactly what I saw in my own student timeliness data and reflection reports: a lack of ability to organise resources. This is now, apparently, combined with a persistent belief that fixing this was beyond the student’s control. It’s unsurprising that handing up nothing suddenly became a valid option.
The null submission could be a clear indicator of a problem with organisational ability, where the student can’t muster any kind of solution to the problem at all. Not one line of code or an approximate solution. What is puzzling about this is that the activity was, in fact, heavily scheduled. Students sat in a lab and undertook it. There was no other task for them to perform except to do this code in either 1 or 1.5 hours. To not do anything at all may be a reaction to time pressure (as the authors raised) or it could be complete ignorance of how to solve the problem. There’s too much uncertainty here for me to say much more about this.
The “no plan” solution can likely be explained by the heuristic focus and I’ve certainly seen evidence of it. One of the most unforgiving aspects of the heuristic solution is that, without a design, it is easy to end up in a place where you are running out of time and have no idea of where to go to solve unforeseen problems that have arisen. These students are the ones who I would expect to start on the last day that something is due and throw together a solution, working later and panicking more as they realise that their code isn’t working. Having done a bit here and a piece there, they may cobble something together and hand it up, but it is unlikely to work and is never robust.
The “I planned it but I couldn’t do it” group fall heavily into the problem space of self-regulation, because they had managed to organise their resources – so why didn’t anything come out? Did they procrastinate? Was their meta-planning process deficient, in that they spent most of their time perfecting a plan and not leaving enough time to make it happen? I have a number of students who have a tendency to go down the rabbit hole when chasing design issues and I sometimes have to reach down, grab them by the ears and haul them out. The reality of time constraints is that you have to work out what you can do and then do as much as you can with that time.
This is fascinating because I’m really trying to work out at which point students will give up, and DoC 1 basically amounts to an “I didn’t manage it” mark in my local system. I have data that shows the marks students get from automated marking (immediate assessment), so I can look to see how long people will keep trying to get above what would (effectively) be DoC 1, and probably up to around DoC 3. (The paper defines DoC 3 as “In reading the source code, the outline of a viable solution was apparent, including meaningful comments, stub code, or a good start on the code.” This would be enough to meet our assessment requirements, although the mark wouldn’t be great.) DoC 1 would, I suspect, amount to “no submission” in many cases, so my DoC 1 students are those who stayed enrolled (and sat the exam) but never created a repository or submission. (There are so many degrees of disengagement!)
I, of course, now have to move further forward along this paper line and I will hopefully intersect with my ‘contemporary’ reading into student programming activity. I will be reading pretty solidly on all of this for the upcoming months as we try to refine the time management and self-regulation strategies that we’ll be employing next year.
Making Time For Students
Posted: October 20, 2012 Filed under: Education | Tags: education, higher education, measurement, resources, student perspective, teaching, teaching approaches, thinking, tools, workload

I was reminded of my slightly overloaded calendar today as students came and went throughout the day, I raced in and out of project meetings, and RV and I worked on some papers that we’re trying to get together for an upcoming submission date in the next few months. I wish I could talk about the research but, given that it will all have to go into peer review and some of the people reading this may end up being on those panels, it will all have to wait until we get accepted or it comes back on fire with a note written in blood saying “Don’t call us…”
For those following the Australian Research scene, you might know that the Australian Federal Government had put a hold on releasing information on key research funding schemes and that this has led to uncertainty for those people whose salaries are paid by research grants. Why is this important in a learning and teaching blog? Because the majority of Higher Education academics are involved in research, teaching and administration but it’s not too much of a generalisation to say that those who are the most successful have substantial help on the research front from well-established groups and staff who are paid to do research full-time.
Right now, as I write this, our postdoc (RV) is reviewing the terminology of certain aspects of the discipline to allow us to continue our research. RV is running citation analyses, digging through papers, peering at my scrawl on the whiteboard and providing a vital aspect to the project: uninterrupted dedication to the research question. I’m seeing students, holding meetings, dealing with technical problems, worrying about my own grants, preparing for a new course roll-out on Monday… and writing this. RV’s role is rapidly becoming critical to my ability to work.
There are thousands of dedicated researchers like RV across Australia and it is easy to quantify their contribution to research, but easy to overlook their implicit benefit in terms of learning and teaching. Every senior academic who is involved in research and teaching will most likely only still be teaching because they have someone to carry on the research and maintain the focus and continuity that only comes from having one major area to work on.
I think of it in terms of gearing. When I’m talking to other researchers, I use one set of mental gears. Inside my own group, I use another because we are all much more closely aligned. I use a completely different set when I talk to students and this set varies by year level, course and student! Making time for students is not just a case of having an hour in my calendar. Making time for students is a matter of making the mental space for a discussion that will be at the appropriate level. It’s having enough time to have a chat rather than a rapid-fire exchange. I don’t always succeed at this because far too many of my students apologise to me for taking up my time. Argh! My time is student time! It’s what I get a good 40% of my salary for! (Not that we’re counting. Like most academics, when asked what percentage of my time I spent on the three areas of research, teaching and admin, I say 50,50,40. 🙂 )
Now I am not, by any means, a senior academic and I am very early on in this process, so you can imagine how important those research staff are going to be in keeping projects going for senior staff who are having to make those gear changes at a very rapid speed across much larger domains. Knowledge workers need the time and headspace to think and switching context takes up valuable time, as well as tiring you out if you do it often enough.
On that basis, the recent news that the Government is unfreezing the medical research schemes and at least some of the major awards for everyone else is good news. My own grant in this area is highly unlikely to get up – my relief is not actually for myself, here – but we are already worried about an increased rate of departure for those researchers who are concerned about having a job next year and are, because of their skills and experience, highly mobile. The impact of these people leaving will not just be felt in terms of research output, which has a multi-year lag, but will be felt immediately wherever learning and teaching depended upon someone having the time and mental space to do it, because they had a member of the research staff supporting their other work. Universities are a complex ecosystem and there are very important connections between staff in different areas and areas of focus that are not immediately apparent when you make the simplistic distinction of staff (professional and academic) and, for academics, research/teaching/admin, research/admin, teaching/admin, pure research and pure teaching. The number of courses that I have to teach depends upon the number of staff available to teach, as well as the number of courses and students, and the number of staff (or their available hours) is directly affected by the number of people who help them.
It’s good news that the research funds are starting to unfreeze because it will say to the people who are depending upon grant money that an answer is coming soon. It’s also saying to the rest of us that we can start to think about planning and allocation for 2013 with more certainty, because the monies will be coming at some point.
This, in turn, stops me having to worry about things like contingency plans, who is going to be working with me, and how I will fund research assistants into 2014 because now I have a possibility of a grant, rather than a placeholder in a frozen scheme. This reduces my current overheads (for a while) and frees up some headspace. With any luck, the next student who walks into my office will not realise exactly how busy I am – and that’s the way that I like it.
Thoughts on Overloading: I Still Appear to be Ignoring My Own Advice
Posted: October 18, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, education, educational research, feedback, higher education, learning, measurement, reflection, resources, teaching approaches, thinking, time banking, work/life balance, workload

I was musing recently on the inherent issues with giving students more work to do, if they are already overloaded to a point where they start doing questionable things (like cheating). A friend of mine is also going through a contemplation of how he seems to be so busy that fitting in everything that he wants to do keeps him up until midnight. My answer to him, which includes some previous comments from other people, is revealing – not least because I am talking through my own lens, and I appear to still feel that I am doing too much.
Because I am a little too busy, I am going to repost (with some editing to remove personal detail and clarify) what I wrote to him, which distils a lot of my thoughts over the past few months on overloading. This was all in answer to the question: “How do people fit everything in?”
You have deliberately committed to a large number of things and you wish to perform all of them at a high standard. However, to do this requires that you spend a very large amount of time, including those things that you need to do for your work.
Most people do one of three things:
- they do not commit to as much,
- they do commit to as much but do it badly, or
- they lie about what they are doing because claiming to be a work powerhouse is a status symbol.
A very, very small group of people can buck the well-documented long-term effects of overwork, but these people are in the minority. I would like to tell you what generally happens to people who over-commit, while readily admitting that this might not apply to you. Most of this is based on research, informed by bitter personal experience.
The long-term effects of overwork (as a result of over-commitment) are sinister and self-defeating. As fatigue increases, errors increase. The introduction of errors requires you to spend more time to achieve tasks because you are now doing the original task AND fixing errors, whether the errors are being injected by you or they are actually just unforeseen events because your metacognitive skills (resource organisation) are being impaired by fatigue.
However, it’s worse than that because you start to lose situational awareness as well. You start to perform tasks because they are there to perform, without necessarily worrying about why or how you’re doing it. Suddenly, not only are you tired and risking the introduction of errors, you start to lose the ability to question whether you should be carrying out a certain action in the first place.
Then it gets worse again because not only do obstacles now appear to be thrown up with more regularity (because your error rates are going up, your frustration levels are high and you’re losing resource organisational ability) but even the completion of goals merely becomes something that facilitates more work. Having completed job X, because you’re over-committed, you must immediately commence job X+1. Goal completion, which should be a time for celebration and reflection, now becomes a way to open more gateways of burden. Goals delayed become a source of frustration. The likely outcome is diminished enjoyment and an encroaching sense of work, work, work.
[I have removed a paragraph here that contained too much personal detail of my friend.]
So, the question is whether your work is too much, given everything else that you want to do, and only you can answer this question as to whether you are frustrated by it most of the time and whether you are enjoying achieving goals, or if they are merely opening more doors of work. I don’t expect you to reply on this one but it’s an important question – how do you feel when you open your eyes in the morning? How often are you angry at things? Is this something that you want to continue for the foreseeable future?
Would you still do it, if you didn’t have to pay the rent and eat?
Regrettably, one of the biggest problems with over-commitment is not having time to adequately reflect. However, long term over-commitment is clearly demonstrated (through research) to be bad for manual labourers, soldiers, professionals, and knowledge workers. The loss of situational awareness and cognitive function are not good for anyone.
My belief is that an approach based on listening to your body and working within sensible and sustainable limits is possible for all aspects of life, but I readily acknowledge that the transition away from over-commitment to sustainable commitment can be very, very hard. I’m facing that challenge at the moment and know that it is anything but easy. I’m not trying to lecture you, I’m trying to share my own take on it, which may or may not apply. However, you should always feel free to drop by for a coffee to chat, if you like, and I hope that you have some easier and less committed times ahead.
Reading through this, I am reminded of how much work I have left to do in order to reduce my overall commitments to sensible levels. It’s hard, sometimes, because there are so many things that I want to do, but I can easily point to a couple of indicators that tell me that I still don’t quite have the balance right. For example, I’m managing my time at the moment, but that’s probably because being unable to run has given me roughly 8 hours a week back to spend elsewhere. I am getting things done because I am using up almost all of that running time by working in it instead. And that, put simply, means I’m regularly working longer hours than I should.
Looking back at the advice, I am projecting my own problems with goals: completing something merely unlocks new burdens, and there is very little feeling of finalisation. I am very careful to try and give my students closure points, guidance and a knowledge of when to stop. Time to take a weekend and reflect on how I can get that back for myself – and still do everything cool that I want to do! 🙂
Industry Speaks! (May The Better Idea Win)
Posted: October 16, 2012 Filed under: Education | Tags: alan noble, community, data visualisation, design, education, entrepreneurship, Generation Why, grand challenge, higher education, learning, measurement, MIKE, principles of design, teaching, teaching approaches, thinking, tools, universal principles of design

Alan Noble, Director of Engineering for Google Australia and an Adjunct Professor with my Uni, generously gave up a day today to give a two-hour lecture on distributed systems and scale to our third-year Distributed Systems course, and another two-hour lecture on entrepreneurship to my Grand Challenge students. Industry contact is crucial for my students because the world inside the Uni and the world outside the Uni can be very, very different. While we try to keep industry contact high in later years, and we’re very keen on authentic assignments that tackle real-world problems, we really need the people who are working for the bigger companies to come in and tell our students what life would be like working for Google, Microsoft, Saab, IBM…
My GC students have had a weird mix of lectures that have been designed to advance their maturity in the community and as scientists, rather than their programming skills (although that’s an indirect requirement), but I’ve been talking from a position of social benefit and community-focused ethics. It is essential that they be exposed to companies, commercialisation and entrepreneurship as it is not my job to tell them who to be. I can give them skills and knowledge but the places that they take those are part of an intensely personal journey and so it’s great to have an opportunity for Alan, a man with well-established industry and research credentials, to talk to them about how to make things happen in business terms.
The students I spoke to afterwards were very excited and definitely saw the value of it. (Alan, if they all leave at the end of this year and go to Google, you’re off the Christmas Card list.) Alan focused on three things: problems, users and people.
Problems: Most great companies find a problem and solve it but, first, you have to recognise that there is a problem. This sometimes just requires putting the right people in front of something to find out what these new users see as a problem. You have to be attentive to the world around you but being inventive can be just as important. Something Alan said really resonated with me in that people in the engineering (and CS) world tend to solve the problems that they encounter (do it once manually and then set things up so it’s automatic thereafter) and don’t necessarily think “Oh, I could solve this for everyone”. There are problems everywhere but, unless we’re looking for them, we may just adapt and move on, instead of fixing the problem.
Users: Users don’t always know what they want yet (the classic Steve Jobs approach), they may not ask for it or, if they do ask for something, what they want may not yet be available for them. We talked here about a lot of current solutions to problems but there are so many problems to fix that would help users. Simultaneous translation, for example, over telephone. 100% accurate OCR (while we’re at it). The risk is always that when you offer the users the idea of a car, all they ask for is a faster horse (after Henry Ford). The best thing for you is a happy user because they’re the best form of marketing – but they’re also fickle. So it’s a balancing act between genuine user focus and telling them what they need.
People: Surround yourself with people who are equally passionate! Strive for a culture of innovation and getting things done. Treasure your agility as a company and foster it if you get too big. Keep your units of work (teams) smaller if you can and match work to the team size. Use structures that encourage a short distance from top to bottom of the hierarchy, which allows ideas to move up, down and sideways. Be meritocratic and encourage people to contest ideas, using facts and articulating their ideas well. May the Better Idea Win! Motivating people is easier when you’re open and transparent about what they’re doing and what you want.
Alan then went on to speak a lot about execution, the crucial step in taking an idea and having a successful outcome. Alan had two key tips.
Experiment: Experiment, experiment, experiment. Measure, measure, measure. Analyse. Take it into account. Change what you’re doing if you need to. It’s ok to fail but it’s better to fail earlier. Learn to recognise when your experiment is failing – and don’t guess, experiment! Here’s a quote that I really liked:
When you fail a little every day, it’s not failing, it’s learning.
Risk goes hand-in-hand with failure and success. Entrepreneurs have to learn when to call an experiment and change direction (pivot). Pivot too soon, you might miss out on something good. Pivot too late, you’re in trouble. Learning how to be agile is crucial.
Data: Collect and scrutinise all of the data that you get – your data will keep you honest if you measure the right things. Be smart about your data and never copy it when you can analyse it in situ.
(Alan said a lot more than this over 2 hours but I’m trying to give you the core.)
Alan finished by summarising all of this as his Three As of Entrepreneurship, and then discussing why we seem to be hitting an entrepreneurship growth spurt in Australia at the moment. The Three As are:
- Audit your data
- Having Audited, Admit when things aren’t working
- Once admitted, you can Adapt (or pivot)
As to why we’re seeing a growth of entrepreneurship, Australia has a population who are some of the highest early adopters on the planet. We have a high technical penetration, over 20,000,000 potential users, a high GDP and we love tech. 52% of Australians have smart phones and we had so many mobile phones, pre-smart, that it was just plain crazy. Get the tech right and we will buy it. Good tech, however, is hardware+software+user requirement+getting it all right.
It’s always a pleasure to host Alan because he communicates his passion for the area well but he also puts a passionate and committed face onto industry, which is what my students need to see in order to understand where they could sit in their soon-to-be professional community.
Dealing with Plagiarism: Punishment or Remediation?
Posted: October 15, 2012 Filed under: Education | Tags: advocacy, community, curriculum, design, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, in the student's head, learning, measurement, plagiarism, principles of design, reflection, student perspective, teaching, teaching approaches, thinking, tools, work/life balance

I have written previously about classifying plagiarists into three groups (accidental, panicked and systematic), trying to get the student to focus on the journey rather than the objective, and how overwork can produce situations in which human beings do very strange things. Recently, I was asked to sit in on another plagiarism hearing and, because I’ve been away from the role of Assessment Coordinator for a while, I was able to look at the process with an outsider’s eye, a slightly more critical view, to see how it measures up.
Our policy is now called an Academic Honesty Policy and is designed to support one of our graduate attributes: “An awareness of ethical, social and cultural issues within a global context and their importance in the exercise of professional skills and responsibilities”. The policy’s principles are pretty straightforward:
- Assessment is an aid to learning and involves obligations on the part of students to make it effective.
- Academic honesty is an essential component of teaching, learning and research and is fundamental to the very nature of universities.
- Academic writing is evidence-based, and the ideas and work of others must be acknowledged and not claimed or presented as one’s own, either deliberately or unintentionally.
The policy goes on to describe what student responsibilities are, why they should do the right thing for maximum effect of the assessment and provides some handy links to our Writing Centre and applying for modified arrangements. There’s also a clear statement of what not to do, followed by lists of clarifications of various terms.
Sitting in on a hearing, looking at the process unfolding, I can review the overall thrust of this policy and be aware that it has been clearly identified to students that they must do their own work but, reading through the policy and its implementation guide, I don’t really see what it provides to sufficiently scaffold the process of retraining or re-educating students if they are detected doing the wrong thing.
There are many possible outcomes from the application of this policy, starting with “Oh, we detected something but we turned out to be wrong”, going through “Well, you apparently didn’t realise so we’ll record your name for next time, now submit something new ” (misunderstanding), “You knew what you were doing so we’re going to give you zero for the assignment and (will/won’t) let you resubmit it (with a possible mark cap)” (first offence), “You appear to make a habit of this so we’re giving you zero for the course” (second offence) and “It’s time to go.” (much later on in the process after several confirmed breaches).
Let me return to my discussions on load and the impact on people from those earlier posts. If you accept my contention that the majority of plagiarism cheating is minor omission or last-minute ‘helmet fire’ thinking under pressure, then we have to look at what requiring students to resubmit will do. In the case of the ‘misunderstanding’, students may also be referred to relevant workshops or resources in order to improve their practices. However, considering that this may have occurred because the student was under time pressure, we have just added more work and a possible requirement to go and attend extra training. There’s an old saying from software development called Brooks’ Law:
“…adding manpower to a late software project makes it later.” (Brooks, The Mythical Man-Month, 1975)
In software, it’s generally because there is ramp-up time (the time required for people to become productive) and communication overhead (which grows roughly with the square of the number of people: n people have n(n-1)/2 possible communication channels, so going from 5 to 10 people takes you from 10 channels to 45). There is time required for every assignment that we set which effectively stands in for the ramp-up and, as plagiarising/cheating students have probably not done the requisite work before (or they could simply have completed the assignment), we have just added extra ramp-up into their lives for any re-issued assignments and/or any additional improvement training. We have also greatly increased the communication burden, because the communication between lecturers and peers has implicit context based on where we are in the semester. All of the student discussion (on-line or face-to-face) from points A to B will be based around the assignment work in that zone and all lecturing staff will also have that assignment in their heads. A significantly out-of-sequence assignment not only isolates the student from their community, it increases the level of context switching required by the staff, decreasing the amount of effective time that they have with the student and increasing the amount of wall-clock time. Once again, we have increased the potential burden on a student who, we suspect, is already acting this way because of over-burdening or poor time management!
Later stages in the policy increase the burden on students by either increasing the requirement to perform at a higher level, due to the reduction of available marks through giving a zero, or by removing an entire course from their progress and, if they wish to complete the degree, requiring them to overload or spend an additional semester (at least) to complete their degree.
My question here is, as always, are any of these outcomes actually going to stop the student from cheating, or do they risk increasing the likelihood of either the student cheating or the student dropping out? I completely agree with the principles and focus of our policy, and I also don’t believe that people should get marks for work that they haven’t done, but I don’t see how increasing burden is actually going to lead to the behaviour that we want. (Dan Pink on TED can tell you many interesting things about motivation, extrinsic factors and cognitive tasks, far more effectively than I can.)
This is, to many people, not an issue because this kind of policy is really treated as being punitive rather than remedial. There are some excellent parts in our policy that talk about helping students but, once we get beyond the misunderstanding, this language of support drops away and we head swiftly into the punitive, with the possibility of controlled resubmission. The problem, however, is that we have evidence that light punishment is interpreted as a licence to repeat the action, because it doesn’t discourage. This does not surprise me, because our current policy frames the whole thing as a risk/reward calculation. We have resorted to a punishment modality and, as a result, we have people looking at the punishments to optimise their behaviour rather than changing their behaviour to achieve our actual goals.
This policy is a strange beast, as there’s almost no way that I can take an action under the current approach without causing additional work for students at a time when it is their ability to handle pressure that is likely to have led them here. Even if it’s working, and it appears that it is, it does so by enforcing compliance rather than actually leading people to change the way that they think about their work.
My conjecture is that we cannot isolate the problems to just this policy. This spills over into our academic assessment policies, our staff training and our student support, and the key difference between teaching ethics and training students in ethical behaviour. There may not be a solution in this space that meets all of our requirements but if we are going to operate punitively then let us be honest about it and not over-burden the student with remedial work that they may not be supported for. If we are aiming for remediation then let us scaffold it properly. I think that our policy, as it stands, can actually support this but I’m not sure that I’ve seen the broad spread of policy and practice that is required to achieve this desirable, but incredibly challenging, goal of actually changing student behaviour because the students realise that it is detrimental to their learning.
Workshop report: ALTC Workshop “Assessing student learning against the Engineering Accreditation Competency Standards: A practical approach”, Part 2.
Posted: October 13, 2012 Filed under: Education | Tags: community, curriculum, education, educational problem, educational research, Generation Why, higher education, in the student's head, jeff froyd, learning, learning outcome, measurement, reflection, research, resources, student perspective, teaching, teaching approaches, thinking, tools, wageeh boles, workload

Continuing on from yesterday’s post, I was discussing the workshop that I went to and what I’d learned from it. I finished on the point that assessment of learning occurs when Lecturers:
- Use evidence of student learning
- to make judgements on student achievement
- against goals and standards
but we have so many other questions to ask at this stage. What were our initial learning objectives? What were we trying to achieve? The learning outcome is effectively a contract between educator and student, so we plan to achieve it, but how does it fit in the context of our accreditation and overall requirements? One of the things stressed in the workshop was that we need a range of assessment tasks to achieve our objectives:
- We need a wide variety
- These should be open-entry where students can begin the tasks from a range of previous learning levels and we cater for different learning preferences and interests
- They should be open-ended, where we don’t railroad the students towards a looming and monolithic single right answer, and multiple pathways or products are possible
- We should be building students’ capabilities by building on the standards
- Finally, we should provide space for student ownership and decision making.
Effectively, we need to be able to get to the solution in a variety of ways. If we straitjacket students into a fixed solution we risk stifling their ability to actually learn and, as I’ve mentioned before, we risk enforcing compliance to a doctrine rather than developing knowledgeable self-regulated learners. If we design these activities properly then we should find the result reduces student complaints about fairness or incorrect assumptions about their preparation. However, these sorts of changes take time and, a point so important that I’ll give it its own line:
You can’t expect to change all of your assessment in one semester!
The advice from Wageeh and Jeff was to focus on an aspect, monitor it, make your change, assess it, reflect and then extend what you’ve learned to other aspects. I like this because, of course, it sounds a lot like a methodical scientific approach to me. Because it is. As to which assessment methods you should choose, the presenters recognised that working out how to make a positive change to your assessment can be hard, so they suggested generating a set of alternative approaches and then picking one. They then introduced Prus and Johnson’s 1994 paper “A Critical Review of Student Assessment Options”, which provides twelve different assessment methods and their drawbacks and advantages. One of the best things about this paper is that there is no ‘must’ or ‘right’; there is always ‘plus’ and ‘minus’.
Want to mine archival data to look at student performance? As I've discussed before, archival data gives you detailed knowledge, but at a time when it's too late to do anything for that student or for a particular cohort in that class. Archival data analysis is, however, a fantastic tool for checking to see if your prerequisites are set correctly. Does the grade in this course correlate with grades in the prerequisites? Jeff mentioned a course where the results should have depended upon Physics and Maths but, while the students' Physics marks correlated with their final Statics marks, their Mathematics marks didn't. (A study at Baldwin-Wallace presented at SIGCSE 2012 asked the more general question: what are the actual dependencies if we carry out a Bayesian Network Analysis? I'm still meaning to do this for our courses as well.)
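If you want to try this sort of prerequisite sanity check yourself, a first pass can be as simple as a correlation over the archival marks. Here is a minimal sketch in Python; the file name and the column names are purely hypothetical, and this is a plain Pearson correlation rather than the Bayesian Network Analysis used in the Baldwin-Wallace study.

    import pandas as pd
    from scipy import stats

    # Hypothetical archival data: one row per student, one column per mark.
    marks = pd.read_csv("archival_marks.csv")

    for prereq in ("physics_mark", "maths_mark"):            # hypothetical columns
        paired = marks[[prereq, "statics_mark"]].dropna()    # drop incomplete records
        r, p = stats.pearsonr(paired[prereq], paired["statics_mark"])
        print(f"{prereq} vs statics_mark: r = {r:.2f} (p = {p:.3f})")

A strong correlation doesn't prove the prerequisite is doing its job, of course, but a missing one is a very good prompt to go and ask why.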
Other approaches, such as surveys, are quick and immediate but are entirely perceptual. Asking a student how they did on a quiz should never be used as their actual mark! The availability of time will also change the methods you choose. If you have a really big group then you can statistically sample to get an indication, but this makes your sampling design and your tolerance for possible error very important.
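To make that error trade-off concrete, here is a rough back-of-the-envelope sample-size sketch, using the standard normal-approximation formula with a finite population correction; the 5% margin and 95% confidence level are illustrative choices on my part, not recommendations.

    import math

    def sample_size(population, margin=0.05, z=1.96, p=0.5):
        """How many pieces of student work to sample to estimate a proportion
        (e.g. the fraction meeting a standard) within the chosen margin of error."""
        n0 = (z ** 2) * p * (1 - p) / (margin ** 2)           # infinite-population size
        return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite population correction

    for cohort in (50, 200, 1000):
        print(cohort, "students ->", sample_size(cohort), "samples")

The point is simply that the sample you can afford to mark carefully dictates the precision you can honestly claim.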
Jeff stressed that, in all of this assessment, it is essential never to give students an opportunity to gain marks in areas that are not the core focus. (Regular readers will know that this is one of my design and operational mantras: rewarding off-focus work encourages bad behaviour, by which I mean incorrect optimisation.)
There were so many other things covered in this workshop and, sadly, we only had three hours. I suggested that the next time it was run that they allow more time because I believe I could happily have spent a day going through this. And I would still have had questions.
We discussed the issue of subjectivity and objectivity and the distinction between setting and assessment. Any way that I set a multiple choice quiz is going to be subjective, because I will choose the questions based on my perception of the course and assessment requirements, but it is scored completely objectively.
We also discussed data collection, because there are so many options here. When will we collect the data? If we collect continuously, can we analyse and react continuously? What changes are we making in response? This is another important point:
If you collect data in order to determine which changes are to be made, tie your changes to your data driven reasons!
There's little point in saying "We collected all student submission data for three years and then we went to multiple choice questions" unless you can provide a reason from the data, which will both validate your effort in collection and give you a better basis for change. When do I need data to see if someone is clearing the bar? If they're not, what needs to be fixed? What do I, as a lecturer, need to collect during the process to see what needs fixing, rather than relying on the data we collect at the end to determine whether they've met the bar?
How do I, as a student, determine if I’m making progress along the way? Can I put all of the summative data onto one point? Can I evaluate everything on a two-hour final exam?
While I'm teaching the course, are the students making progress? Do they need something else? How do I (and should I) collect data throughout the course? A lot of what we actually collect is driven by the mechanisms that we already have. We need to work out what we actually require, and this means that we may need to work beyond the systems that we have.
Again, a very enjoyable workshop! It’s always nice to be able to talk to people and get some really useful suggestions for improvement.
Brief Stats Update: I appear to have written two more books
Posted: October 9, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, community, education, feedback, higher education, measurement, MIKE, reflection, SWEDE, teaching, teaching approaches, thinking, tools, work/life balance, workload 2 Comments
On May 6th, I congratulated Mark Guzdial on his 1000th post and I noted that I had written 102,136 words, an average of 676 words per post, with 151 posts over 126 days. I commented that, at that rate, I could expect to produce about 180,000 more words by the end of the year, for a total of about 280,000. So, to summarise, my average posting level was a rate of 1.2 posts per day and 676 words per post.
Today, I reanalysed the blog to see how I was going. This post will be published on Tuesday the 9th, my time, and the analysis here does not include itself. So, up until all activity on Monday the 8th, Central Australian Daylight Saving Time, here are the stats.
Total word count: 273,639. Total number of posts: 343. Number of words per post: 798. Number of posts per day: 1.23. I will reach my end of year projected word count in about 9 days.
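If you want to check the arithmetic, it's nothing more exciting than division. The sketch below just uses the figures quoted above; the "about 9 days" lines up if you assume roughly one post a day at the May average of 676 words per post, which is my guess at how that number was reached.

    total_words = 273_639
    total_posts = 343
    target_words = 280_000                      # end-of-year projection from May

    print(f"words per post: {total_words / total_posts:.0f}")        # ~798

    # Remaining distance to the projection, and how long that takes at roughly
    # one post a day and the May average of ~676 words per post (my assumption).
    remaining = target_words - total_words                            # 6,361 words
    print(f"remaining: {remaining} words, about {remaining / 676:.0f} more days")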
I knew that I had been writing longer posts; you may remember that I've deliberately tried to keep posts to around 1,000 words where possible, but it's obvious that I'm just not that capable of writing a short post! In the long term, I'd expect this to approach 1,000 words/post because of my goal to limit myself to that, with the occasional overshoot. I'm surprised by the consistency in the number of posts per day. The previous average was a smidgen under 1.2, so there has been only a minor increase. Given that my goal was not necessarily to hit exactly one post per day but to set aside time to think about learning and teaching every day, I'm happy with that.
The word count, however, is terrifying. One of the reasons that I wanted to talk about this is to identify how much work something like this is, not to over-inflate my own efforts or to put you off, but to help anyone out there who is considering such a venture. Let me explain some things first.
- I have been typing in one form or another since 1977. I was exposed to computers early on and, while I’ve never been trained to touch type, I have that nasty hybrid version where I don’t use all of my fingers but still don’t have to look at the keyboard.
- I can sustain a typing speed of about 2,500 words/hour for fiction for quite a long time. That includes the aspects of creativity required, not dictation or transcription. It is very tiring, however, and too much of it makes me amusingly incoherent.
- I do not have any problems with repetitive strain injury and I have a couple of excellent working spaces with fast computers and big screens.
- I love to write.
So, I’m starting from a good basis and, let me stress, I love to write. Now let me tell you about the problems that this project has revealed.
- I produce two kinds of posts: research-focused and the more anecdotal. Anecdotal posts can be written up quickly, but the moment any research, pre-reading or reformulation is required, it will take me an hour or two to get a post together. So that cute high-speed production rate drops to about 500-1,000 words/hour.
- Research posts are the result of hours of reading and quite a lot of associated thought. My best posts start from a set of papers that I read; I then mull on them for a few days and finally it all comes together. I often ask someone else to look at the work to see how it sits in the queue.
- I’m always better when I don’t have to produce something for tomorrow. When the post queue is dry, I don’t have the time to read in detail or mull so I have to either pull a previous draft from the queue and see if I can fix it (and I’ve pretty much run out of those) or I have to come up with an idea now and write it now. All too often, these end up being relatively empty opinion pieces.
- If you are already tired, writing can be very tiring and you lose a lot of the fiero and inspiration from writing a good post.
I have probably spent, by all of these figures and time estimates, somewhere around 274 hours on this project. That’s just under 7 working weeks at 40 hours/week. No wonder I feel tired sometimes!
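In case you're wondering where 274 comes from, my assumption is the slow end of the production rates above, roughly 1,000 words per hour once research and reformulation are included, which lines up almost exactly with the total word count.

    # My guess at the arithmetic behind the ~274-hour estimate: the total word
    # count divided by the ~1,000 words/hour rate of a research-heavy post.
    total_words = 273_639
    words_per_hour = 1_000          # slow end of the rates quoted above (assumption)

    hours = total_words / words_per_hour
    print(f"roughly {hours:.0f} hours, or {hours / 40:.1f} working weeks at 40 h/week")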
I am already, as you know, looking to change the posting frequency next year because I wish to focus on the quality of my work rather than the volume of my output. I still plan to have that hour or so put aside every day to contemplate and carry out research on learning and teaching but it will no longer be tied to an associated posting deadline. My original plan had an output requirement to force me to carry out the work. Unsurprisingly, oh brave new world that has such extrinsic motivating factors in it, I have become focused on the post, rather than the underlying research. My word count indicates that I am writing but, once this year is over, the review that I carry out will be to make sure that every word written from that point on is both valuable and necessary. My satisfaction in the contribution and utility of those posts I do make will replace any other quantitative measures of output.
My experience in this can be summarised quite simply. Setting a posting schedule that is too restrictive risks putting the emphasis on the wrong component, whereas setting aside a regular time to study and contemplate the issues that lead to a good post is a far wiser investment. If you want to write this much then it cannot be too much of a chore and, honestly, loving writing is almost essential, I feel. Fortunately, I have more than enough to keep the post queue going to the end of the year, as I'm working on a number of papers and ideas that will naturally end up here, but I feel that I have, very much, achieved what I originally set out to do. I now deeply value the scholarship of learning and teaching and have learned enough to know that I have a great deal more to learn.
From a personal perspective, I believe that all of the words written have been valuable to me but, from next year, I have to make sure that the words I write are equally valuable to other people.
I'll finish with something amusing. Someone asked me the other day how many words I'd written and, off the top of my head, I said "about 140,000", thinking that I was possibly over-claiming. It never would have occurred to me that I was under-claiming by almost a factor of two, nor that I had written more words than can be found in Order of the Phoenix. While I may wish to reclaim my reading time once this is over, for any fiction publishers reading this, I will have some free time next year! 🙂