I won’t be giving detailed comments on all sessions – firstly, I can’t attend everything and, secondly, I don’t want you all to die of word poisoning – but I’ve been to a number of talks and thought I’d discuss those here that really made me think. (My apologies for the delay. I seem to be coming down with a cold/flu and it’s slowing me down.)
In Session 1, I went to a talk entitled “Integrating teaching, learning, support and wellbeing in Universities”, presented by Dr Helen Stallman from the University of Queensland. The core of this talk was that, if we want to support our students academically, we have to support them in every other way as well. The more distressed students are, the less well they do academically. If we want good outcomes, we have to be able to support students’ wellbeing and mental health. We already provide counselling and support skill workshops, but very few students access these resources until they actually need them.
This is a problem. Tell a student at the start of the course, when they are fine, where they can find help and they won’t remember it when they actually need that resource. We have low participation in many of the counselling and support skill workshop activities – it is not on the student’s agenda to go to one of these courses; their agenda is to get a good mark. Under time pressure and competing demands, anything ‘optional’ is not a priority.
The student needs to identify that they have a problem, then they have to be able to find the solution! Many University webpages are not actually useful in this regard, although they contain a lot of marketing information on the front page.
What if we have an at-risk profile that we can use to identify students? It’s not 100% accurate. Students who fit the ‘at risk’ profile may not have problems, but students who don’t fit it may still have problems! We don’t necessarily know what’s going on with our students. Where we have hundreds of students, how can we know all of them? (This is one of the big drivers for my work in submission management and elastic time – identifying students who are at risk as soon as they may be at risk.)
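To make the ‘at-risk profile’ idea concrete, here is a minimal sketch of the kind of rule-based flag a submission-management system might raise. Every signal name and threshold here is a hypothetical illustration, not a validated instrument – and, as above, it will produce both false positives and false negatives, so a flag should only ever trigger an offer of support, never a judgement.

```python
from dataclasses import dataclass

@dataclass
class StudentActivity:
    """Hypothetical per-student signals a submission system might track."""
    late_submissions: int    # assignments submitted after the deadline
    missed_submissions: int  # assignments never submitted at all
    days_since_login: int    # days since last activity in the LMS

def flag_at_risk(activity: StudentActivity) -> bool:
    """Flag a student for a check-in if any early-warning signal fires.

    The thresholds are illustrative guesses: the point of the profile
    is early contact, so we accept false positives and expect false
    negatives -- a flag means "offer help", nothing more.
    """
    return (
        activity.late_submissions >= 2
        or activity.missed_submissions >= 1
        or activity.days_since_login >= 14
    )

print(flag_at_risk(StudentActivity(0, 0, 3)))   # engaged student: False
print(flag_at_risk(StudentActivity(3, 0, 1)))   # repeated lateness: True
```

The value of even a crude rule like this is timing: it surfaces the student while the support is still relevant, which is exactly the window the talk argues we currently miss.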
So let me reiterate the problem with the timing of information: we tend to mention support services once, at the start. People don’t access resources unless those resources are relevant and useful at that particular time. Talk to people when they don’t have a problem and they’ll forget it.
So what are the characteristics of interventions that promote student success:
- Inclusive of all students (and you can find it)
- Encourages self-management skills (Don’t smother them! Our goal is not dependency, it’s self-regulation)
- Promotes academic achievement (highest potential for each of our students)
- Promotes wellbeing (not just professional capabilities but personal capabilities and competencies)
- Minimally sufficient (students/academics/unis are not doing more work than they need to, and only providing the level of input that is required to achieve this goal.)
- Sustainable (easy for students and academics)
Dr Stallman then talked about two tools – the Learning Thermometer and The Desk. The Learning Thermometer combines student reflection with a system interface; automated and personalised student feedback, entered by the academic, is then added. Support and intervention are delivered on the web, as a loop around student feedback. Student privacy is maintained and the student gets to choose the intervention that is appropriate. Effectively, the Learning Thermometer tells the student which services are available, as and when they are needed, based on their results, their feedback and the lecturer’s input.
This is designed to promote self-management skills and makes the student think “What can I do? What are the things that I can do?” It gives students knowledge of which resources they can access (this resource is called “The Desk”) and of the people who can help them.
What is being asked is: What are the issues that get in the way of achieving academic success?
About “The Desk”: it contains quizzes, related to all parts of The Desk, that give students personalised feedback with module suggestions as appropriate. There is a summary sheet of what you’ve done so you can always refer back to it, a Tools section with short tips on how to fix things, and a Coffee House social media centre to share information and pictures (recipes and anything, really).
To allow teachers to work out what is going on, an addition to the Learning Thermometer can give the teacher feedback based on student reflection and the interface. Early feedback to academics allows us to improve learning outcomes, and these improvements flow into teaching practices. (Student satisfaction correlates poorly with final mark; this is more than satisfaction.)
The final items in the talk focussed on:
- A universal model of prevention
- All students can be resilient
- Resources need to be timely, relevant and useful
- Multiple access points
- Integrated within the learning environment
What are the implications?
- Focus on prevention
- Close the loop between learning, teaching, wellbeing and support
- More resilient students
- Better student graduate outcomes.
Overall a very interesting talk, with a lot of things to think about. How can I position my support resources so that students know where to go as and when they need them? Is ‘resiliency’ an implicit or explicit goal inside my outcomes and syllabus structure? Do the mechanisms that I provide for assessment work within this framework?
With my Time Banking hat on, I am always thinking about how I can be fair but flexible, consistent but compassionate, and maintain quality while maintaining humanity. This talk is yet more information to consider as I look at alternative ways to work with students for their own benefit, while improving their performance at the same time.
Contact details and information on tools discussed:
Take a look at this picture.
One thing you might have noticed, if you’ve looked carefully, is that this man appears to have had some reconstructive surgery on the right side of his face and there is a colour difference, which is slightly accentuated by the lack of beard stubble. What if I were to tell you that this man was offered the chance to have fake stubble tattooed onto that section and, when he declined because he felt strange about it, received more pressure and, in his words, more of a guilt trip than for any other procedure during the extensive time he spent in hospital receiving skin grafts and burn treatments? Why was the doctor pressuring him?
Because he had already performed the tattooing remediation on two people and needed a third for the paper. In Dan’s words, again, the doctor was a fantastic physician, thoughtful, and he cared, but he had a conflict of interest that moved him to a different mode of behaviour. For me, I had to look a couple of times because the asymmetry that the doctor referred to is not that apparent at first glance. Yet the doctor felt compelled, by interests that were not Dan’s, to make Dan self-conscious about the perceived problem.
A friend on Facebook (thanks, Bill!) posted a link to an excellent article in Wired, entitled “Why We Lie, Cheat, Go to Prison and Eat Chocolate Cake” by Dan Ariely, the man pictured above. Dan is a professor of behavioural economics and psychology at Duke and his new book explores the reasons that we lie to each other. I was interested in this because I’m always looking for explanations of student behaviour and I want to understand their motivations. I know that my students will rationalise and do some strange things but, if I’m forewarned, maybe I can construct activities and courses in a way that heads this off at the pass.
There were several points of interest to me. The first was the question whether a cost/benefit analysis of dishonesty – do something bad, go to prison – actually has the effect that we intend. As Ariely points out, if you talk to the people who got caught, the long-term outcome of their actions was never something that they thought about. He also discusses the notion of someone taking small steps, a little each time, that move them from law abiding, for want of a better word, to dishonest. Rather than set out to do bad things in one giant leap, people tend to take small steps, rationalising each one, and after each step opening up a range of darker and darker options.
Welcome to the slippery slope – beloved argument of rubicund conservative politicians since time immemorial. Except that, in this case, it appears that the slope is piecewise composed of tiny little steps. Yes, each step requires a decision, so there isn’t the momentum that we commonly associate with the slope, but each step takes you, in some sense, further and further from the honest place from which you started.
Ariely discusses an experiment where he gave two groups designer sunglasses, told one group that they had the real thing and the other that they had fakes, and then asked them to complete a test that gave them a chance to cheat. The people who had been randomly assigned to the ‘fake sunglasses’ group cheated more than the others. Now there are many possible reasons for this. One of them – Ariely’s argument – is that if you know you are signalling your status deceptively to the world, you are in a mindset where you have already taken a step towards dishonesty, so cheating a little more is an easier step. I can see other interpretations, because the cheating here was in reporting how many questions you completed on the test: self-esteem issues caused by being in the ‘fake’ group may lead you to over-promote your success on the quiz – but it’s still cheating. Ultimately, whatever is motivating people to take that step, the step appears to be easier if you are already inside the dishonest space, even to a degree.
[Note: Previous paragraph was edited slightly after initial publication due to terrible auto-correcting slipping by me. Thanks, Gary!]
Where does something like copying software or illicitly downloading music come into this? Does this constant reminder of your small, well-rationalised, step into low-level lawlessness have any impact on the other decisions that you make? It’s an interesting question because, according to the outline in Ariely’s sunglasses experiment, we would expect it to be more of a problem if the products became part of your projected image. We know that having developed a systematic technological solution for downloading is the first hurdle in terms of achieving downloads, but is it also the first hurdle in making steadily less legitimate decisions? I actually have no idea but would be very interested to see some research in this area. I feel it’s too glib to assume a relationship, because it is such a ‘slippery slope’ argument, but Ariely’s work now makes me wonder. Is it possible that, after downloading enough music or software, you could actually rationalise the theft of a car? Especially if you were only ‘borrowing’ it? (Personally, I doubt it because I think that there are several steps in between.) I don’t have a stake in this fight – I have a personal code for behaviour in this sphere that I can live with – but I see some benefits in asking and trying to answer these questions from something other than personal experience.
Returning to the article, of particular interest to me was the discussion of an honour code, such as Princeton’s, where students sign a pledge. Ariely sees its benefit as a reminder that stays active for some time but, ultimately, one that would have little value over several years because, as we’ve already discussed, people rationalise in small increments over the short term rather than constructing long-term models where the pledge would make a difference. Sign a pledge at the start of 2012 and it may have no impact on you by the middle of 2012, let alone at the end of 2015 when you’re trying to graduate. Potentially, at almost any cost.
In terms of ongoing reminders, and a signature on a piece of work saying (in effect) “I didn’t cheat”, Ariely asks what happens if you have to sign the honour clause after you’ve finished a test – well, if you’ve finished, any cheating has already occurred, so the honour clause is useless. If you remind people at the start of every assignment and every test, and get them to pledge at the beginning, then this should have an impact – a halo effect to an extent, or a reminder of expectation that will make it harder for you to rationalise your dishonesty.
In our school we have an electronic submission system that students are required to use to submit their assignments. It has boilerplate ‘anti-plagiarism’ text and you must accept the conditions to submit. However, this is your final act before submission, and you have already finished the code, which falls immediately into the trap mentioned in the previous paragraph. Dan Ariely’s answers have made me think about how we can change this to make it more of an upfront reminder, rather than an ‘after the fact – oh, it may be too late now’ auto-accept at the end of the activity. And, yes, reminder structures and behaviour modifiers in time banking are also being reviewed and added in the light of these new ideas.
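As a sketch of the change I have in mind: move the pledge from the submission step to the moment the assignment is first opened, so the reminder arrives before any work (or rationalising) has been done. The class and method names below are hypothetical – this is the shape of the idea, not our actual submission system.

```python
class Assignment:
    """Toy model of a submission workflow with an up-front integrity pledge."""

    def __init__(self, name: str):
        self.name = name
        self.pledged = set()    # students who acknowledged the pledge up front
        self.submissions = {}   # student -> submitted work

    def open_assignment(self, student: str) -> str:
        """Shown on first access, *before* any work is done.

        This is where Ariely's timing argument bites: the pledge lands
        while decisions can still be influenced.
        """
        self.pledged.add(student)
        return f"{student} pledges to submit only their own work on {self.name}."

    def submit(self, student: str, work: str) -> bool:
        # Refuse submission unless the pledge was made before working,
        # instead of auto-accepting boilerplate at the very last step.
        if student not in self.pledged:
            return False
        self.submissions[student] = work
        return True
```

The behavioural content is entirely in the ordering: the same text, acknowledged before the work instead of after it, is a reminder rather than a formality.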
The Wired Q&A is very interesting and covers a lot of ground but, realistically, I think I have to go and buy Dan Ariely’s book(s), prepare myself for some harsh reflection and thought, and plan for a long weekend of reading.
In yesterday’s post, I laid out an evaluation scheme that allocated the work of evaluation based on the way that we tend to teach and the availability, and expertise, of those who will be evaluating the work. My “top” (arbitrary word) tier of evaluators, the E1s, were the teaching staff who had the subject matter expertise and the pedagogical knowledge to create all of the other evaluation materials. Despite the production of all of these materials and designs already being time-consuming, in many cases we push all evaluation to this person as well. Teachers around the world know exactly what I’m talking about here.
Our problem is time. We move through it, tick after tick, in one direction and we can neither go backwards nor decrease the number of seconds it takes to perform what has to take a minute. If we ask educators to undertake good learning design, have engaging and interesting assignments, work on assessment levels well up in the taxonomies and we then ask them to spend day after day standing in front of a class and add marking on top?
Forget it. We know that we are going to sacrifice the number of tasks, the quality of the tasks or our own quality of life. (I’ve written a lot about time before, you can search my blog for time or read this, which is a good summary.) If our design was good, then sacrificing the number of tasks or their quality is going to compromise our design. If we stop getting sleep or seeing our families, our work is going to suffer and now our design is compromised by our inability to perform to our actual level of expertise!
When Henry Ford refused to work his assembly line workers beyond 40 hours because of the increased costs of mistakes in what were simple, mechanical, tasks, why do we keep insisting that complex, delicate, fragile and overwhelmingly cognitive activities benefit from us being tired, caffeine-propped, short-tempered zombies?
We’re not being honest. And thus we are not meeting our requirement for truth. A design that gets mangled for operational reasons without good redesign won’t achieve our outcomes. That’s not going to achieve our results – so that’s not good. But what of beauty?
What are the aesthetics of good work? In Petts’ essay on the Arts and Crafts movement, he speaks of William Morris, Dewey and Marx (it’s a delightful essay) and ties the notion of good work to work that is authentic, where such work has aesthetic consequences (unsurprisingly given that we were aiming for beauty), and that good (beautiful) work can be the result of human design if not directly the human hand. Petts makes an interesting statement, which I’m not sure Morris would let pass un-challenged. (But, of course, I like it.)
It is not only the work of the human hand that is visible in art but of human design. In beautiful machine-made objects we still can see the work of the “abstract artist”: such an individual controls his labor and tools as much as the handicraftsman beloved of Ruskin.
Jeffrey Petts, Good Work and Aesthetic Education: William Morris, the Arts and Crafts Movement, and Beyond, The Journal of Aesthetic Education, Vol. 42, No. 1 (Spring, 2008), page 36
Petts notes that it is interesting that Dewey’s own reflection on art does not acknowledge Morris, especially when the Arts and Crafts’ focus on authenticity, necessary work and a dedication to vision seems to be a very suitable framework. As well, the Arts and Crafts movement focused on the rejection of the industrial and a return to traditional crafting techniques, including social reform, which should have resonated deeply with Dewey and his peers in the Pragmatists. However, Morris’ contribution as a Pragmatist aesthetic philosopher does not seem to be recognised and, to me, this speaks volumes of the unnecessary separation between cloister and loom, when theory can live in the pragmatic world and forms of practice can be well integrated into the notional abstract. (Through an Arts and Crafts lens, I would argue that there are large differences between industrialised education and the provision, support and development of education using the advantages of technology but that is, very much, another long series of posts, involving both David Bowie and Gary Numan.)
But here is beauty. The educational designer who carries out good design and manages to hold on to enough of her time resources to execute the design well is more aesthetically pleasing in terms of any notion of creative good works. By going through a development process to stage evaluations, based on our assessment and learning environment plans, we have created “made objects” that reflect our intention and, if authentic, then they must be beautiful.
We now have a strong motivating factor to consider both the often over-looked design role of the educator as well as the (easier to perceive) roles of evaluation and intervention.
I’ve revisited the diagram from yesterday’s post to show the different roles during the execution of the course. Now you can clearly see that the course lecturer maintains involvement and, from our discussion above, is still actively contributing to the overall beauty of the course and, we would hope, its success as a learning activity. What I haven’t shown is the role of the E1 as designer prior to the course itself – but that’s another post.
Even where we are using mechanical or scripted human markers, the hand of the designer is still firmly on the tiller and it is that control that allows us to take a less active role in direct evaluation, while still achieving our goals.
Do I need to personally look at each of the many works all of my first years produce? In our biggest years, we had over 400 students! It is beyond the scale of one person and, much as I’d love to have 40 expert academics for that course, a surplus of E1 teaching staff is unlikely anytime soon. However, if I design the course correctly and I continue to monitor and evaluate the course, then the monster of scale that I have can be defeated, if I can make a successful argument that the E2 to E4 marker tiers are going to provide the levels of feedback, encouragement and detailed evaluation that are required at these large-scale years.
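As a rough sketch of how that argument might look in practice, here is a toy allocation that spreads marking across the E2–E4 tiers while reserving a small sample for the E1 to spot-check, keeping the designer’s hand on the tiller. The round-robin split and the 5% sampling rate are illustrative assumptions of mine, not a tested policy.

```python
import random

def allocate_marking(submissions, markers, spot_check_rate=0.05, seed=0):
    """Distribute submissions round-robin across E2-E4 markers, while the
    E1 (course designer) spot-checks a random sample for consistency.

    All parameters are illustrative: a real scheme would weight by marker
    experience and target the sample at borderline or unusual work.
    """
    rng = random.Random(seed)
    allocation = {m: [] for m in markers}
    spot_checks = []   # submissions the E1 reviews personally
    for i, sub in enumerate(submissions):
        allocation[markers[i % len(markers)]].append(sub)
        if rng.random() < spot_check_rate:
            spot_checks.append(sub)
    return allocation, spot_checks

# 400 students, four non-E1 markers: each marker gets a tractable 100 scripts,
# and the E1 sees roughly 20 -- monitoring the course rather than drowning in it.
subs = [f"student_{i:03d}" for i in range(400)]
allocation, checks = allocate_marking(subs, ["E2_a", "E2_b", "E3_a", "E4_a"])
```

The point of the sketch is the division of labour, not the numbers: the E1’s design and ongoing sampling are what justify handing the bulk of the evaluation to the other tiers.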
Tomorrow, we look at the details of this as it applies to a first-year programming course in the Processing language, using a media computation approach.
I’ve written before about the issues of prolonged human workload leading to ethical problems and the fact that working more than 40 hours a week on a regular basis is downright unproductive because you get less efficient and error-prone. This is not some 1968 French student revolutionary musing on what benefits the soul of a true human, this is industrial research by Henry Ford and the U.S. Army, neither of whom could be classified as Foucault-worshipping Situationist yurt-dwelling flower children, that shows that there are limits to how long you can work in a sustained weekly pattern and get useful things done, while maintaining your awareness of the world around you.
The myth won’t die, sadly, because physical presence and hours attending work are very easy to measure, while productive outputs and their origins in a useful process on a personal or group basis are much harder to measure. A cynic might note that the people who are around when there is credit to take may end up being the people who (reluctantly, of course) take the credit. But we know that it’s rubbish. And the people who’ve confirmed this are both philosophers and the commercial sector. One day, perhaps.
But anyone who has studied cognitive load issues, the way that the human thinking processes perform as they work and are stressed, will be aware that we have a finite amount of working memory. We can really only track so many things at one time and when we exceed that, we get issues like the helmet fire that I refer to in the first linked piece, where you can’t perform any task efficiently and you lose track of where you are.
So what about multi-tasking?
Ready for this?
There’s a ton of research on this but I’m going to link you to a recent article by Daniel Levitin in the Guardian Q&A. The article covers the fact that what we are really doing is switching quickly from one task to another, dumping one set of information from working memory and loading in another, which of course means that working on two things at once is less efficient than doing two things one after the other.
But it’s more poisonous than that. The sensation of multi-tasking is actually quite rewarding as we get a regular burst of the “oooh, shiny” rewards our brain gives us for finding something new and we enter a heightened state of task readiness (fight or flight) that also can make us feel, for want of a better word, more alive. But we’re burning up the brain’s fuel at a fearsome rate to be less efficient so we’re going to tire more quickly.
Get the idea? Multi-tasking is horribly inefficient task switching that feels good but makes us tired faster and does things less well. But when we achieve tiny tasks in this death spiral of activity, like replying to an e-mail, we get a burst of reward hormones. So if your multi-tasking includes something like checking e-mails when they come in, you’re going to get more and more distracted by that, to the detriment of every other task. But you’re going to keep doing them because multi-tasking.
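A toy model makes the arithmetic of this visible. Assume each task needs a fixed number of units of focused work, and that every change of task costs a working-memory ‘reload’. The numbers are invented; only the shape of the comparison matters.

```python
def time_to_finish(tasks, switch_cost=0.0, interleave=False):
    """Toy cost model of task switching.

    `tasks[i]` is the units of focused work task i needs. Sequential work
    pays no overhead; interleaving does one unit of each unfinished task
    in turn, paying `switch_cost` every time the active task changes
    (the working-memory dump-and-reload described above).
    """
    if not interleave:
        return sum(tasks)   # one task at a time, no reload penalty
    remaining = list(tasks)
    total = 0.0
    last = None
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                if last is not None and last != i:
                    total += switch_cost   # reload on every task change
                total += 1                 # one unit of actual work
                last = i
    return total

print(time_to_finish([5, 5, 5]))                                    # 15
print(time_to_finish([5, 5, 5], switch_cost=0.5, interleave=True))  # 22.0
```

Even with a modest reload cost, the interleaved version is nearly 50% slower for exactly the same amount of real work – and this model is generous, since it ignores the error rate and the sabotaged long-term storage.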
I regularly get told, by parents, that their children are able to multi-task really well. They can do X, watch TV, do Y and it’s amazing. Well, your children are my students and everything I’ve seen confirms what the research tells me – no, they can’t but they can give a convincing impression when asked. When you dig into what gets produced, it’s a different story. If someone sits down and does the work as a single task, it will take them a shorter time and they will do a better job than if they juggle five things. The five things will take more than five times as long (up to 10, which really blows out time estimation) and will not be done as well, nor will the students learn about the work in the right way. (You can actually sabotage long term storage by multi-tasking in the wrong way.) The most successful study groups around the Uni are small, focused groups that stay on one task until it’s done and then move on. The ones with music and no focus will be sitting there for hours after the others are gone. Fun? Yes. Efficient? No. And most of my students need to be at least reasonably efficient to get everything done. Have some fun but try to get all the work done too – it’s educational, I hear. 🙂
It’s really not a surprise that we haven’t changed humanity in one or two generations. Our brains are just not built in a way that can (yet) provide assistance with the quite large amount of work required to perform multi-tasking.
We can handle multiple tasks, no doubt at all, but we’ve just got to make sure, for our own well-being and overall ability to complete the task, that we don’t fall into the attractive, but deceptive, trap that we are some sort of parallel supercomputer.
This is going to be longer than usual but these thoughts have been running around in my mind for a while and, rather than break them up, I thought I’d put them all together here. My apologies for the long read but, to help you, here’s the executive summary. Firstly, we’re not going to get anywhere until all of us truly accept that University students are not some sort of different species but that they are actually junior versions of ourselves – not inferior, just less advanced. Secondly, education is heavily colonising but what we often tend to pass on to our students are mechanisms for conformity rather than the important aspects of knowledge, creativity and confidence.
Let me start with some background and look at the primary and secondary schooling system. There is what we often refer to as traditional education: classroom full of students sitting in rows, writing down the words spoken by the person at the front. Assignments test your ability to learn and repeat the words and apply this in well-defined ways to a set of problems. Then we have progressive education that, depending upon your socio-political alignment and philosophical bent, is either a way of engaging students and teachers in the process for better outcomes, more critical thought and a higher degree of creativity; or it is cats and dogs lying down together, panic in the streets, a descent into radicalism and anarchy. (There is, of course, a middle ground, where the cats and dogs sleep in different spots, in rows, but engage in discussions of Foucault.) Dewey wrote on the tension between these two apparatuses (seriously, is there anything he didn’t write on?) but, as we know, he was highly opposed to the lining up of students in ranks, like some sort of prison, so let’s examine why.
Simply put, the traditional model is an excellent way to prepare students for factory work but it’s not a great way to prepare them for a job that requires independence or creativity. You sit at your desk, the teacher reads out the instructions, you copy down the instructions, you are assigned piece work to do, you follow the instructions, your work is assessed to determine if it is acceptable, if not, you may have to redo it or it is just rejected. If enough of your work is deemed acceptable, then you are now a successful widget and may take your place in the community. Of course, it will help if your job is very similar to this. However, if your deviation from the norm is towards the unacceptable side then you may not be able to graduate until you conform.
Now, you might be able to argue this on accuracy, were it not for the constraining behavioural overtones in all of this. It’s not about doing the work, it’s about doing the work, quietly, while sitting for long stretches, without complaint and then handing back work that you had no part in defining for someone else to tell you what is acceptable. A pure model of this form cripples independence because there is no scope for independent creation as it must, by definition, deviate and thus be unacceptable.
Progressive models change this. They break up the structure of the classroom, change the way that work is assigned and, in many cases, change the power relationship between student and teacher. The teacher is still authoritative in terms of information but can potentially handle some (controlled for societal reasons) deviation and creativity from their student groups.
The great sad truth of University is that we have a lot more ability to be progressive because we don’t have to worry about too many severe behavioural issues: there is enough traditional education going on below these levels (or too few management resources for children in need) that it is highly unlikely that students with severe behavioural issues will graduate from high school, let alone make it to University with the requisite grades.
But let’s return to the term ‘colonising’, because it is a loaded term. We colonise when we send a group of settlers to a new place and attempt to assert control over it; often implicit in this is the notion that the place we have colonised is now for our own use. Ultimately, those being colonised can fight or they can assimilate. The most likely outcome if the original inhabitants fight is that they are destroyed, if those colonising are technologically superior or greatly outnumber them. Far more likely, and as seen all around the world, is the requirement for the original inhabitants to be assimilated into the now dominant colonist culture. Under assimilation, original cultures shrink to accommodate new rules, requirements, and taboos from the colonists.
In the case of education, students come to a University in order to obtain the benefits of the University culture so they are seeking to be colonised by the rules and values of the University. But it’s very important to realise that any positive colonisation value (and this is a very rare case, it’s worth noting) comes with a large number of negatives. If students come from a non-Western pedagogical tradition, then many requirements at Universities in Australia, the UK and America will be at odds with the way that they have learned previously, whether it’s power distances, collectivism/individualism issues or even in the way that work is going to be assigned and assessed. If students come from a highly traditional educational background, then they will struggle if we break up the desks and expect them to be independent and creative. Their previous experiences define their educational culture and we would expect the same tensions between colonist and coloniser as we would see in any encounter in the past.
I recently purchased a game called “Dog Eat Dog”, which is designed to allow you to explore the difficult power dynamics of the colonist/colonised relationship in the Pacific. Liam Burke, the author, is a second-generation half-Filipino who grew up in Hawaii, and he developed the game while thinking about his experiences growing up and drawing on other resources from the local Filipino community.
The game is very simple. You have a number of players. One will play the colonist forces (all of them). Each other player will play a native. How do you select the colonist? Well, it’s a simple question: Which player at the table is the richest?
As you can tell, the game starts in uncomfortable territory and, from that point on, it can be very challenging as the native players will try to run small scenarios that the colonist will continually interrupt, redirect and adjudicate to see how well the natives are playing by the colonist’s rules. And the first rule is:
The (Native people) are inferior to the (Occupation people).
After every scenario, more rules are added and the native population can either conform (for which they are rewarded) or deviate (for which they are punished). It actually lies within the colonist’s power to kill all the natives in the first turn, should they wish to do so – this happened often enough that Burke left it in the rules. At the end of the game, the colonists may be rebuffed but, in order to do that, the natives have become adept at following the rules and this is, of course, at the expense of their own culture.
This is a difficult game to explain in the short form but the PDF is only $10 and I think it’s an important read for just about anyone. It’s a short rule book, with a quick history of Pacific settlement and exemplars, produced from a successful Kickstarter.
Let’s move this into the educational sphere. I wish I could say otherwise but, let’s be honest, our entire system is often built upon the premise that:
The students are inferior to the teachers.
Let’s play this out in a traditional model. Every time the students get together in order to do anything, we are there to assess how well they are following the rules. If they behave, they get grades (progress towards graduation). If they don’t conform, then they don’t progress and, because everyone has finite resources, eventually they will drop out, possibly doing something disastrous in the process. (In the original game, the native population can run amok if they are punished too much, which has far too many unpleasant historical precedents.) Every time that we have an encounter with the students, they come away with a new rule about how to avoid making the same mistake again, and this new rule is one that they’re judged against.
When I realised how close a parallel this was, a very cold shiver went down my spine. But I also realised how much I’d been doing to break out of this system, by treating students as equals with mutual respect, by listening and trying to be more flexible, by interpreting a more rigid pedagogical structure through filters that met everyone’s requirements. But unless I change the system, I am merely one of the “good” overseers on a penal plantation. When the students leave my care, if I know they are being treated badly, I am still culpable.
As I said at the start, valuing knowledge, accuracy, productivity (in an academic sense), curiosity and creativity are all things that we should be passing on from our culture, but these are very hard things to pass on through a punishment/reward modality as they are all cognitive in nature. What is far easier to pass on is a culture of sitting silently, being bound by late penalties, conforming to the rules and the worst excesses of the Banking model of education (after Freire), where students are empty receiving objects that we, as teachers, fill up. There is no agency in such a model, nor room for creativity. The jug does not choose the liquid that fills it.
It is easy to see examples all around us of the disrespect levelled at colonised peoples, from the mindless (and well-repudiated) nonsense spouted in Australian newspapers about Aboriginal people to the racist stereotyping that persists despite the overwhelming evidence of equality between races and genders. It is just as easy to see how badly students can be treated by some staff. When we write off a group of students because they are ‘bad students’, we have made them part of a group that we don’t respect – and this empowers us to not treat them as well as we treat ourselves.
We have to start from the basic premise that our students are at University because they want to be like us, but like the admirable parts of us, not the conformist, factory model, industrial revolution prison aspects. They are junior lawyers, young engineers, apprentice architects when they come to us – they do not have to prove their humanity in order to be treated with respect. However, this does have to be mutual and it’s important to reflect upon the role that we have as a mentor, someone who has greater knowledge in an area and can share it with a more junior associate to bring them up to the same level one day.
If we regard students as being worthy of respect, as being potential peers, then we are more likely to treat them with a respect that engenders a reciprocal relationship. Treat your students like idiots and we all know how that goes.
The colonial mindset is poisonous because of its inherent assumption of superiority and because it values conformity to imposed rules above the potential to be gained from incorporating new and useful aspects of other cultures. There are many positive aspects of University culture, but they can happily coexist with other educational traditions and cultures – the New Zealand higher education system is making great strides in this direction, seeking to respect both Maori tradition and the desire of young people to work in a westernised society without compromising their traditions.
We have to start from the premise that all people are equal, because to do otherwise is to make people unequal. We then must regard our students as ourselves, just younger, less experienced and confused only slightly more often than we were at that age. We must carefully examine how we expose students to our important cultural aspects and decide what is and what is not important. However, if all we turn out at the end of a 3-4 year degree is someone who can perform a better class of piece work and is so heavily intimidated into conformity that they cannot do anything else, then we have failed our students and ourselves.
The game I mentioned, “Dog Eat Dog”, starts with a quote by R. Zamora Linmark from his poem “They Like You Because You Eat Dog”. Linmark is a Filipino American poet, novelist and playwright who was educated in Honolulu. His challenging poem talks about the ways that second-class citizens are racially classified with positive and negative aspects (the exoticism is balanced against a ‘brutish’ sexuality, for example) but finishes with something that is even more challenging. Even when a native population fully assimilates, it is never enough for the coloniser, because they are still not quite them.
“They like you because you’re a copycat, want to be just like them. They like you because—give it a few more years—you’ll be just like them.
And when that time comes, will they like you more?”
R. Zamora Linmark, “They Like You Because You Eat Dog”, from “Rolling the R’s”
I had a discussion once with a remote colleague who said that he was worried the graduates of his own institution weren’t his first choice to supervise for PhDs as they weren’t good enough. I wonder whose fault he thought that was?