The 1-Year Degree – what’s your reaction?

I’m going to pose a question and I’d be interested in your reaction.

“Is there a structure and delivery mechanism that could produce a competent professional graduate from a degree course such as engineering or computer science, which takes place over a maximum of 12 months including all assessment, without sacrificing quality or content?”

What was your reaction? More importantly, what is the reasoning behind your reaction?

For what it’s worth, my answer is “Not with our current structures but, apart from that, maybe,” which is why one of my side projects is an attempt to fit an entire degree’s worth of work into a 12-month span, as a practice exercise for discussing the second- and third-year curriculum review that we’re holding later this year.

Our ‘standard’ estimate for any normal degree program is that a student is expected to have a per-semester load of four courses (at 3 units a course, long story) and each of these courses will require 156 hours from start to finish. (This is based on 10 hours per week, including contact and non-contact time, and roughly 36 hours for revision towards examination or the completion of other projects.) Based on this estimate, and setting an upper bound of 40 hours/week for all of the good research-based reasons that I’ve discussed previously, there is no way that I can just pick up the existing courses and drop them into a year. A three-year program has six semesters, with four courses per semester, which gives an hour burden of 24 × 156 = 3,744 hours. At 40 hours per week, we’d need 93.6 weeks (let’s call that 94), or 1.8 years.

But, hang on, we already have courses that are 6-unit and span two semesters – in fact, we have enormous projects for degree programs like Honours that are worth the equivalent of four courses. Interestingly, rather than having an exam every semester, these have a set of summative and formative assignments embedded to allow the provision of feedback and the demonstration of knowledge and skill acquisition – does this remove the need to have 36 hours for exam study for each semester if we build the assignments correctly?

Let’s assume that it does. Now we have a terminal set of examinations at the end of each year, instead of every semester, which leaves me with 12 courses at 120 hours each and 12 at 156 hours each. Now we’re down to 3,312 hours – which is only 1.6 years. Dang. Still not there. But it’s OK – I can hear those of you who have just asked “Well, why are you so keen on using examinations if you’re happy with summative assignments testing concepts as you go and then building in the expectation of this knowledge in later modules?” Let’s drop the exam requirement even further, to a final set of professional-level assessment criteria carried out at the end of the degree to test high-level concepts and advanced skills. Now, of the 24 courses that a student sits, almost all assessment work has moved into continuous assessment mode, rich in feedback, with summative checkpoints and a final set of examinations as part of the four capstone courses at the end. This gives us 3,024 hours – about 1.45 years.

But this also ignores the fact that the first week of many of these courses is required revision, after some 6–18 weeks of inactivity as the students go away for summer break or home for various holidays. Let’s assume even further that, with the exception of the first four courses they take, we build this continuously, so that skills and knowledge are reinforced as micro-slides scattered throughout the work, supported with recordings, podcasts, notes, guides and quick revision exercises in the assessment framework. Now I can slice maybe 5 hours off 20 of the courses (the last 20) – cutting another 100 hours, to 2,924 hours, and that half a month saved brings us down to 1.4 years.

Of course, I’m ignoring a lot of issues here. I’m ignoring the time it takes someone to digest information but, having raised that, can you tell me exactly how long it takes a student to learn a new concept? This is a trick question, as the answer generally depends upon the question “how are you teaching them?” We know that lectures are one of the worst ways to transfer information, with A/V displays, lectures and listening all having retention rates of less than 40%. If you’re not retaining, your chances of learning something are extremely low. At the same time, somewhere between 30 and 50% of the time that we allocate to the courses we already teach is spent in traditional lectures – at time of writing. We can improve retention (of both knowledge and students) when we use group work (50% and higher for knowledge), get the students to practise (75%) or, even better, have them instruct someone else (up to 90%). If we can restructure the ‘empty’ or ‘low transfer’ times into other activities that foster collaboration or constructive student pedagogy, with a role transfer that allows students to instruct each other, then we can potentially greatly improve our usage of time.

If we use this notion and slice, say, 20 hours from each course, because we can get rid of that many contact hours that we were wasting and get the same, if not better, results, we’re down to 2,444 hours, about 1.18 years. And I haven’t even started looking at the notion of concept alignment, where similar concepts are taught across two different courses and could be put in one place, taught once, consistently, and then built upon for the rest of the program. Suddenly, with the same concepts and a potentially improved educational design, we’re looking the 1-year degree in the face.
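For the spreadsheet-inclined, the whole chain of estimates above can be captured in a few lines of code. Every figure comes from the assumptions already stated in this post; nothing new is being claimed:

```python
# Back-of-envelope workload model for compressing a three-year degree.
# All numbers come from the estimates in this post; 40 h/week is the cap.

HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 52

def years(total_hours):
    """Convert a total hour burden into calendar years at 40 h/week."""
    return total_hours / HOURS_PER_WEEK / WEEKS_PER_YEAR

# Step 0: 24 courses x 156 hours (120 h teaching + 36 h exam study)
baseline = 24 * 156                           # 3,744 h -> ~1.8 years
# Step 1: yearly exams only -- half the courses drop the 36 h exam block
yearly_exams = 12 * 120 + 12 * 156            # 3,312 h -> ~1.6 years
# Step 2: exams survive only in the four capstone courses
capstone_only = 24 * 120 + 4 * 36             # 3,024 h -> ~1.45 years
# Step 3: continuous revision removes ~5 h from the last 20 courses
no_revision_week = capstone_only - 20 * 5     # 2,924 h -> ~1.4 years
# Step 4: active learning reclaims ~20 wasted contact hours per course
active_learning = no_revision_week - 24 * 20  # 2,444 h -> ~1.18 years

for label, h in [("baseline", baseline), ("yearly exams", yearly_exams),
                 ("capstone exams", capstone_only),
                 ("continuous revision", no_revision_week),
                 ("active learning", active_learning)]:
    print(f"{label:20s} {h:5d} h  {years(h):.2f} years")
```

Each step is a separate named quantity so the effect of backing out any single assumption is immediately visible.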

Now, there will be people who will say “Well, how does the student mature in this time? That’s only one year!” to which my response is “Well, how are you training them for maturity? Where are the development exercises? The formative assessment based on careful scaffolding of social development and intellectual advancement?” If the appeal of the three-year degree is that people will be 19–20 when they graduate, and this is seen as a good thing, then we solve this problem for the 1-year degree by waiting two years before they start!

Having said all of this, and believing that a high quality 1-year degree is possible, let me conclude by saying that I think that it is a terrible idea! University is more than a sequence of assessments and examinations, it is a culture, a place for intellectual exploration and the formation of bonds with like-minded friends. It is not a cram school to turn out a slightly shell-shocked engineer who has worked solidly, and without respite, for 52 weeks. However, my aim was never actually to run a course in a year, it was to see how I could restructure a course to be able to more easily modularise it, to break me out of the mental tyranny of a three- or four-year mandate and to focus on learning outcomes, educational design and sound pedagogy. The reason that I am working on this is so that I can produce a sound course structure with which students can engage, regardless of whether they are full-time or not, clearly outlining dependency and requirements. Yes, if we break this up into part-time, we need to add revision modules back in – but if we teach it intensively (or on-line) then those aren’t required. This is a way to give students choice and the freedom to come in at any age, with whatever time they have, but without sacrificing the quality of the underlying program. This is a bootstrap program for a developing nation, a quick entry point for people who had to go to work – this is making up for decades of declining enrolments in key areas.

This is going on a war footing against the forces of ignorance.

There are many successful “Open” universities that use similar approaches but I wanted to go through the exercise myself, to allow me the greatest level of intellectual freedom while looking at our curriculum review. Now, I feel that I can focus on Knowledge Areas for my specifics and on the program as a whole, freed of the binding assumption that there is an inevitable three-year grind ahead for any student. Perhaps one of the greatest benefits for me is the thought that, for students who can come to us for three years, I can put much, much more into the course if they have the time – and these things of interest, of beauty, of intellectual pursuit can replace some of what we’ve lost over the last two decades of change in the University.


Brief but good news

A happy surprise in my mailbox today, but first the background. We’ve been teaching Puzzle Based Learning at Adelaide for several years now, based on Professor Zbigniew Michalewicz’s concept for a course that encouraged problem solving in a domain-free environment. (You can read more details about it by searching for Puzzle Based Learning with the surnames Falkner, Michalewicz and Sooriamurthi – we’ve had work published on this in IEEE Computer and as a workshop at SIGCSE, among several others.) Zbyszek (Adelaide), Raja (Sooriamurthi, a Teaching Professor at CMU) and I teamed up with Professor Ed Meyer (Physics at Baldwin-Wallace) to put together a textbook proposal to help people teach this information.

Great news – our proposal has been accepted by an excellent publishing house who appear to be genuinely excited about the book! As this is my first book, I’m very excited and pleased – but it’s a great reflection on the strength of the team and our composite skills and background, especially with the inter-disciplinary aspects. I’ve seen a lot of exciting work come out of Baldwin-Wallace and, while this is my first time working with Ed, I’m really looking forward to it. (Zbyszek, Raja and I have worked together a lot but I’m still excited to be working with them again!)

Good news after a rather difficult week.


Reflecting on rewards – is Time Banking a reward or a technique?

The Reward As It Is Often Implemented
(In Advance and Starting With a B)

Enough advocacy for a while, time to think about research again! Given that I’ve just finished Alfie Kohn’s Punished by Rewards, and more on that later, I’ve been looking very carefully at everything I do with students to work out exactly what I am trying to do. One of Kohn’s theses is that we tend to manipulate people towards compliance through extrinsic tools such as incentives and rewards, rather than provide an environment in which their intrinsic motivational aspects dominate and they are driven to work through their own interest and requirements. Under Kohn’s approach, a gold star for sitting quietly achieves little except to say that sitting quietly must be so bad that you need to be bribed to do it, while developing in the student a taste for gold stars. If someone isn’t sitting quietly, is it because they haven’t managed sitting quietly (the least rewarding unlockable achievement in any game) or because they are disengaged, bored or failing to understand why they are there? Is it, worse, because they are trying to ask questions about work that they don’t understand, or because they are so keen to discuss it that they want to talk? Kohn wants to know WHY people are or aren’t doing things, rather than just stopping or starting behaviour through threats and bribery.

Where, in this context, does time banking fit? For those who haven’t read me on this before, time banking is described in a few posts I’ve made, with this as one of the better ones to read. In summary, students who hand up work early (and meet a defined standard) get hours in the bank that they can spend at a later date to give themselves a deadline extension – and there are a lot of tuneable parameters around this, but that’s the core. I already have a lot of data that verifies that roughly a third of students hand in on the last day and 15-18% hand up late. However, the 14th highest hand-in hour is the one immediately after the deadline. There’s an obvious problem where people aren’t giving themselves enough time to do the work but “near-missing” by one hour is a really silly way to lose marks. (We won’t talk about the pedagogical legitimacy of reducing marks for late work at the moment, that’s a related post I hope to write soon. Let’s assume that our learning design requires that work be submitted at a certain time to reinforce knowledge and call that the deadline – the loss, as either marks or knowledge reinforcement, is something that we want to avoid.)

But, by providing a “reward” for handing up early, am I trying to bribe my students into behaviour that I want to see? I think that the answer is “no”, for reasons that I’ll go into.

Firstly, the fundamental concept of time banking is that students have a reason to look at their assignment submission timetable as a whole and hand something up early, because they can then gain more flexibility later on. Under current schemes, unless you provide bonus points, there is no reason for anyone to hand up more than one second early – assuming synchronised clocks. (I object to bonus points for early hand-in for two reasons: it is effectively a means to reward the able or those with more spare time, and it starts to focus people on handing up early rather than on the assignment itself.) This, in turn, appears to lead to a passive, last-minute thinking pattern, and we can see the results of that in our collected assignment data – lots and lots of near-miss late hand-ins. Our motivation is to focus the students on the knowledge in the course by making them engage with the course as a whole and empowering them to manage their time rather than simply adhering to our deadlines. We’re not trying to control the students, we’re trying to move them towards self-regulation, where they control themselves.

Secondly, the same amount of work has to be done. There is no ‘reduced workload’ for handing in early, there is only ‘increased flexibility’. Nobody gets anything extra under this scheme that will reinforce any messages of work as something to be avoided. The only way to get time in the bank is to do the assignments – it is completely linked to the achievement that is the core of the course, rather than taking focus elsewhere.

Thirdly, a student can choose not to use it. Under almost every version of the scheme I’ve sketched out, every student gets 6 hours up their sleeve at the start of semester. If they want to just burn that for six late hand-ins that are under an hour late, I can live with that. It will also be very telling if they then turn out to be two hours late because, thinking about it, that’s a very interesting mental model that they’ve created.
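To make the mechanics concrete, here is a minimal sketch of the scheme as I’ve described it. The earn rate, the cap and the method names are illustrative placeholders, not the tuned parameters we would actually deploy:

```python
# A minimal, illustrative model of time banking: students earn banked
# hours by submitting early (to an acceptable standard) and spend them
# later as penalty-free deadline extensions. Parameters are examples only.

class TimeBank:
    def __init__(self, starting_hours=6, earn_rate=0.5, cap=24):
        self.balance = starting_hours   # every student starts with credit
        self.earn_rate = earn_rate      # banked hours per hour early
        self.cap = cap                  # upper limit on the bank

    def submit(self, hours_before_deadline, meets_standard=True):
        """Record a submission: early, acceptable work earns credit;
        late work draws down the bank before any penalty applies."""
        if hours_before_deadline >= 0:
            if meets_standard:
                earned = hours_before_deadline * self.earn_rate
                self.balance = min(self.cap, self.balance + earned)
            return "on time"
        hours_late = -hours_before_deadline
        if hours_late <= self.balance:
            self.balance -= hours_late
            return "extension used"
        return "late"                   # bank exhausted: normal penalty

bank = TimeBank()
print(bank.submit(10))   # 10 h early: banks 5 h, balance rises to 11
print(bank.submit(-1))   # 1 h late: covered by the bank, balance 10
print(bank.balance)
```

Note that the only way to increase the balance is to complete assignments early and to standard, which is the “completely linked to the achievement” property described above.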

But how is it going to affect the student? That’s a really good question. I think that, the way that it’s constructed, it provides a framework for students to work with, one that ties in with intrinsic motivation, rather than a framework that is imposed on students – in fact, moving away from the rigidly fixed deadlines that (from our data) don’t even appear to be training people that well anyway is a reduction in manipulative external control.

Will it work? Oh, now, there’s a question. After about a year of thought and discussion, we’re writing it all up at the moment for peer review on the foundations, the literature comparison, the existing evidence and the future plans. I’ll be very interested to see both the final paper and the first responses from our colleagues!


To Leave or Not To Leave (Academia)

There’s a post that’s been making the rounds from a University of New Mexico academic who is leaving to go to Google. Mark has blogged on it, and linked to a more positive post that reinforces why you would stay in the job, but my reaction to the original post is that far too many solid points are being scored and, while it’s not gloom for the whole sector yet, there are large storm clouds hanging heavily over our heads.

I think that we’ve made some crucial mistakes that, on reflection, we need to address if we want to stop people leaving. Be in no doubt: when the storms come, yes, the casual workforce takes it in the neck, but a lot of other people jump as well. They go somewhere else that supports them, inspires them, challenges them and does not make them wonder why they’re doing the job. It takes 10–20 years to produce a “useful” academic. Get the University climate wrong and they will pick up and leave. Will that work for everyone? No. It will work for your passionate, knowledgeable, personable, approachable and amazing staff, who will easily find work elsewhere.

Which, of course, leaves your schools and departments gutted of the firebrands, the doers, the visionaries and those who can inspire and lead the rest of us to the same level. I believe that we can all lift to the level of these great people – if we can remain in contact with them. Take them away and we stagnate. We all know, deep down, that bad cultures come from uninspired people, and uninspired people are uninspiring. Gut a school enough and you will have a terrible time of rebuilding it. But what happened?

I think that we made three terrible mistakes.

  1. We let people cut our funding and we all just worked harder.
    If you can cut the amount that you pay the worker, while keeping the same productivity level, why on earth would you pay them any more? You separate the worth of the activity or the person from the value that they produce and then you try to maximise your profits. Why do people keep cutting University and school funding? Because we just step up and work harder because we are committed to our jobs.
    What is worse, we not only work harder at our real jobs, we do all of the extra stuff as well.
  2. We did all of the admin on top of our real jobs, which include mentoring, guidance, teaching, learning, research, and so on.
    This is the crazy thing – not only are we all working harder meeting imposed metrics and standards, we’re also filling out countless forms, sitting around in meetings arguing about paperclip purchase optimisation (or similar) or sitting through yearly regurgitations of what we’ve done, delivered by other academics who can’t manage, and we do it almost as hard as we do the things that we get paid to do as academics.
  3. We didn’t sit down and weigh up the future cost of steps 1 and 2.
    And here’s the killer. Because we’re doing 1 and 2, and because the sky hasn’t fallen and education is still happening, administrators and funding bodies would be crazy to not try and push this further in order to see if they can get even more savings out and still maintain the same levels. This is fundamental business practice – pay the least that you have to for your supplies, charge the most that you can for your product.
    Ultimately, this will kill us. We have gone from comfortable to lean and mean – now we’re heading towards starvation. Rather than worrying about this, we stand and admire ourselves in the mirror like mentally ill thirteen-year-olds, congratulating ourselves on how good we look when we are starting to lose important function – irreversibly. The fat, such as it was (and I think that has been overplayed for political reasons), is gone. Now we’re cutting muscle and organs.
    Governments talked about tight times, funding bodies talked about financial crises, business found cheaper overseas workers, off-shoring meant that local investment started to dry up – we listened, we nodded, we said “Ok, we’ll keep going” and we sent completely the wrong message.
    Universities take 10–20 years to train academics, but a drop in the educated populace takes about the same time to really be felt in the workforce. This is well beyond the average lifespan of an elected official and it’s not as direct as the “in your face” nature of a tax increase. But this is our fault, to an extent, because we know that this is a problem and, as a group, we took it.

I had an argument with someone the other day about the role of academics and they were, I think, angry with me because I placed pedagogy and learning quality as a higher priority than convenience of access for the students. Of course, I want everyone to have access to Uni but, if what we are teaching is not of sufficient quality, then there is no point coming! As a teaching academic, this should be my job. Social equity, access to University, increasing mobility and improving the school systems? That’s the government’s job, the government’s purse, working in association with the schools and universities – I welcome it! I support it! But I have neither the funds, the influence nor the training to actually do this. Yet, because of shortfalls elsewhere, as our funding is cut, as the casual workforce grows, as we all work harder, more and more of the things that are not core fall on me and my colleagues.

This is a fantastic job. This is an important job. Universities, in whatever form, are vital to the future and development of our species – when they are run properly and to a high standard. I do not think that all is lost, but I am rapidly reaching a point where I think that we have to stop taking it, look at those crucial three mistakes and say “No more.” Funding bodies, administrators and, on occasion, we ourselves are devaluing ourselves through our professionalism, our dedication and our politeness. Yes, we need to be pragmatic but we have worth, we do a good job and we are part of an essential role: education must be maintained.

My priority is to my students and my colleagues, and to the future. I think that it’s time for some serious re-thinking.


A Flurry of Inauthenticity

I’ve received numerous poorly personalised e-mails recently – today’s was from a company that published some of my work in a book and was addressed “Dear *TITLE:FNAME*”, which made me feel part of the family, I can tell you! Of course, this is low-hanging fruit, because we’re all aware of how mail-outs actually work. No-one has the mind-numbing and unnecessarily manual task of sitting down and actually writing these things anymore. They, very sensibly, use a computer to take a repetitive task and automate it. This would be fine, and I have no problem with it, except where we attempt to mimic genuine concern.

One of the big changes I’ve noticed recently on Qantas, the airline that I do most of my flying with, is that they have noticed that my tickets say “Dr Falkner” and not “Mr Falkner”. For years, they would greet me at the entrance to the plane, look at my ticket and then promptly demonstrate the emptiness of the personal greeting by getting the title wrong. (The title, incidentally, is not a big deal. 99% of my students and colleagues call me ‘Nick’, the remainder resorting to “Dr Nick”. The issue here is that they are attempting to conduct an activity, a greeting, that is immediately revealed as meaningless.) Over the past two years, however, suddenly everyone is reading the whole ticket and, while it is still an activity akin to saying “Hello, Human”, this is a much more reasonable facsimile of a personalised greeting. I note that they did distinguish themselves recently by greeting me as Dr Falkner and my wife, the original Dr Falkner, as (Miss or Mrs, I don’t quite recall) Falkner.

But my mailbox is full of these near misses. Letters from students addressed to Dr Rick Falkner. Many people who write to Professor Falkner, which I get because they’re trying not to offend me but it just goes to show that they haven’t really bothered to look me up. These are all cold calls – surface and shallow, from people who not only don’t know me but, I suspect, they don’t really want to get to know me – they’re just after the “Doctor” part of Nick Falkner. Much as Qantas looked at my Frequent Flyer status and changed their tone based on whether I was Silver or Gold (when you’re Gold, cabin crew come down to have a personal chat with you occasionally, especially if you’re part of a Gold couple flying together. I’m scared to ask what Platinum get), where my name and title were a convenient afterthought, most people who write to Professor Nick Falkner are after that facet of me which is useful to them. This is implicitly manipulative and thoroughly inauthentic.

This is, of course, why I try very hard not to do it with students. I try to be genuinely concerned with the person, rather than their abilities. There are students I’ve known for years and, were I to walk up to them at a social gathering and be unable to recall anything about them other than their marks, they would have a right to feel exploited and ignored – a small cog in my glorious rise to an average career in Academia. This is, of course, not all that easy, especially when you have my memory, but the effort is exceedingly important and a good attempt is often as valuable as a good memory – and a good memory generally comes from caring about something and paying attention. We ask this of our students when we present them with educational experiences. We say “This is important, so please pay attention and you’ll develop useful knowledge”, so we’re very open about how we expect people to deal with important things. It is, therefore, much more insulting if we make it obvious that we remember nobody from our classes, or nothing of their lives, or don’t realise the impact that we have from our privileged position at the centre of the web of knowledge. (Yeah, I think I just called us all spiders. Sorry about that. We’re cool spiders, if that helps.)

There are enough pieces of inauthentic e-mail, flyers, TV ads and day-to-day interactions that already bother us, without adding to the inauthenticity in our relationships with our students and our colleagues. Is it easy? No. Is it worthwhile? Yes. Is it what our students should expect of us to at least attempt? I think, yes, but I’d be interested to know what other people think about this – am I setting the bar too high for us or is this just part of our world?


Grand Challenges and the New Car Smell

It has been a crazy week so far. In between launching the new course and attending a number of important presentations, our Executive Dean, Professor Peter Dowd, is leaving the role after 8 years and we’re all getting ready for the handover. At time of writing, I’m sitting in an airport lounge in Adelaide Airport waiting for my flight to Melbourne to go and talk about the Learning and Teaching Academy of which I’m a Fellow so, given that my post queue is empty and that I want to keep up my daily posting routine, today’s post may be a little rushed. (As one of my PhD students pointed out, the typos are creeping in anyway, so this shouldn’t be too much of a change. Thanks, T. 🙂 )

The new course that I’ve been talking about, which has a fairly wide scope with high performing students, has occupied five hours this week and it has been both very exciting and a little daunting. The student range is far wider than usual: two end-of-degree students, three start-of-degree students, one second year and one internal exchange student from the University of Denver. As you can guess, in terms of learning design, this requires me to have a far more flexible structure than usual and I go into each activity with the expectation that I’m going to have to be very light on my feet.

I’ve been very pleased by two things in the initial assessment: firstly, that the students have been extremely willing to engage with the course and to work with me and each other to build knowledge, and secondly, that I have the feeling that there is no real ‘top end’ for this kind of program. Usually, when I design something, I have to take into account our general grading policies (which I strongly agree with) that are not based on curve grading and require us to provide sufficient assessment opportunities and types to give students the capability to clearly demonstrate their ability. However, part of my role is pastoral, so that range of opportunities has to be carefully set so that a Pass corresponds to ‘acceptable’ and I don’t set the bar so high that people pursuing a High Distinction (A+) destroy their prospects in other courses or burn out.

I’ve stressed the issues of identity and community in setting up this course, even accidentally referring to the discipline as Community Science in one of my intro slides, and the engagement level of the students gives me the confidence that, as a group, they will be able to develop each other’s knowledge and boost one another – on top of everything and anything that I can provide. This means that the ‘top’ level of achievement is probably going to be much higher than before, or at least I hope so. I’ve identified one of my roles as “telling them when they’ve done enough”, much as I would for an Honours or graduate student, to allow me to maintain that pastoral role and to stop them from going too far down the rabbit hole.

Yesterday, I introduced them to R (statistical analysis and graphical visualisation) and Processing (a rapid development and very visual programming language) as examples of tools that might be useful for their projects. In fairly short order, they were pushing the boundaries, trying new things and, from what I could see, enjoying themselves as they got into the idea that this was exploration rather than a prescribed tool set. I talked about the time burden of re-doing analysis and why tools that forced you to use the Graphical User Interface (clicking with the mouse to move around and change text) such as Excel had really long re-analysis pathways because you had to reapply a set of mechanical changes that you couldn’t (easily) automate. Both of the tools that I showed them could be set up so that you could update your data and then re-run your analysis, do it again, change something, re-run it, add a new graph, re-run it – and it could all be done very easily without having to re-paste Column C into section D4 and then right clicking to set the format or some such nonsense.
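The contrast I was drawing can be shown in a few lines. This is a hypothetical sketch in Python (the column name and the data are invented), but the same pattern applies in R: the entire analysis lives in a script, so changing the data and re-running rebuilds everything, with no menu-clicking:

```python
import csv
import io
import statistics

# In practice this would be open("some_file.csv"); inline data keeps the
# sketch self-contained. The "score" column name is hypothetical.
RAW = "student,score\nA,72\nB,85\nC,64\n"

def load(source):
    """Read the scores out of a CSV source."""
    return [float(row["score"]) for row in csv.DictReader(source)]

def analyse(scores):
    """Every summary statistic we care about, rebuilt in one call."""
    return {"n": len(scores),
            "mean": statistics.mean(scores),
            "stdev": statistics.pstdev(scores)}

scores = load(io.StringIO(RAW))   # point this at a new file and re-run
print(analyse(scores))            # the whole analysis updates itself
```

The point is not the statistics, which are trivial here, but that the re-analysis pathway is one command long instead of a sequence of mechanical clicks.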

It’s too soon to tell what the students think because there is a very “new car smell” about this course and we always have the infamous, if contested, Hawthorne Effect, where being obviously observed as part of a study tends to improve performance. Of course, in this case, the students aren’t part of an experiment but, given the focus, the preparation and the new nature – we’re in the same territory. (I have, of course, told the students about the Hawthorne Effect in loose terms because the scope of the course is on solving important and difficult problems, not on knee-jerk reactions to changing the colour of the chair cushions. All of the behaviourists in the audience can now shake their heads, slowly.)

Early indications are positive. On Monday I presented an introductory lecture laying everything out and then we had a discussion about the course. I assigned some reading (it looked like 24 pages but was closer to 12) and asked students to come in with a paragraph of notes describing what a Grand Challenge was, in their own words, as well as some examples. The next day, less than 24 hours after the lecture, everyone showed up and, when asked to write their description up on the whiteboard, all got up and wrote it down – from their notes. Then they exchanged ideas, developed their answers and I took pictures of the boards to put up on our forum. Tomorrow, I’ll throw these up and ask the students to keep refining them, tracking the development of their understanding as they work out what they consider to be the area of grand challenges and, I hope, the area that they will start to consider “their” area – the one that they want to solve.

If even one more person devotes themselves to solving an important problem then I’ll be very happy, but I’ll be even happier if most of them do, and then go on to teach other people how to do it. Scale is the killer, so we need as many dedicated, trained, enthusiastic and clever people as we can get – let’s see what we can do about that.


Environmental Impact: Iz Tweetz changing ur txt?

Please, please forgive me for the diabolical title but I have been wondering about the effects of saturation in different communication environments and Twitter seemed like an interesting place to start. For those who don’t know about Twitter, it’s an online micro-blogging social media service. Connect to it via your computer or phone and you can put in a message of up to 140 characters, where each message is called a tweet. What makes Twitter interesting is the use of hashtags and usernames to allow these messages to be grouped by theme (#firstworldproblems, if you’re complaining about the service in Business Class, for example) or directed at someone (@katyperry – Russell Brand, SRSLY?). Twitter has very significant penetration in the celebrity market and there are often “professional” tweeters for certain organisations.

There is a lot more to say about Twitter but what I want to focus on is the maximum number of characters available – 140. This limit was set for compatibility with SMS messages and, unsurprisingly, a lot of abbreviations used in Twitter have come in from the SMS community. I have been restricting myself to ~1,000 words in recent posts (+/-10%, if I’m being honest) and, with an average English word length of approximately 5 characters, rising to 6 once spaces and punctuation are added, you’d expect my posts to be somewhere in the region of 6,000 characters. Anyone who’s been reading this for a while will know that I love long words and technical terms so there’s a possibility that it’s up beyond this. So one of my posts, split into maximum-length tweets, would take up about 43 tweets. How long would that take the average Twitterer?

Here’s an interesting site that lists some statistics, from 2009 – things will have changed but it’s a pretty thorough snapshot. Firstly, the more followers you have, the more you tweet (cause and effect not stated!) but, even then, 85% of users update less than once per day, with only 1% updating more than 10 times per day. With the vast majority of users having less than 100 followers (people who are subscribed to read all of your tweets), this makes two tweets per day the dominant activity. But that was back in 2009 and Twitter has grown considerably since then. This article updates things a little, but not in the same depth, and gives us two interesting facts. Firstly, that Twitter has grown amazingly since 2009. Secondly, that event reporting now takes place on Twitter – it has become a news and event dissemination point. This is happening to the extent that news of an earthquake can spread outwards on Twitter in the same or slightly less time than the earthquake’s own shockwave. This has become a bit of a joke, where people will tweet about what is happening to them rather than react to the event.

From Twitter’s own blog, March, 2011, we can also see this amazing growth – more people are using Twitter and more messages are being sent. I found another site listing some interesting statistics for Twitter: 225,000,000 users, most tweets are 40 characters long, 40% of users don’t tweet but just read, and the average user still has around 100 followers (115 actually). If the previous behaviour patterns hold, we are still seeing an average of two tweets for the majority user who actually posts. But a very large number of people are actually reading Twitter far more than they ever post.

To summarise, millions of people around the world are exposed to hundreds of messages that are 40 characters long and this may be one of their leading sources of information and exposure to text throughout the day. To put this in context, it would take 150 tweets to convey one of my average posts at the 40-character length and this is a completely different way of reading information because, assuming that the ‘average’ sentence is about 15-20 words, very few of these tweets are going to be ‘full’ sentences. Context is, of course, essential and a stream of short messages, even below sentence length, can be completely comprehensible. Perhaps even sentence fragments? Or three words. Two words? One? (With apologies to Hofstadter!) So there’s little mileage in arguing that tweeting is going to change our semantic framework, although a large amount of what moves through any form of blogging, micro or other, is always going to have its worth judged by external agents who don’t take part in that particular activity and find it wanting. (I blog, you type, he/she babbles.)
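Since the same back-of-the-envelope arithmetic appears a couple of times above, here it is as a minimal sketch in Python. The figures are my own estimates from the text, and `tweets_needed` is just an illustrative helper, nothing Twitter actually provides:

```python
# All figures are this post's rough estimates, not measurements.
AVG_WORD_LEN = 5          # average English word length, in characters
SEPARATOR_OVERHEAD = 1    # spaces/punctuation push the per-word cost to ~6
WORDS_PER_POST = 1000     # my self-imposed per-post word limit

# Estimated characters in one of my posts: 1,000 words * 6 chars = 6,000.
chars_per_post = WORDS_PER_POST * (AVG_WORD_LEN + SEPARATOR_OVERHEAD)

def tweets_needed(total_chars, chars_per_tweet):
    """Tweets required to carry total_chars, rounding up (ceiling division)."""
    return -(-total_chars // chars_per_tweet)

full_tweets = tweets_needed(chars_per_post, 140)     # maximum-length tweets
typical_tweets = tweets_needed(chars_per_post, 40)   # typical 40-char tweets

print(full_tweets, typical_tweets)  # prints "43 150"
```

The ceiling division matters because a post that spills even one character into a new tweet still costs a whole extra tweet.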

But is this shortening of phrase, and our immersion in a shorter sentence structure, actually having an impact on the way that we write or read? Basically, it’s very hard to tell because this is such a recent phenomenon. Early social media sites, including the BBSs and the multi-user shared environments, did not value brevity as much as they valued contribution and, to a large extent, demonstration of knowledge. There was no mobile phone interaction or SMS link, so the text limit of Twitter wasn’t required. LiveJournal was, if anything, the antithesis of brevity as the journalling activity was rarely that brief and, sometimes, incredibly long. Facebook enforces some limits but provides notes so that longer messages can be formed but, of course, the longer the message, the longer the time it takes to write.

Twitter is an encourager of immediacy, of thought into broadcast, but this particular messaging mode, the ability to globally yell “I like ice cream and I’m eating ice cream” as one is eating ice cream is so new that any impact on overall language usage is going to be hard to pin down. As it happens, it does appear that our sentences are getting shorter and that we are simplifying the language but, as this poster notes, the length of the sentence has shrunk over time but the average word length has only slightly shortened, and all of this was happening well before Twitter and SMS came along. If anything, perhaps this indicates that the popularity of SMS and Twitter reflects the direction of language, rather than that language is adapting to SMS and Twitter. (Based on the trend, the Presidential address of 2300 is going to be something along the lines of “I am good. The country is good. Thank you.”)

I haven’t had the time that I wanted to go through this in detail, and I certainly welcome more up-to-date links and corrections, but I much prefer the idea that our technologies are chosen and succeed based on our existing drives and tastes, rather than the assumption that our technologies are ‘dumbing us down’ or ‘reducing our language use’ and, in effect, driving us. I guess you may say I’m a dreamer.

(But I’m not the only one!)


The Early-Career Teacher

Recently, I mentioned the Australian Research Council (ARC) grant scheme, which recognises that people who have had their PhDs for less than five years are regarded as early-career researchers (ECRs). ECRs now have a separate grant scheme (they used to be handled differently within the general application process), which recognises that their track records – the number of publications and activity relative to opportunity – are going to be slimmer than those of more seasoned individuals.

What is interesting about this is that someone who has just finished their PhD will have spent at least three years, more likely four, doing research – and, we hope, competent research under guidance for the last two of those years. So, having spent those years doing research, we then accept that it can take up to five years for people to be recognised as operating at the same level.

But, for the most part, there is no corresponding recognition of the early-career teacher, which is puzzling given that there is no requirement to meet any teaching standards or take part in any teaching activities at all before you are put out in front of a class. You do no teaching (or are not required to do any) during your PhD in Australia, yet we offer support and recognition of early status for the task that you HAVE been doing – and have no way to recognise the need to build up your teaching.

We discussed ideas along these lines at a high-level meeting that I attended this morning, and I brought up the early-career teacher (and a mentoring program to support it) because someone had raised a similar idea for researchers. Mentoring is very important – it was one of the big HERDSA messages and almost everywhere I go stresses this – so it’s no surprise that it’s proposed as a means to improve research. But, given the realities of the modern Australian university, where more of our budget comes from teaching than research, it is indicative of the inherent focus on research that I needed to propose teaching-specific mentoring in reaction to research-specific mentoring, rather than vice versa.

However, there are successful general mentoring schemes where senior staff are paired with more junior staff to give them help with everything that they need and I quite like this because it stresses the nexus of teaching and research, which is supposed to be one of our focuses, and it also reduces the possibility of confusion and contradiction. But let’s return to the teaching focus.

The impact of an early-career teacher program would be quite interesting because, much as you might not encourage a very raw PhD to leap in with a grant application before there was enough supporting track record, you might have to restrict the teaching activities of ECTs until they had demonstrated their ability, taken certain courses or passed some form of peer assessment. That, in any form, is quite confronting and not what most people expect when they take up a junior lectureship. It is, however, a practical way to ensure that we stress the value of teaching by placing basic requirements on the ability to demonstrate skill within that area! In some areas, as well as practical skill, we need to develop scholarship in learning and teaching – can we do this in the first years of the ECT with a course of educational psychology, discipline-specific educational techniques and practica to ensure that our lecturers have the fundamental theoretical basis that we would expect from a school teacher?

Are we dancing around the point? Should we, extending the heresy, require something much closer to the Diploma of Education to certify academics as teachers, merging the ECR and the ECT to give us an Early Career Academic (ECA): someone who spends their first three years being mentored in both research and teaching, perhaps even ending up with (some sort of) teaching qualification at the end? (With the increasing focus on quality frameworks and external assessment, I keep waiting for one of our regulatory bodies to slip in a ‘must have a Dip Ed/Cert Ed or equivalent’ clause sometime in the next decade.)

To say that this would require a major restructure in our expectations would be a major understatement, so I suspect that this is a move too far. But I don’t think it’s too much to put limits on the ways that we expose our new staff to difficult or challenging teaching situations, when they have little training and less experience. This would have an impact on a lot of teaching techniques and accepted practices across the world. We don’t make heavy use of Teaching Assistants (TAs) at my Uni but, if we did, a requirement to reduce their load and exposure would immediately push more load back onto someone else. At a time when salary budgets are tight and people are already heavily loaded, this is just not an acceptable solution – so let’s look at this another way.

The way that we can at least start this, without breaking the bank, is to emphasise the importance of teaching and take it as seriously as we take our research: supporting and developing scholarship, providing mentoring and extending that mentoring until we’re sure that the new educators are adapting to their role. These mentors can then give feedback, in conjunction with the staff members, as to what the new staff are ready to take on. Of course, this requires us to carefully determine who should be mentored, and who should be the mentor, and that is a political minefield as it may not be your most senior staff that you want training your teachers.

I am a fairly simple man in many ways. I have a belief that the educational role that we play is not just staff-to-student, but staff-to-staff and student-to-student. Educating our new staff in the ways of education is something that we have to do, as part of our job. There is also a requirement for equal recognition and support across our two core roles: learning and teaching, and research. I’m seeing a lot of positive signs in this direction so I’m taking some heart that there are good things on the nearish horizon. Certainly, today’s meeting met my suggestions, which I don’t think were as novel as I had hoped they would be, with nobody’s skull popping out of their mouth. I take that as a positive sign.



The Heart of Darkness

My friend, fellow educator and cousin, Liz, commented on yesterday’s post where I (basically) asked why we waste educational opportunities by being unpleasant or bullying. Here’s something that she wrote in the comments:

How we respond to young people is vitally important. How a parent or teacher responds is so important to the self-esteem of a child/student. There is rarely a call for being brutally blunt or thoughtlessly cruel. But bashing is in style. It’s been in style a long time, long enough for an entire generation to think it is the norm.

The emphasis of that phrase “But bashing is in style” is mine because I couldn’t agree with it more. You can see it where we knock people down for being good in ways that we suspect we may never attain, while feting people who are wealthy, because somehow we can see ourselves being millionaires. Steinbeck, unsurprisingly, said it best and we paraphrase his longer thoughts on this as:

“Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.” as given in A Short History of Progress (2005) by Ronald Wright.

So there’s surprisingly little bashing of the “haves that we might attain if we are really lucky or play the game in the right way”, but there is a great deal of bashing of visionaries, dreamers, risk-takers, experimenters, those who challenge the status quo and those who dare to dabble within a field in which we consider ourselves expert. I think that list of ‘types’ pretty much describes every single good student I’ve ever had so it’s not that surprising that a large number of the experiences that these students have are negative.

This has not always been the case – the forward thinker, the intellectual, the artistic manifesto-maker have all been highly prized before but, somehow, this seems to have faded away. (I know that every generation complains about this but, with our media saturation and our near-instantaneous communication, I think that the impact of negative feedback and bashing has a far wider reach, as well as being less focused on debate and more on cruelty, destruction and brutality.)

Let me give you an example. I am an artist, across a few different outlets but mainly writing and design, and I am creating a manifesto to describe my intentions in the artistic space, my motives in doing so, and my views on the fusion between creativity and the more rigid aspects of my discipline. The reaction to this, if I tell people, is predominantly negative. Firstly, due to a certain famous manifesto, most people assume that I am making some sort of revolutionary political statement. (The book “100 Artistic Manifestos” is an excellent reference to get a different view on this.) Secondly, most people assume that I am somehow incapable of doing this – I suspect it’s because they believe that I am my job, or that Computer Scientists can’t be creative. The general reaction is one of “knocking”, a gentle form of dismissive undermining common in Australia, but this is just a polite version of bashing. People don’t believe I can do this and have no problem expressing this in a variety of ways. Fortunately, I’ve reached the point in my career and my art that the need to write a manifesto is based on a desire to explain and to share, so people not understanding why I would do it just tells me that I need to do it. (Of course, calling yourself an artist is a hard one, as well. Am I published? No. Do I have any works on display? No. Do I make my living from it? No. Am I driven to create art? Yes. By my definition, I’m an artist. If I ever sell two paintings, of any kind, I’ve doubled Van Gogh’s lifetime sales. 🙂 )

This is the environment in which my students are learning and growing – and it’s a dark one. If I have noted nothing else from working with the young, it is that they are amazingly fragile at some points. The moments that you have to work with people, when they feel comfortable enough to be open and honest with you, are surprisingly few and far between – being cruel, taking a cheap shot, not having the time, cutting them down, not listening… it’ll have an effect, alright, and it may even be an effect that stays with that student for life. Going back over your memory of your teachers and lecturers, I bet you can remember every single one that changed your life, whether for good or for ill.

I don’t really want to harden my students, to make them into living armour, because I think that is really going to get in the way of them being people. Yes, I need them to be resilient but that’s a very different thing to rigid or tough. I need them to be able to commit to a particular set of ideas, that they choose, and to be able to withstand reasonable argument and debate, because this is the burden of the critical thinker. But I’m always worried that making them insensitive to criticism risks making them easily manipulable and ignorant of useful sources. It’s far too easy to respond to people you see as bashers with bashing – Richard Dawkins and Christopher Hitchens both spring to mind as people who wield words and ideas as weapons in an (on occasion) unnecessarily cruel, dismissive or self-satisfied way. There is a particular smugness of “basher-bashing” that is as repellent as the original action and this is also not a great way to train people that you wish to be out there, sharing and discussing ideas. If I wanted repellently smug and self-serving prose, I’d read Jeremy Clarkson, who is (at least) occasionally funny.

The obvious rejoinder to this is that “well, we need people on our side who are as tough as the opponents” and, frankly, I don’t buy it. That sounds more like revenge to me, with a side order of schadenfreude. If we don’t act to stop it, then we make an environment in which bashing is tolerated and, if we do that, then the most successful basher will win. I’ll tell you right now that it won’t have to be the person who is smartest, most correct or most well-prepared – it is far more likely to be the person who is willing to be the most cruel, the utterly vindictive and the inescapable persecutor who will win that battle.

So, longwindedly, I completely agree with Liz and want to finish by emphasising the start of her quote: “How we respond to young people is vitally important. How a parent or teacher responds is so important to the self-esteem of a child/student. There is rarely a call for being brutally blunt or thoughtlessly cruel.”

I am convinced that the majority of educators and parents are doing everything that needs to be done to give a good environment, but we also have to look at the world around us and ask how we can make that better.



A Missed Opportunity: Miles Davis and “Little Miles”

(Edit: someone claiming to be “Little Miles” has now commented on this and said that he was fine with it. That’s great, but he also says that he accepted Miles’ comments in the face of him being known for being curt. It’s worth a read and, of course, I was speculating but my comments on the utility of Miles’ comments stand. Miles was known for being like this and I don’t see talent, even talent as great as Miles’, as being an excuse for bad behaviour. Jazz people may feel differently. I’m in the business of education, not torturing students. I would suggest that this is something all exemplars in a field should keep in mind if they want their area to flourish.)

If you click on this linked video (SFW) (YouTube), you’ll see a young trumpet player, who goes by the nickname “Little Miles”, play “On Green Dolphin Street” in front of Miles Davis. Now, it appears that, if I’ve done my detective work correctly, it’s a 1986 interview conducted by Bill Boggs (corrections welcome!).

Now, if you’ve watched that video, you’ve seen three things.

  1. You’ve seen a young trumpet player, who really isn’t that good, do a tolerable version of a song with a couple of mistakes.
  2. You’ve seen Miles Davis sit all the way back in his chair, then, finally, in a dismissive tone offer the advice of “Get some more practice” and “It’s in E Flat, you’re playing it in D Natural”, which is about as close to telling the kid to go back to wherever he came from and take up the tambourine as you can without actually going to the effort of doing so.
  3. You’ve seen a young trumpet player who, more than likely, is not going to keep playing the trumpet for much longer. The host quickly gets him off stage before anything more unpleasant can happen to him.

Now there is a world of wrong-thinking going on here to even let a young boy, who is called (whether he calls it himself or not) “Little Miles”, anywhere within fifty metres of Miles Davis, unless that young trumpet player is so, SO, good that Miles is going to have to accept that it’s not that much of an insult. And, being honest, the kid’s not that good. When you look at Miles Davis’ past, he was playing professionally for 3-4 years and studying at Juilliard before he went out and hunted down his idol, Coltrane. (Edit: my apologies, it was, of course, Charlie Parker. Thank you, Lewis, for noticing this!) When you think about it like that, wandering into a television studio calling yourself “Little Bird” after playing the sax for a few years and, obviously, not being at a standard where you could play professionally – that’s a pretty silly thing to do.

But, of course, Miles’ reaction was pretty toxic. It was unnecessary. The kid wasn’t a threat to anyone and, after playing that way, “Little Miles” was going to fade away, unless he practiced a whole heap more. Taking Miles’ comments at face value, could they have been educational? Ehhhh, not in that tone and with that delay and posture. It was a “Buzz off, kid” if it was anything.

The funny thing is that this is a cascade of bad decision making, which resulted in the worst kind of outcome – no-one actually learned anything.

  1. Whoever was putting the boy up should have either prepared him better or held him back until he was ready. He shouldn’t have been there.
  2. Whoever gave the kid the name, or encouraged him to use it, should really have thought twice about it, if proximity to the real thing was even on the horizon.
  3. Someone let this train-wreck happen in front of Miles Davis.
  4. Someone didn’t get the kid off or go to commercial when it was (blatantly) obvious what was about to happen.
  5. Miles was offensively honest in a way designed to injure.

So, someone had put together a view of jazz trumpet playing and exposed the student to it, so that he thought his version of “On Green Dolphin Street” was good enough that he could stand on a stage, be called “Little Miles”, and expect anything other than what happened. That’s a problem with the teacher, for me.

That name… Oh! That name! The hubris required to call yourself that, unless you are so, so, very good that the comparisons leap to all lips. Somebody didn’t sit back and look at that from enough perspectives to work out that it was sending completely the wrong message.

What could the boy learn from listening to Miles? Practice more and stay on key. Wow. Thanks. It was, as I’ve said, not designed to be educational but hurtful – and of course it had no real educational value. It was a punishment and, like any punishment, it’s designed to make you avoid a behaviour, not train you into a new behaviour. Stay away from the trumpet, Kid.

The boy learned nothing that he couldn’t have known by playing with some echo. He certainly didn’t learn anything from one of the finest horn players in the world. What worries me the most is that, after this all happened, his parents or his teacher came up to him and said something like “Well, what does that Miles Davis know, anyway?”

“What does he know? I named myself (or you named me) after him as a nickname. I’ve been looking forward to this for three months (say). And now you say it’s nothing?”

Little Miles now has two extreme options, as well as the continuum of compromise in the middle. Either he’s crazy enough to believe that Miles Davis was wrong and that he’s going to be the best ever, spending his life pursuing a vindictive dream where any intrinsic motivation is swamped by a burning hatred for Gold Lamé, or he suddenly realises that his teachers and his parents don’t know that much about music – and that everything that they’ve said has been wrong.

I started out talking about education, but I’m coming to finish up talking about joy. Yes, there was a failure to educate, a failure of guardianship, many failures of judgement but there has also been a loss of joy. That young man was happy, mistakes and all, until Miles Davis slammed his angry fist down on him and I can’t really see how his love of trumpet would have survived that, without being at least a little bent and mangled.

It’s really easy to be unpleasantly critical and it’s hard to be constructively critical, especially when people are washed in the warm milk of low expectations, but I really wonder sometimes why more people just don’t try a little harder to do it.