Short and Sweet

Well, it’s official. I’ve started to compromise my ability to work through insufficient rest. Despite reducing my additional workload, chewing through my backlog is keeping me working far too much and, as you can tell from the number and nature of the typos in these posts, it’s affecting me. I am currently reorganising tasks to see what I can continue to fit in without compromising quality, which means this week a lot of e-mail is being sent to sort out my priorities.

This weekend, I’m sitting down to brainstorm the rest of 2012 and work out what has to happen when – nothing is going to sneak up on me (again) this year.

In very good news, we have 18 students coming back for the pilot activity of “Our students, their words” where we ask students who love ICT an important question – “what do you like and why do you think someone else might like it?” We’re brainstorming with the students for all of Friday morning and passing their thoughts (as research) to a graphic designer to get some posters made. This is stage 1. Stage 2, the national campaign, is also moving – slowly but surely. This is why I really need to rest: I’m getting to the point where it’s important that I am at my best and brightest. Sleeping in and relaxing is probably the best thing I can do for the future of ICT! 🙂

Rather than be a hypocrite, I’m switching to ultra-short posts until I’m rested up enough to work properly again.

See you tomorrow!


Putting it all together – discussing curriculum with students

One of the nice things about my new grand challenges course is that the lecture slots are a pre-reading based discussion of the grand challenges in my discipline (Computer Science), based on the National Science Foundation’s Taskforce report. Talking through this with students allows us to identify the strengths of the document and, perhaps more interestingly, some of its shortfalls. For example, there is much discussion on inter-disciplinary and international collaboration as being vital, followed by statements along the lines of “We must regain the ascendancy in the discipline that we invented!” because the NSF is, first and foremost, a US-funded organisation. There’s talk about providing the funds for sustainability and then identifying the NSF as the organisation giving the money, and hence calling the shots.

The areas of challenge are clearly laid out, as are the often conflicting issues surrounding the administration of these kinds of initiative. Too often, we see people talking about some amazing international initiative – only to see it fail because nobody wants to go first, or no country/government wants to put money up that other people can draw on until everyone does it at the same time.

In essence, this is a timing and trust problem. If we may quote Wimpy from the Popeye cartoons:

A picture of Wimpy saying "I will gladly pay you Tuesday for a hamburger today!"

Via theawl.com. Click on the link for a very long discussion of Popeye and Wimpy related issues.

The NSF document lays bare the problem we always have: those who have the hamburgers are happy to talk about sharing the meal but there are bills to be paid. The person who owns the hamburger stand is going to have words with you if you give everything away with nothing to show in return except a promise of payment on Tuesday.

Having covered what the NSF considered important in terms of preparing us for the heavily computerised and computational future, my students finished with a discussion of educational issues and virtual organisations. The educational issues were extremely interesting because, having looked at the NSF Taskforce report, we then looked at the ACM/IEEE 2013 Computer Science Strawman curriculum to see how many areas overlapped with the task force report. Then we looked at the current curriculum of our school, which is undergoing review at the moment but was last updated for the 2008 ACM/IEEE Curriculum.

What was pleasing, across the range of students, was how many of the areas were being addressed throughout our course and how much overlap there was between the highlighted areas of the NSF Report and the Strawman. However, one of the key issues from the task force report was the notion of greater depth and breadth – an incredible challenge in the time-constrained curriculum implementations of the 21st century. Adding a new Knowledge Area (KA) to the Strawman of ‘Platform Dependent Computing’ reflects the rise of the embedded and mobile device yet, as the Strawman authors immediately admit, we start to make it harder and harder to fit everything into one course. Combine this with the NSF requirement for greater breadth, including scientific and mathematical aspects that have traditionally been outside of Computing, and their parallel requirement for the development of depth… and it’s not easy.

The lecture slot where we discussed this had no specific outcomes associated with it – it was a place to discuss the issues arising but also to explain to the students why their curriculum looks the way that it does. Yes, we’d love to bring in Aspect X but where does it fit? My GC students were looking at the Ethics aspects of the Strawman and wondered if we could fit Ethics into its own 3-unit course. (I suspect that’s at least partially my influence although I certainly didn’t suggest anything along these lines.) “That’s fine,” I said, “But what do we lose?”

In my discussions with these students, they’ve identified one of the core reasons that we changed teaching languages, but I’ve also been able to talk to them about how we think as we construct courses. They’ve also started to see the many drivers that we consider, which I believe helps them work out how to give feedback in the form that is most useful for us to turn their needs and wants into improvements or developments in the course. I don’t expect the students to understand the details and practice of pedagogy but, unless I give them a good framework, it’s going to be hard for them to communicate with me in a way that leads most directly to an improved result for both of us.

I’ve really enjoyed this process of discussion and it’s been highly rewarding (again, I hope, for both sides of the group) to be able to discuss things without the usual level of reactive and (often) selfish thinking that characterises these exchanges. I hope this means that we’re on the right track for this course and this program.


Partnership vs Prison Experiment

The Stanford Prison Experiment (official site, Wikipedia, BBC Recreation) is notorious in many ways. For those who haven’t heard of it, in 1971 a randomly selected group of 24 males (out of 70 applicants) was split into two randomly assigned groups: prisoners and guards. They were then placed into a mock prison situation. Despite agreeing to a 7-14 day experimental run, the experiment was terminated after 6 days. By this stage, 1/3 of the guards were showing sadistic tendencies, 2 prisoners had quit, and the abuse that prisoners were suffering included solitary confinement, loss of mattresses, and reduced access to toilets (or enforced primitive access).

Lest you think that the researcher controlling it, Professor Philip Zimbardo, terminated this for altruistic reasons, it was in response to the objections of a graduate student who was observing the experiment – whom he was dating and went on to marry. Of the fifty people who had observed the experiment, and the deteriorating conditions, Zimbardo claims that only this one observer objected.

I assume that some of you are thinking “Surely… someone else said something?” (Yeah, I thought that too, when I first read about it. Apparently not.) It’s worth noting that Zimbardo’s prison started out as a more ‘extreme’ prison than usual, with degrading activities forced onto the prisoners fairly early. You can read about these on the site or you can read about the Abu Ghraib incident, which is strikingly similar.

What is worth noting up front is that this research has never been fully and successfully replicated, for a number of reasons, and the publication standard was low. The Stanford Prison Experiment stands, however, in many ways as a failure to protect the people in an experiment, even if the actions of the agents in the system were not as random (there are claims Zimbardo engineered large sections of this) or as meaningful (the experiment was very poorly constructed) as it may appear.

The random selection of the participants, 24 out of an original 70 and randomised roles, seems to indicate a situational attribution of behaviour, rather than one that we are born with. Put someone in a position where they have power over somebody else, put enough rules in the way – *bang* you’re potentially recreating the Stanford Prison Experiment. (Paging Dr Milgram… a topic for a later post.)

Ultimately, demanding compliance can place us in difficult positions where we require authoritarianism on the part of those who demand, and compliance from those who must obey. Whatever the Stanford study showed, this arrangement of strict power-divided rules does not allow for a meeting of minds in any kind of partnership.

One of the things I dislike the most about some of the seemingly arbitrary things that we sometimes do (or are encouraged to do) in teaching is that any situation that devolves to “because I said so” or “because I’ve been told to” requires compliance under the aegis of institutional support, driven by some legitimising framework. This gets in the way of one of the most useful and constructive relationships that we can form – the partnership between educator and student. Now, I’m not suggesting for a second that most students have the maturity or depth of knowledge to devise and run entire courses, but a partnership role allows us to avoid falling into the traps of guard and prisoner. We do have hard limits that we need to adhere to, to make the recognition of education possible in many senses, but building courses that clearly set these limits in a constructive and useful way, rather than a reactive and inauthentic way, pulls us out of “I told you to do this” and allows us to move into “why didn’t you do that?”

(We could talk about allowing individuals mobility to reduce their dependency on external validation from their peers, and hence allow us to encourage the pursuit of individual goals and reduce any fighting over favouritism but I’m not well-versed enough in social identity theory yet to give this much flesh.)

I, as the subject matter expert, am trying to assist the student in developing knowledge within a particular set of subjects and any useful associated areas. If I have created something where, in order to understand the work, you need to complete certain readings and assignments, perform certain actions, and do so in a certain timeframe or lose the opportunity to participate – most students will actually do this. On top of the issues of knowledge, we have the other skills that we are trying to transfer: design, time management, ethics, professionalism, communication skills. This is where it gets hard.

Say, for example, I design a course where you need to finish Assignment 2 before we discuss a certain topic in a lecture/tutorial/studio activity. Therefore, you have a reason to finish Assignment 2 before some deadline. I can set a deadline that is just before the next activity or I can set it a few days before to give people some digestion time prior to looking at it again. Or I can set an earlier deadline to give people practice at time management. However, if Assignment 2 is work that will not be referred to elsewhere in the course, except for the exam, when should I set the deadline?

The problem we have is that allowing deadlines to run late means no marking or feedback until late – this, of course, drives our education design to bring formative work forward but, once again, this only makes sense if that feedback will be useful earlier on.

So, to briefly recap, setting an arbitrary hand-in time purely to make your marking life easier, with no pedagogical driver and no impact on student learning, is understandable but, in many ways, potentially an abuse of your position. (I am all too familiar with the realities of staff and resource shortages, and their impact on when and how we can mark, especially when we start getting told to increase feedback or have all assignments back within time X. But let’s get this straight: formative and summative assessment have different roles and marking loads. We know that we can achieve things with good learning design that far exceed what we can manage with arbitrary action.)

Now let’s look at a more complex issue – late penalties. I have evidence that students change their behaviour when late penalties are fixed on 24 hour barriers. We’ve seen students line up with these and start handing up in response to these new barriers: miss one and you lose even more marks. But have we changed the right behaviour or does this merely lead to a certain form of resignation in the face of arbitrary authority?

Why am I removing marks anyway? If the work is handed in before the time that it’s needed, then, from a knowledge point of view, the aim has been achieved. Which skill am I developing? If you responded with ‘time management’, then providing that we are completely clear on when the work must be handed in to achieve certain requirements AND that we have added an overall factor in the ‘professional’ spectrum of time management, we are probably doing the right thing. If we’re just saying “hand it in on time OR ELSE” then we are conflating issues of knowledge development with issues of compliance and this is where it starts to get murky.

Now, it doesn’t have to get murky, but it’s completely possible in this zone. You risk ending up with academics who won’t accept anything because it’s late (regardless of reason), or students who start acting up (out of defiance) or, potentially worse, students who become completely passive and dependent upon your authority. If self-regulation is supposed to be in play, then we haven’t achieved much by doing this.

Nothing I’ve said should be interpreted as “no deadlines” or “no authority” but what I am saying is that we know what happens when we take a randomly assigned group of people and make one beholden to the other, when there is no really good reason or sense of equality or partnership between them. We’ve seen it time and time again.

Kohn, in “Punished by Rewards”, makes a number of observations, some good and some bad, including that one of our biggest risks is the rupturing of relationships by setting up a disparity of power, where one person controls and the other person complies or seeks to appease, rather than pursuing the actual objective. It’s an interesting way to look at a very challenging problem, giving us more lines along which to think.

I should finish this by noting, again, that Zimbardo’s experiment was flawed in many ways and deriving significance from it is hard. It appears, from the UK recreation, that leadership plays a key part in what happens. It was only when strong leadership started to lead the prison guards down dark paths in the UK recreation that they started to approach what had happened at Stanford. Zimbardo admits that his role in the experiment may not have been all that sensible in many ways, and it may be that his briefing set the scene for what happened. His passive observation as matters deteriorated, with the guards knowing that he was watching, certainly validated their actions. Either way, if it is a fact that one key leader can have so much impact, then that makes what we do even more important – even if it’s occasionally looking at something, thinking about it and saying “No, actually, that’s wrong.”


Silk Purses and Pig’s Ears

There’s an old saying “You can’t make a silk purse out of a pig’s (or sow’s) ear”. It’s the old chestnut that you can’t make something good out of something bad and, when you’re talking about bad grapes or rotten wood, then it has some validity (but even then, not much, as I’ll note later). When it’s applied to people, for any of a large range of reasons, it tends to become an excuse to give up on people or a reason why a lack of success on somebody’s part cannot be traced back to you.

I’m doing a lot of reading in medical and general ethics as part of my preparation for one of the Grand Challenge lectures. The usual names and experiments show up, of course, when you start looking at questionable or non-existent ethics: Milgram, the Nazis, the Stanford Prison Experiment, Unit 731, the Tuskegee Syphilis Experiment, Little Albert and David Reimer. What starts to come through from this study is that, in many of these cases, the people being experimented upon have reached a point in the experimenter’s eyes where they are not people, but merely ‘subjects’ – and all too often in the feudal sense, as serfs, without rights or the ability to challenge what is happening.

But even where the intention is, ostensibly, therapeutic, there is always the question of who is at fault when a therapeutic procedure fails to succeed. In the case of surgical malpractice or negligence, the cause is clear – the surgeon, or a member of her or his team, at some point made a poor decision or acted incorrectly, and thus the fault lies with them. I have been reading up on early psychiatric techniques, as these are full of stories of questionable approaches that were later discredited, and it is interesting how easy it is for some practitioners to wash their hands of their subject because the subject lacked a “good previous personality” – you can’t make a silk purse out of a pig’s ear. In many cases, with this damning judgement, people with psychiatric problems would be shunted off to the wards of mental hospitals.

I refer, in this case, to William Sargant (1907-1988), a British psychiatrist who had an ‘evangelical zeal’ for psychosurgery, deep sleep treatment, electroconvulsive therapy (ECT) and insulin shock therapy. Sargant used narcosis (drug-induced deep sleep) extensively, as he could then carry out a range of procedures on semi-conscious and unconscious patients that they might well have learned to dread had they received them while conscious. Sargant believed that anyone with psychological problems should be treated early and intensively with all available methods and, where possible, all these methods should be combined and applied as necessary. I am not a psychiatrist and I leave it to the psychiatric and psychotherapy community to assess the efficacy and suitability of Sargant’s methods (they disavow them, for the most part, for what it’s worth) but I mention him here because he did not regard failures as being his fault. It is his words that I am quoting in the previous paragraph. People for whom his radical, often discredited, zealous and occasionally lethal experimentation did not work were their own problem because they lacked a “good previous personality”. You cannot, as he was often quoted as saying, make a silk purse out of a pig’s ear.

How often I have heard similar ideas being expressed within the halls of academia and the corridors of schools. How easy a thing it is to say. Well, one might say, we’ve done all that we can with this particular pupil, but… They’re just not very bright. They daydream in class rather than filling out their worksheets. They sleep at their desks. They never do the reading. They show up too late. They won’t hang around after class. They ask too many questions. They don’t ask enough questions. They won’t use a pencil. They only use a pencil. They talk back. They don’t talk. They think they’re so special. Their kind never amounts to anything. They’re just like their parents. They’re just like the rest of them.

“We’ve done all we can but you can’t make a silk purse out of a sow’s ear.”

As always, we can look at each and every one of those problems and ask “Why?” and, maybe, we’ll get an answer that we can do something about. I realise that resources and time are both scarce commodities but, even if we can’t offer these students the pastoral care that they need (and most of those issues listed above are more likely to be social/behavioural than academic anyway), let us stop pretending that we can walk away, blameless, as Sargant did because these students are fundamentally unsalvageable.

Yeah, sorry, I know that I go on about this but it’s really important to keep on hammering away at this point, every time that I see how my own students could be exposed to it. They need to know that the man that they’re working with expects them to do things but that he understands how much of his job is turning complex things into knowledge forms that they can work with – even if all he does is start the process and then he hands it to them to finish.

Do you want to know how to make great wine? Start with really, really good grapes and then don’t mess it up. Want to know how to make good wine? Well, as someone who used to be a reasonable wine maker, you can give me just about anything – good fruit, ok fruit, bad fruit, mouldy fruit – and I could turn it into wine that you would happily drink. I hasten to point out that I worked for good wineries and the vast majority of what I did was making good wine from good grapes, but there were always the moments where you had something that, through someone else’s lack of care or inattention, had got into a difficult spot. Understanding the chemical processes and the nature of wine, and working out how we could recover the wine? That is a challenge. It’s time consuming, it takes effort, it takes a great deal of scholarly knowledge and you have to try things to see if they work.

In the case of wine, while I could produce perfectly reasonable wine from bad grapes, simple chemistry prevents me from leaving in enough of the components that could make a wine great. That is because wine recovery is all about taking bad things out. I see our challenge in education as very different. When we find someone who is in need of our help, it is what we can put in that changes them. Because we are adding, mentoring, assisting and developing, we are not under the same restrictions as we are with wine – starting from anywhere, I should be able to help someone to become a great someone.

The pig’s ears are safe because I think that we can make silk purses out of just about anything that we set our minds to.


Grand Challenges Course: Great (early) progress on the project work.

While I’ve been talking about the project work in my new “Grand Challenge”-based course a lot, I’ve also identified a degree of, for want of a better word, fearfulness on the part of the students. Given that their first project is a large poster with a visualisation of some interesting data, which they have to locate and analyse, and that these are mostly Computer Science students with no visualisation experience, they are understandably slightly concerned. We’ve been having great discussions and lots of contributions but next week is their first pitch and, suddenly, they need a project theme.

I’ve provided a fair bit of guidance for the project pitch, and I reproduce it here in case you’re interested:

Project 1: First Deliverable, the Pitch

Due 2pm, Wednesday, the 8th of August. Because group feedback is such an important part of this project, you must have your pitch ready to present for this session, and it should be the best pitch that you can manage. Allocate at least 10 hours to give yourself enough time to do a good job.

What is the pitch?

A pitch is generally an introduction of a product or service to an audience who knows nothing about it but is often used to expand knowledge and provide a detailed description of something that the audience is already partially familiar with. The key idea is that you wish to engage your audience and convince them that what you are proposing is worth pursuing. In film-making, it’s used to convey an idea to people who need to agree to support it from a financial or authority perspective.

One of the most successful pitches in Hollywood history is (reputedly) the four word pitch used to convince a studio to fund the movie “Twins”. The pitch was “Schwarzenegger. De Vito. Twins.”

You are not trying to sell anything but you are trying to familiarise a group of people with your project idea and communicate enough information that the group can give you useful feedback to improve your project. You need to think carefully about how you will do this and I strongly suggest that you rehearse before presenting. Trust me when I say that very few people are any good at presentation without rehearsal and I will generally be able to tell the amount of effort that you’ve expended. An indifferent presentation says that you don’t care – and then you have to ask why anyone else would be that motivated to help you.

If you like the way I lecture, then you should know that I still rehearse and practice regularly, despite having been teaching for over 20 years.

How will it work?

You will have 10 minutes to present your project outline. During this time you will:

  • Identify, in one short and concise sentence, what your poster is about.
  • Clearly state the purpose.
  • Identify your data source.
  • Answer all of the key questions raised in the tutorial.
  • Identify your starting strategy, based on the tools given in the tutorial, with a rough outline of a timeline.
  • Outline your analysis methodology.
  • Summarise the benefits of this selection of data and presentation – why is it important/useful?
  • Show a rough prototype layout on an A3 format.

We will then take up to 10 minutes to provide you with constructive feedback regarding any of these aspects. Participants will be assessed both on the pitch that they present and the quality of their feedback and critique. Critique guides will be available for this session.

How do I present it?

This is up to you but I would suggest that you summarise the first seven points as a handout, and provide a copy of your A3 sketch, for reference during critique. You may also use presentations (PowerPoint, Keynote or PDF) if you wish, or the whiteboard. As a guideline, I would suggest no more than four slides, not including title, or your poster sketch. You may use paper and just sketch on that – the idea and your ability to communicate it are paramount at this stage, not the artfulness of the rough sketch.

Important Notes

Some people haven’t been getting all of their work ready on time and, up until now, this has had no impact on your marks or your ability to continue working with the group. If you don’t have your project ready, then I cannot give you any marks for your project and you miss out on the opportunity for group critique and response – this will significantly reduce your maximum possible mark for this project.

I am interested in you presenting something that you find interesting or that you feel will benefit from working with – or that you think is important. The entire point of this course is to give you the chance to do something that is genuinely interesting and to challenge yourself. Please think carefully about your data and your approach and make sure that you give yourself the opportunity to make something that you’d be happy to show other people, as a reflection of yourself, your work and what you are capable of.

END OF THE PITCH DESCRIPTION

We then had a session where we discussed ideas, looked at sources and started to think about how we could generate some ideas to build a pitch on. I used small group formation and a bit of role switching and, completely unsurprisingly to the rest of you social constructivists, not only did we gain benefit from the group work but it started to head towards a self-sustaining activity. We went from “I’m not really sure what to do” to something very close to “flow” for the majority of the class. To me it was obvious that the major benefit was that the ice had been broken and, through careful identification of what was to happen with the ideas and a deliberate use of Snow’s cholera map as an example of how powerful a good (but fundamentally simple) visualisation could be, the group was much better primed to work on the activity.

The acid test will be next week but, right now, I’m a lot more confident that I will get a good set of first pitches. Given how much I was holding my breath, without realising it, that’s quite a good thing!


The 1-Year Degree – what’s your reaction?

I’m going to pose a question and I’d be interested in your reaction.

“Is there a structure and delivery mechanism that could produce a competent professional graduate from a degree course such as engineering or computer science, which takes place over a maximum of 12 months including all assessment, without sacrificing quality or content?”

What was your reaction? More importantly, what is the reasoning behind your reaction?

For what it’s worth, my answer is “Not with our current structures but, apart from that, maybe”, which is why one of my side projects is an attempt to place an entire degree’s worth of work into a 12-month span, as a practice exercise for discussing the second- and third-year curriculum review that we’re holding later this year.

Our ‘standard’ estimate for any normal degree program is that a student is expected to have a per-semester load of four courses (at 3 units a course, long story) and each of these courses will require 156 hours from start to finish. (This is based on 10 hours per week, including contact and non-contact, and roughly 36 hours for revision towards examination or the completion of other projects.) Based on this estimate, and setting an upper bound of 40 hours/week for all of the good research-based reasons that I’ve discussed previously, there is no way that I can just pick up the existing courses and drop them into a year. A three-year program has six semesters, with four courses per semester, which gives an hour burden of 24*156 = 3,744. At 40 hours per week, we’d need 93.6 weeks (let’s call that 94), or 1.8 years.

But, hang on, we already have courses that are 6-unit and span two semesters – in fact, we have enormous projects for degree programs like Honours that are worth the equivalent of four courses. Interestingly, rather than having an exam every semester, these have a set of summative and formative assignments embedded to allow the provision of feedback and the demonstration of knowledge and skill acquisition – does this remove the need to have 36 hours for exam study for each semester if we build the assignments correctly?

Let’s assume that it does. Now we have a terminal set of examinations at the end of each year, instead of every semester. Now I have 12 courses at 120 hours each and 12 at 156 hours each. Now we’re down to 3,312 – which is only 1.6 years. Dang. Still not there. But it’s ok, I can see all of you who have just asked “Well, why are you so keen on using examinations if you’re happy with summative assignments testing concepts as you go and then building in the expectation of this knowledge in later modules?” Let’s drop the exam requirement even further to a final set of professional level assessment criteria, carried out at the end of the degree to test high-level concepts and advanced skills. Now, of the 24 courses that a student sits, almost all assessment work has moved into continuous assessment mode, rich in feedback, with summative checkpoints and a final set of examinations as part of the four capstone courses at the end. This gives us 3,024 hours – about 1.45 years.

But this is also ignoring that the first week of many of these courses is required revision after some 6-18 weeks of inactivity as the students go away to summer break or home for various holidays. Let’s assume even further that, with the exception of the first four courses that they do, we build this continuously so that skills and knowledge are reinforced as micro slides, scattered throughout the work, supported with recordings, podcasts, notes, guides and quick revision exercises in the assessment framework. Now I can slice maybe 5 hours off 20 of the courses (the last 20) – cutting me down by another 100 hours and that’s half a month saved, down to 1.4 years.

Of course, I’m ignoring a lot of issues here. I’m ignoring the time it takes someone to digest information but, having raised that, can you tell me exactly how long it takes a student to learn a new concept? This is a trick question, as the answer generally depends upon the question “how are you teaching them?” We know that lectures are one of the worst ways to transfer information, with A/V displays, lectures and listening all having a retention rate of less than 40%. If you’re not retaining, your chances of learning something are extremely low. At the same time, somewhere between 30% and 50% of the time that we allocate to the courses we already teach is spent in traditional lectures – at the time of writing. We can improve retention (of both knowledge and students) when we use group work (50% and higher for knowledge), get the students to practise (75%) or, even better, have them instruct someone else (up to 90%). If we can restructure the ‘empty’ or ‘low transfer’ times into other activities that foster collaboration or constructive student pedagogy, with a role transfer that allows students to instruct each other, then we can potentially greatly improve our usage of time.

If we use this notion and slice, say, 20 hours from each course, because we can get rid of that many contact hours that we were wasting and get the same, if not better, results, we’re down to 2,444 hours – about 1.18 years. And I haven’t even started looking at the notion of concept alignment, where similar concepts are taught across two different courses and could be put in one place, taught once, consistently, and then built upon for the rest of the program. Suddenly, with the same concepts and a potentially improved educational design, we’re looking the 1-year degree in the face.
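The arithmetic behind this chain of estimates is easy to check. A quick sketch follows – the 2,080-hour working year (40 hours a week for 52 weeks) is my assumption for converting hours to years, and the per-course figures are the ones used above:

```python
# Back-of-the-envelope check of the degree-compression figures above.
# Assumed: 24 courses in total and a 2,080-hour working year.
WORK_YEAR = 40 * 52  # 2,080 hours

# Yearly exams only: 12 courses at 120 hours plus 12 at 156 hours.
yearly_exams = 12 * 120 + 12 * 156           # 3,312 hours, ~1.6 years
print(yearly_exams, yearly_exams / WORK_YEAR)

# Continuous assessment with final capstone examinations only.
capstone_only = 3024                         # ~1.45 years
print(capstone_only / WORK_YEAR)

# Trim ~5 revision hours from each of the last 20 courses.
less_revision = capstone_only - 20 * 5       # 2,924 hours, ~1.4 years
print(less_revision, less_revision / WORK_YEAR)

# Convert ~20 low-transfer lecture hours per course into better-used time.
less_lectures = less_revision - 24 * 20      # 2,444 hours, ~1.18 years
print(less_lectures, less_lectures / WORK_YEAR)
```

Every step checks out against the figures in the text, which is reassuring given how many assumptions are stacked on top of each other.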

Now, there will be people who will say “Well, how does the student mature in this time? That’s only one year!” to which my response is “Well, how are you training them for maturity? Where are the developmental exercises? The formative assessment based on careful scaffolding in societal development and intellectual advancement?” If the appeal of the three-year degree is that people will be 19-20 when they graduate, and this is seen as a good thing, then we solve this problem for the 1-year degree by waiting two years before they start!

Having said all of this, and believing that a high-quality 1-year degree is possible, let me conclude by saying that I think it is a terrible idea! University is more than a sequence of assessments and examinations: it is a culture, a place for intellectual exploration and the formation of bonds with like-minded friends. It is not a cram school to turn out a slightly shell-shocked engineer who has worked solidly, and without respite, for 52 weeks. However, my aim was never actually to run a degree in a year; it was to see how I could restructure a program to be able to more easily modularise it, to break myself out of the mental tyranny of a three- or four-year mandate and to focus on learning outcomes, educational design and sound pedagogy. The reason that I am working on this is so that I can produce a sound course structure with which students can engage, regardless of whether they are full-time or not, clearly outlining dependencies and requirements. Yes, if we break this up into part-time study, we need to add revision modules back in – but if we teach it intensively (or on-line) then those aren’t required. This is a way to give students choice and the freedom to come in at any age, with whatever time they have, but without sacrificing the quality of the underlying program. This is a bootstrap program for a developing nation, a quick entry point for people who had to go to work – this is making up for decades of declining enrolments in key areas.

This is going on a war footing against the forces of ignorance.

There are many successful “Open” universities that use similar approaches, but I wanted to go through the exercise myself, to allow me the greatest level of intellectual freedom while looking at our curriculum review. Now, I feel that I can focus on Knowledge Areas for my specifics and on the program as a whole, freed of the binding assumption that there is an inevitable three-year grind ahead for any student. Perhaps one of the greatest benefits for me is the thought that, for students who can come to us for three years, I can put much, much more into the course if they have the time – and these things of interest, of beauty, of intellectual pursuit can replace some of what we’ve lost over the last two decades of change in the University.


Brief but good news

A happy surprise in my mailbox today, but first the background. We’ve been teaching Puzzle Based Learning at Adelaide for several years now, based on Professor Zbigniew Michalewicz’s concept for a course that encouraged problem solving in a domain-free environment. (You can read more details about it by searching for Puzzle Based Learning with the surnames Falkner, Michalewicz and Sooriamurthi – we’ve had work published on this in IEEE Computer and as a workshop at SIGCSE, among several others.) Zbyszek (Adelaide), Raja (Sooriamurthi, a Teaching Professor at CMU) and I teamed up with Professor Ed Meyer (Physics at Baldwin-Wallace) to put together a textbook proposal to help people teach this information.

Great news – our proposal has been accepted by an excellent publishing house who appear to be genuinely excited about the book! As this is my first book, I’m very excited and pleased – but it’s a great reflection on the strength of the team and our composite skills and background, especially with the inter-disciplinary aspects. I’ve seen a lot of exciting work come out of Baldwin-Wallace and, while this is my first time working with Ed, I’m really looking forward to it. (Zbyszek, Raja and I have worked together a lot but I’m still excited to be working with them again!)

Good news after a rather difficult week.


Environmental Impact: Iz Tweetz changing ur txt?

Please, please forgive me for the diabolical title, but I have been wondering about the effects of saturation in different communication environments and Twitter seemed like an interesting place to start. For those who don’t know about Twitter, it’s an online micro-blogging social media service. Connect to it via your computer or phone and you can post a message of up to 140 characters, where each message is called a tweet. What makes Twitter interesting is the use of hashtags and usernames to allow the grouping of these messages by theme (#firstworldproblems, if you’re complaining about the service in Business Class, for example) or to respond to someone (@katyperry – Russell Brand, SRSLY?). Twitter has very significant penetration in the celebrity market and there are often “professional” tweeters for certain organisations.

There is a lot more to say about Twitter, but what I want to focus on is the maximum number of characters available – 140. This limit was set for compatibility with SMS messages and, unsurprisingly, a lot of the abbreviations used on Twitter have come in from the SMS community. I have been restricting myself to ~1,000 words in recent posts (+/-10%, if I’m being honest) and, with the average English word length being approximately 5 characters, then, adding spaces and punctuation to take this to 6, you’d expect my posts to be somewhere in the region of 6,000 characters. Anyone who’s been reading this for a while will know that I love long words and technical terms, so there’s a possibility that it’s up beyond this. So one of my posts, sent as maximum-length tweets, would take up about 43 tweets. How long would that take the average Twitterer?
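As a sketch of that estimate (the ~1,000 words, ~6 characters per word and the 140-character limit are the assumptions above; the ~40-character average tweet length comes from the statistics I discuss further on):

```python
import math

# Rough estimate of how many tweets one of these blog posts would fill.
# Assumptions: ~1,000 words per post, ~5 letters per English word,
# plus ~1 character per word for spaces and punctuation.
post_chars = 1000 * (5 + 1)                  # ~6,000 characters

# At the maximum tweet length of 140 characters:
max_len_tweets = math.ceil(post_chars / 140)
print(max_len_tweets)   # 43

# At the reported *average* tweet length of ~40 characters:
avg_len_tweets = math.ceil(post_chars / 40)
print(avg_len_tweets)   # 150
```

So a single post here is roughly 43 maximum-length tweets, or 150 average-length ones.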

Here’s an interesting site that lists some statistics, from 2009 – things will have changed but it’s a pretty thorough snapshot. Firstly, the more followers you have, the more you tweet (cause and effect not stated!) but, even then, 85% of users update less than once per day, with only 1% updating more than 10 times per day. With the vast majority of users having fewer than 100 followers (people who are subscribed to read all of your tweets), this makes two tweets per day the dominant activity. But that was back in 2009 and Twitter has grown considerably since then. This article updates things a little, but not in the same depth, and gives us two interesting facts. Firstly, that Twitter has grown amazingly since 2009. Secondly, that event reporting now takes place on Twitter – it has become a news and event dissemination point. This is happening to the extent that a Twitter report of an earthquake can expand outwards in the same or slightly less time than the earthquake itself. This has become a bit of a joke, where people will tweet about what is happening to them rather than react to the event.

From Twitter’s own blog, March 2011, we can also see this amazing growth – more people are using Twitter and more messages are being sent. I found another site listing some interesting statistics for Twitter: 225,000,000 users, most tweets around 40 characters long, 40% of users not tweeting but just reading, and the average user still having around 100 followers (115, actually). If the previous behaviour patterns hold, we are still seeing an average of two tweets per day for the majority of users who actually post. But a very large number of people are reading Twitter far more than they ever post.

To summarise, millions of people around the world are exposed to hundreds of messages that are 40 characters long and this may be one of their leading sources of information and exposure to text throughout the day. To put this in context, it would take 150 tweets to convey one of my average posts at the 40-character length and this is a completely different way of reading information because, assuming that the ‘average’ sentence is about 15-20 words, very few of these tweets are going to be ‘full’ sentences. Context is, of course, essential and a stream of short messages, even below sentence length, can be completely comprehensible. Perhaps even sentence fragments? Or three words. Two words? One? (With apologies to Hofstadter!) So there’s little mileage in arguing that tweeting is going to change our semantic framework, although a large amount of what moves through any form of blogging, micro or other, is always going to have its worth judged by external agents who don’t take part in that particular activity and find it wanting. (I blog, you type, he/she babbles.)

But is this shortening of phrase, and our immersion in a shorter sentence structure, actually having an impact on the way that we write or read? Basically, it’s very hard to tell, because this is such a recent phenomenon. Early social media sites, including the BBSs and the multi-user shared environments, did not value brevity as much as they valued contribution and, to a large extent, demonstration of knowledge. There was no mobile phone interaction or SMS link, so the text limit of Twitter wasn’t required. LiveJournal was, if anything, the antithesis of brevity, as the journalling activity was rarely that brief and, sometimes, incredibly long. Facebook enforces some limits but provides notes so that longer messages can be formed but, of course, the longer the message, the longer the time it takes to write.

Twitter is an encourager of immediacy, of thought into broadcast, but this particular messaging mode, the ability to globally yell “I like ice cream and I’m eating ice cream” as one is eating ice cream is so new that any impact on overall language usage is going to be hard to pin down. As it happens, it does appear that our sentences are getting shorter and that we are simplifying the language but, as this poster notes, the length of the sentence has shrunk over time but the average word length has only slightly shortened, and all of this was happening well before Twitter and SMS came along. If anything, perhaps this indicates that the popularity of SMS and Twitter reflects the direction of language, rather than that language is adapting to SMS and Twitter. (Based on the trend, the Presidential address of 2300 is going to be something along the lines of “I am good. The country is good. Thank you.”)

I haven’t had the time that I wanted to go through this in detail, and I certainly welcome more up-to-date links and corrections, but I much prefer the idea that our technologies are chosen, and succeed, based on our existing drives and tastes, rather than the assumption that our technologies are ‘dumbing us down’ or ‘reducing our language use’ and, in effect, driving us. I guess you may say I’m a dreamer.

(But I’m not the only one!)


The Early-Career Teacher

Recently, I mentioned the Australian Research Council (ARC) grant scheme, which recognises that people who have had their PhDs for less than five years are regarded as early-career researchers (ECRs). ECRs now have a separate grant scheme (they used to be handled differently within the general grant application scheme) that recognises the fact that their track records – the number of publications, and activity relative to opportunity – are going to be slimmer than those of more seasoned individuals.

What is interesting about this is that someone who has just finished their PhD will have spent (at least) three years, more likely four, doing research – and, we hope, competent research under guidance for the last two of those years. So, even after someone has spent several years doing research, we accept that it can take up to five more years for them to be recognised as being at the same level as established researchers.

But, for the most part, there is no corresponding recognition of the early-career teacher, which is puzzling given that there is no requirement to meet any teaching standards, or take part in any teaching activities at all, before you are put out in front of a class. You do no teaching (or, at least, are not required to do any) during your PhD in Australia, yet we offer support and recognition of early status for the task that you HAVE been doing – and have no way to recognise the need to build up your teaching.

We discussed ideas along these lines at a high-level meeting that I attended this morning, and I brought up the early-career teacher (and a mentoring program to support it) because someone had raised a similar idea for researchers. Mentoring is very important – it was one of the big HERDSA messages, and almost everywhere I go stresses it – so it’s no surprise that it’s proposed as a means to improve research. But, given the realities of the modern Australian University, where more of our budget comes from teaching than research, it is indicative of the inherent focus on research that I need to propose teaching-specific mentoring in reaction to research-specific mentoring, rather than vice versa.

However, there are successful general mentoring schemes where senior staff are paired with more junior staff to give them help with everything that they need and I quite like this because it stresses the nexus of teaching and research, which is supposed to be one of our focuses, and it also reduces the possibility of confusion and contradiction. But let’s return to the teaching focus.

The impact of an early-career teacher program would be quite interesting because, much as you might not encourage a very raw PhD to leap in with a grant application before there was enough supporting track record, you might have to restrict the teaching activities of ECTs until they had demonstrated their ability, taken certain courses or passed some form of peer assessment. That, in any form, is quite confronting and not what most people expect when they take up a junior lectureship. It is, however, a practical way to ensure that we stress the value of teaching by placing basic requirements on the ability to demonstrate skill within that area! In some areas, as well as practical skill, we need to develop scholarship in learning and teaching as well – can we do this in the first years of the ECT with a course of educational psychology, discipline educational techniques and practica to ensure that our lecturers have the fundamental theoretical basis that we would expect from a school teacher?

Are we dancing around the point? Extending the heresy, should we require something much closer to the Diploma of Education to certify academics as teachers, moving the ECR and the ECT together to give us an Early Career Academic (ECA): someone who spends their first three years being mentored in research and teaching, perhaps even ending up with (some sort of) teaching qualification at the end? (With the increasing focus on quality frameworks and external assessment, I keep waiting for one of our regulatory bodies to slip in a ‘must have a Dip Ed/Cert Ed or equivalent’ clause sometime in the next decade.)

To say that this would require a major restructure in our expectations would be a major understatement, so I suspect that this is a move too far. But I don’t think it’s too much to put limits on the ways that we expose our new staff to difficult or challenging teaching situations, when they have little training and less experience. This would have an impact on a lot of teaching techniques and accepted practices across the world. We don’t make heavy use of Teaching Assistants (TAs) at my Uni but, if we did, a requirement to reduce their load and exposure would immediately push more load back onto someone else. At a time when salary budgets are tight and people are already heavily loaded, this is just not an acceptable solution – so let’s look at this another way.

The way that we can at least start this, without breaking the bank, is to emphasise the importance of teaching and take it as seriously as we take our research: supporting and developing scholarship, providing mentoring and extending that mentoring until we’re sure that the new educators are adapting to their role. These mentors can then give feedback, in conjunction with the staff members, as to what the new staff are ready to take on. Of course, this requires us to carefully determine who should be mentored, and who should be the mentor, and that is a political minefield as it may not be your most senior staff that you want training your teachers.

I am a fairly simple man in many ways. I have a belief that the educational role that we play is not just staff-to-student, but staff-to-staff and student-to-student. Educating our new staff in the ways of education is something that we have to do, as part of our job. There is also a requirement for equal recognition and support across our two core roles: learning and teaching, and research. I’m seeing a lot of positive signs in this direction so I’m taking some heart that there are good things on the nearish horizon. Certainly, today’s meeting met my suggestions, which I don’t think were as novel as I had hoped they would be, with nobody’s skull popping out of their mouth. I take that as a positive sign.

 


The Heart of Darkness

My friend, fellow educator and cousin, Liz, commented on yesterday’s post where I (basically) asked why we waste educational opportunities by being unpleasant or bullying. Here’s something that she wrote in the comments:

How we respond to young people is vitally important. How a parent or teacher responds is so important to the self-esteem of a child/student. There is rarely a call for being brutally blunt or thoughtlessly cruel. But bashing is in style. It’s been in style a long time, long enough for an entire generation to think it is the norm.

The emphasis of that phrase, “But bashing is in style”, is mine, because I couldn’t agree with it more. You can see it where we knock people down for being good in ways that we think we may not be able to attain, while feting people who are wealthy, because somehow we can see ourselves becoming millionaires. Steinbeck, unsurprisingly, said it best, and we paraphrase his longer thoughts on this as:

“Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.” as given in A Short History of Progress (2005) by Ronald Wright.

So there’s surprisingly little bashing of the “haves that we might attain if we are really lucky or play the game in the right way”, but there is a great deal of bashing of visionaries, dreamers, risk-takers, experimenters, those who challenge the status quo and those who dare to dabble within a field in which we consider ourselves expert. I think that list of ‘types’ pretty much describes every single good student I’ve ever had so it’s not that surprising that a large number of the experiences that these students have are negative.

This has not always happened – the forward thinkers, the intellectuals, the artistic manifesto-makers have been highly prized before but, somehow, this seems to have faded away. (I know that every generation complains about this but, with our media saturation and our near-instantaneous communication, I think that the impact of negative feedback and bashing has a far wider reach, as well as being less focused on debate and more on cruelty, destruction and brutality.)

Let me give you an example. I am an artist, across a few different outlets but mainly writing and design, and I am creating a manifesto to describe my intentions in the artistic space, my motives in doing so, and my views on the fusion between creativity and the more rigid aspects of my discipline. The reaction to this, if I tell people, is predominantly negative. Firstly, due to a certain famous manifesto, most people assume that I am making some sort of revolutionary political statement. (The book “100 Artists’ Manifestos” is an excellent reference to get a different view on this.) Secondly, most people assume that I am somehow incapable of doing this – I suspect it’s because they believe that I am my job, or that Computer Scientists can’t be creative. The general reaction is one of “knocking”, a gentle form of dismissive undermining common in Australia, but this is just a polite version of bashing. People don’t believe I can do this and have no problem expressing this in a variety of ways. Fortunately, I’ve reached the point in my career and my art where the need to write a manifesto is based on a desire to explain and to share, so people not understanding why I would do it just tells me that I need to do it. (Of course, calling yourself an artist is a hard one, as well. Am I published? No. Do I have any works on display? No. Do I make my living from it? No. Am I driven to create art? Yes. By my definition, I’m an artist. If I ever sell two paintings, of any kind, I’ve doubled Van Gogh’s lifetime sales. 🙂 )

This is the environment in which my students are learning and growing – and it’s a dark one. If I have noted nothing else from working with the young, it is that they are amazingly fragile at some points. The moments that you have to work with people, when they feel comfortable enough to be open and honest with you, are surprisingly few and far between – being cruel, taking a cheap shot, not having the time, cutting them down, not listening… it’ll have an effect, alright, and it may even be an effect that stays with that student for life. Going back over your memory of your teachers and lecturers, I bet you can remember every single one that changed your life, whether for good or for ill.

I don’t really want to harden my students, to make them into living armour, because I think that is really going to get in the way of them being people. Yes, I need them to be resilient but that’s a very different thing to rigid or tough. I need them to be able to commit to a particular set of ideas, that they choose, and to be able to withstand reasonable argument and debate, because this is the burden of the critical thinker. But I’m always worried that making them insensitive to criticism risks making them easily manipulable and ignorant of useful sources. It’s far too easy to respond to people you see as bashers with bashing – Richard Dawkins and Christopher Hitchens both spring to mind as people who wield words and ideas as weapons in an (on occasion) unnecessarily cruel, dismissive or self-satisfied way. There is a particular smugness of “basher-bashing” that is as repellent as the original action and this is also not a great way to train people that you wish to be out there, sharing and discussing ideas. If I wanted repellently smug and self-serving prose, I’d read Jeremy Clarkson, who is (at least) occasionally funny.

The obvious rejoinder to this is that “well, we need people on our side who are as tough as the opponents” and, frankly, I don’t buy it. That sounds more like revenge to me, with a side order of schadenfreude. If we don’t act to stop it, then we make an environment in which bashing is tolerated and, if we do that, then the most successful basher will win. I’ll tell you right now that it won’t necessarily be the person who is smartest, most correct or most well-prepared – it is far more likely to be the person who is willing to be the most cruel, the utterly vindictive and the inescapable persecutor who will win that battle.

So, long-windedly, I completely agree with Liz and want to finish by emphasising the start of her quote: “How we respond to young people is vitally important. How a parent or teacher responds is so important to the self-esteem of a child/student. There is rarely a call for being brutally blunt or thoughtlessly cruel.”

I am convinced that the majority of educators and parents are doing everything that needs to be done to give a good environment, but we also have to look at the world around us and ask how we can make that better.