No Bricks Without Clay

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (Sherlock Holmes in The Adventure of the Copper Beeches, Sir Arthur Conan Doyle)

“And while we’re on the subject, Watson, you also can’t make those terrible ashtrays beloved of amateur potters, so it’s not all bad.”

I’ve used this quote before but, in this case, I’m talking about students. I cannot produce graduates without first-year students, and I have no first-year students unless interested students come to us from the schools. I’ve just returned from a school visit to Years 10-12 and, regrettably, I don’t have many bricks. Last night I attended a careers night that was a joint event between two fee-paying secondary schools, one all-girls, one co-educational. There were three presentation slots allocated across the night and, because of space limitations, the more interest a discipline attracted, the more slots it was given.

Medicine had all three slots filled, as did law and teaching – all with standing room only. Journalism also filled all three. Media studies had two, sports studies had two. Information technology had one – right at the end of the night – and I had only eight people scheduled to attend. Of these, seven were male and one was female. And that is the total number of bricks that I have from two prominent, highly academic and well-resourced schools in my primary catchment. What puzzles me the most is that one of these schools has one of the richest IT environments in the state and these students are surrounded, every day, by the fruits of the labours of my students. But, as I’ve discussed before, Computing/IT/CS/ICT/IS has a loosely defined identity and doesn’t have the best image to start with.

I’m disappointed by the level of interest but I’m not surprised. The attendees were almost all from Year 10 – the peak of interest in our school body – with one Year 12 student signed up who, I believe, didn’t show up. Regrettably, it became obvious that my placement at the end of the night put me into the burnout zone for several of the students. One attended, put their head down on the table before I’d started speaking and only raised it to leave at the end of the talk. Even the parents looked a little glazed over at the start and it took a lot of showmanship to bring people back into the activity. However, I think that we had a good showing for those people who were there. As always, the problem is the vast number of people who weren’t there.

Tomorrow, I disrupt my schedule completely to fly to Sydney to discuss a systematic, sponsored outreach activity into schools to try to fix this. What I would like to see, in 2-3 years’ time, is full sessions and standing room only for Computing and ICT sessions, with a strong showing from both genders and from under-represented groups. It was useful to be on the ground to see the problem face-to-face but I’d be lying if I said that today wasn’t starting on a demoralised note.

Tomorrow, we try again to fix the problem.


The 1-Year Degree – what’s your reaction?

I’m going to pose a question and I’d be interested in your reaction.

“Is there a structure and delivery mechanism that could produce a competent professional graduate from a degree course such as engineering or computer science, which takes place over a maximum of 12 months including all assessment, without sacrificing quality or content?”

What was your reaction? More importantly, what is the reasoning behind your reaction?

For what it’s worth, my answer is “Not with our current structures but, apart from that, maybe”, which is why one of my side projects is an attempt to place an entire degree’s worth of work into a 12-month span, as a practice exercise for the second- and third-year curriculum review that we’re holding later this year.

Our ‘standard’ estimate for any normal degree program is that a student is expected to carry a per-semester load of four courses (at 3 units a course, long story) and each of these courses will require 156 hours from start to finish. (This is based on 10 hours per week, including contact and non-contact time, plus roughly 36 hours for revision towards examination or the completion of other projects.) Based on this estimate, and setting an upper bound of 40 hours/week for all of the good research-based reasons that I’ve discussed previously, there is no way that I can just pick up the existing courses and drop them into a year. A three-year program has six semesters, with four courses per semester, which gives an hour burden of 24 × 156 = 3,744. At 40 hours per week, we’d need 93.6 weeks (let’s call that 94), or 1.8 years.

But, hang on, we already have courses that are 6-unit and span two semesters – in fact, we have enormous projects for degree programs like Honours that are worth the equivalent of four courses. Interestingly, rather than having an exam every semester, these have a set of summative and formative assignments embedded to allow the provision of feedback and the demonstration of knowledge and skill acquisition – does this remove the need to have 36 hours for exam study for each semester if we build the assignments correctly?

Let’s assume that it does. Now we have a terminal set of examinations at the end of each year, instead of every semester. Now I have 12 courses at 120 hours each and 12 at 156 hours each. Now we’re down to 3,312 – which is only 1.6 years. Dang. Still not there. But it’s ok, I can see all of you who have just asked “Well, why are you so keen on using examinations if you’re happy with summative assignments testing concepts as you go and then building in the expectation of this knowledge in later modules?” Let’s drop the exam requirement even further to a final set of professional level assessment criteria, carried out at the end of the degree to test high-level concepts and advanced skills. Now, of the 24 courses that a student sits, almost all assessment work has moved into continuous assessment mode, rich in feedback, with summative checkpoints and a final set of examinations as part of the four capstone courses at the end. This gives us 3,024 hours – about 1.45 years.

But this is also ignoring the fact that the first week of many of these courses is required revision, after some 6-18 weeks of inactivity as the students go away for summer break or home for various holidays. Let’s assume, further, that, with the exception of the first four courses that they do, we build the program continuously so that skills and knowledge are reinforced as micro slides scattered throughout the work, supported with recordings, podcasts, notes, guides and quick revision exercises in the assessment framework. Now I can slice maybe 5 hours off 20 of the courses (the last 20) – cutting me down by another 100 hours and that’s half a month saved, down to 1.4 years.

Of course, I’m ignoring a lot of issues here. I’m ignoring the time it takes someone to digest information but, having raised that, can you tell me exactly how long it takes a student to learn a new concept? This is a trick question, as the answer generally depends upon the question “how are you teaching them?” We know that lectures are one of the worst ways to transfer information, with A/V displays, lectures and listening all having retention rates of less than 40%. If you’re not retaining, your chances of learning something are extremely low. At the same time, somewhere between 30 and 50% of the time that we allocate to the courses we already teach is spent in traditional lectures – at the time of writing. We can improve retention (of both knowledge and students) when we use group work (50% and higher for knowledge), get the students to practise (75%) or, even better, have them instruct someone else (up to 90%). If we can restructure the ’empty’ or ‘low transfer’ times into other activities that foster collaboration or constructive student pedagogy, with a role transfer that allows students to instruct each other, then we can potentially greatly improve our usage of time.

If we use this notion and slice, say, 20 hours from each course, because we can get rid of that many contact hours that we were wasting and get the same, if not better, results, we’re down to 2,444 hours, about 1.18 years. And I haven’t even started looking at the notion of concept alignment, where similar concepts are taught across two different courses and could be put in one place, taught once, consistently, and then built upon for the rest of the program. Suddenly, with the same concepts and a potentially improved educational design, we’re looking the 1-year degree in the face.
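For readers who want to check the arithmetic, here’s a minimal back-of-the-envelope sketch of the scenarios above. The hour figures are the ones used in this post; the 40-hour week and 52-week year are the assumptions stated earlier, and the scenario names are just my labels for each step.

```python
# A quick check of the hour budgets discussed above (all figures from the post).
HOURS_PER_WEEK = 40   # the upper bound on student workload
WEEKS_PER_YEAR = 52

def years(total_hours):
    """Convert a total hour burden into calendar years at 40 hours/week."""
    return total_hours / HOURS_PER_WEEK / WEEKS_PER_YEAR

scenarios = {
    # 24 courses at the full 156 hours each (120 hours + 36 hours of exam study)
    "baseline": 24 * 156,
    # exams only at the end of each year: 12 courses keep the 36 exam-study hours
    "yearly exams": 12 * 156 + 12 * 120,
    # exams only in the four capstone courses
    "capstone exams only": 4 * 156 + 20 * 120,
    # ...plus ~5 hours of start-of-course revision trimmed from the last 20 courses
    "built-in revision": 4 * 156 + 20 * 120 - 20 * 5,
    # ...plus ~20 low-transfer lecture hours reclaimed from each of the 24 courses
    "restructured contact": 4 * 156 + 20 * 120 - 20 * 5 - 24 * 20,
}

for name, hours in scenarios.items():
    print(f"{name:>20}: {hours:4d} hours = {years(hours):.2f} years")
```

Running it gives the same progression as the text: 3,744 hours (1.8 years) down to 2,444 hours (about 1.18 years).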

Now, there will be people who will say “Well, how does the student mature in this time? That’s only one year!”, to which my response is “Well, how are you training them for maturity? Where are the developmental exercises? The formative assessment based on careful scaffolding in societal development and intellectual advancement?” If the appeal of the three-year degree is that people will be 19-20 when they graduate, and this is seen as a good thing, then we solve this problem for the 1-year degree by waiting two years before they start!

Having said all of this, and believing that a high quality 1-year degree is possible, let me conclude by saying that I think that it is a terrible idea! University is more than a sequence of assessments and examinations; it is a culture, a place for intellectual exploration and the formation of bonds with like-minded friends. It is not a cram school to turn out a slightly shell-shocked engineer who has worked solidly, and without respite, for 52 weeks. However, my aim was never actually to run a course in a year, it was to see how I could restructure a course to be able to more easily modularise it, to break me out of the mental tyranny of a three- or four-year mandate and to focus on learning outcomes, educational design and sound pedagogy. The reason that I am working on this is so that I can produce a sound course structure with which students can engage, regardless of whether they are full-time or not, clearly outlining dependencies and requirements. Yes, if we break this up into part-time, we need to add revision modules back in – but if we teach it intensively (or on-line) then those aren’t required. This is a way to give students choice and the freedom to come in at any age, with whatever time they have, but without sacrificing the quality of the underlying program. This is a bootstrap program for a developing nation, a quick entry point for people who had to go to work – this is making up for decades of declining enrolments in key areas.

This is going on a war footing against the forces of ignorance.

There are many successful “Open” universities that use similar approaches but I wanted to go through the exercise myself, to allow myself the greatest level of intellectual freedom while looking at our curriculum review. Now, I feel that I can focus on Knowledge Areas for my specifics and on the program as a whole, freed of the binding assumption that there is an inevitable three-year grind ahead for any student. Perhaps one of the greatest benefits for me is the thought that, for students who can come to us for three years, I can put much, much more into the course if they have the time – and these things of interest, of beauty, of intellectual pursuit can replace some of the things that we’ve lost in the last two decades of change in the University.


Brief but good news

A happy surprise in my mailbox today, but first the background. We’ve been teaching Puzzle Based Learning at Adelaide for several years now, based on Professor Zbigniew Michalewicz’s concept for a course that encouraged problem solving in a domain-free environment. (You can read more details about it by searching for Puzzle Based Learning with the surnames Falkner, Michalewicz and Sooriamurthi – we’ve had work published on this in IEEE Computer and as a workshop at SIGCSE, among several others.) Zbyszek (Adelaide), Raja (Sooriamurthi, a Teaching Professor at CMU) and I teamed up with Professor Ed Meyer (Physics at Baldwin-Wallace) to put together a textbook proposal to help people teach this information.

Great news – our proposal has been accepted by an excellent publishing house who appear to be genuinely excited about the book! As this is my first book, I’m very excited and pleased – but it’s a great reflection on the strength of the team and our composite skills and background, especially with the inter-disciplinary aspects. I’ve seen a lot of exciting work come out of Baldwin-Wallace and, while this is my first time working with Ed, I’m really looking forward to it. (Zbyszek, Raja and I have worked together a lot but I’m still excited to be working with them again!)

Good news after a rather difficult week.


Reflecting on rewards – is Time Banking a reward or a technique?

The Reward As It Is Often Implemented
(In Advance and Starting With a B)

Enough advocacy for a while, time to think about research again! Given that I’ve just finished Alfie Kohn’s Punished by Rewards, and more on that later, I’ve been looking very carefully at everything I do with students to work out exactly what I am trying to do. One of Kohn’s theses is that we tend to manipulate people towards compliance through extrinsic tools such as incentives and rewards, rather than provide an environment in which their intrinsic motivation dominates and they are driven to work through their own interest and requirements. Under Kohn’s approach, a gold star for sitting quietly achieves little except to say that sitting quietly must be so bad that you need to be bribed to do it, while also developing a taste for gold stars in the student. If someone isn’t sitting quietly, is it because they haven’t mastered sitting quietly (the least rewarding unlockable achievement in any game) or because they are disengaged, bored or failing to understand why they are there? Is it, worse, because they are trying to ask questions about work that they don’t understand or because they are so keen to discuss it that they want to talk? Kohn wants to know WHY people are or aren’t doing things, rather than simply stopping or starting behaviours through threats and bribery.

Where, in this context, does time banking fit? For those who haven’t read me on this before, time banking is described in a few posts I’ve made, with this as one of the better ones to read. In summary, students who hand up work early (and meet a defined standard) get hours in the bank that they can spend at a later date to give themselves a deadline extension – there are a lot of tuneable parameters around this, but that’s the core. I already have a lot of data verifying that roughly a third of students hand in on the last day and 15-18% hand up late. Notably, the hour with the 14th-highest number of hand-ins is the one immediately after the deadline. There’s an obvious problem where people aren’t giving themselves enough time to do the work, but “near-missing” by one hour is a really silly way to lose marks. (We won’t talk about the pedagogical legitimacy of reducing marks for late work at the moment; that’s a related post I hope to write soon. Let’s assume that our learning design requires that work be submitted at a certain time to reinforce knowledge and call that the deadline – the loss, as either marks or knowledge reinforcement, is something that we want to avoid.)

But, by providing a “reward” for handing up early, am I trying to bribe my students into behaviour that I want to see? I think that the answer is “no”, for reasons that I’ll go into.

Firstly, the fundamental concept of time banking is that students have a reason to look at their assignment submission timetable as a whole and hand something up early, because they can then gain more flexibility later on. Under current schemes, unless you provide bonus points, there is no reason for anyone to hand up more than one second early – assuming synchronised clocks. (I object to bonus points for early hand-in for two reasons: it is effectively a means to reward the able or those with more spare time, and it starts to focus people on handing up early rather than on the assignment itself.) This, in turn, appears to lead to a passive, last-minute thinking pattern and we can see the results of that in our collected assignment data – lots and lots of near-miss late hand-ins. Our motivation is to focus the students on the knowledge in the course by making them engage with the course as a whole and empowering them to manage their time rather than merely adhere to our deadlines. We’re not trying to control the students; we’re trying to move them towards self-regulation, where they control themselves.

Secondly, the same amount of work has to be done. There is no ‘reduced workload’ for handing in early, there is only ‘increased flexibility’. Nobody gets anything extra under this scheme that will reinforce any messages of work as something to be avoided. The only way to get time in the bank is to do the assignments – it is completely linked to the achievement that is the core of the course, rather than taking focus elsewhere.

Thirdly, a student can choose not to use it. Under almost every version of the scheme I’ve sketched out, every student gets 6 hours up their sleeve at the start of semester. If they want to just burn that for six late hand-ins that are under an hour late, I can live with that. It will also be very telling if they then turn out to be two hours late because, thinking about it, that’s a very interesting mental model that they’ve created.
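For the programmers reading along, here’s a minimal sketch of how the core ledger might work. The six-hour starting balance is the one mentioned above; the 1:1 earn and spend rates, the automatic covering of a late submission, and the names in the code are all assumptions standing in for the tuneable parameters – one possible reading, not a final design.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TimeBank:
    """A sketch of the time-banking ledger described above.

    Assumed parameters (all tuneable): a 6-hour starting balance, hours
    banked 1:1 for early submissions that meet the quality standard, and
    banked hours spent 1:1 to cover a late submission.
    """
    balance_hours: float = 6.0

    def record_submission(self, deadline: datetime, submitted: datetime,
                          meets_standard: bool = True) -> str:
        hours_late = (submitted - deadline).total_seconds() / 3600
        if hours_late <= 0:  # early or on time
            if meets_standard:
                self.balance_hours += -hours_late
                return f"banked {-hours_late:.1f} h (balance {self.balance_hours:.1f} h)"
            return "on time, but standard not met: nothing banked"
        if hours_late <= self.balance_hours:  # late, but coverable from the bank
            self.balance_hours -= hours_late
            return f"late by {hours_late:.1f} h, covered (balance {self.balance_hours:.1f} h)"
        return f"late by {hours_late:.1f} h: bank exhausted, normal late penalty applies"

# The classic near-miss: one hour late, absorbed by the starting balance.
bank = TimeBank()
due = datetime(2012, 8, 20, 17, 0)
print(bank.record_submission(due, due + timedelta(hours=1)))
```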

But how is it going to affect the student? That’s a really good question. I think that, the way that it’s constructed, it provides a framework for students to work with, one that ties in with intrinsic motivation, rather than a framework that is imposed on students – in fact, moving away from rigidly fixed deadlines that (from our data) don’t even seem to be training people that well anyway is a reduction in manipulative external control.

Will it work? Oh, now, there’s a question. After about a year of thought and discussion, we’re writing it all up at the moment for peer review on the foundations, the literature comparison, the existing evidence and the future plans. I’ll be very interested to see both the final paper and the first responses from our colleagues!


Environmental Impact: Iz Tweetz changing ur txt?

Please, please forgive me for the diabolical title but I have been wondering about the effects of saturation in different communication environments and Twitter seemed like an interesting place to start. For those who don’t know about Twitter, it’s an online micro-blogging social media service. Connect to it via your computer or phone and you can post a message of up to 140 characters, where each message is called a tweet. What makes Twitter interesting is the use of hashtags and usernames to allow these messages to be grouped by theme (#firstworldproblems, if you’re complaining about the service in Business Class, for example) or addressed to someone (@katyperry – Russell Brand, SRSLY?). Twitter has very significant penetration in the celebrity market and there are often “professional” tweeters for certain organisations.

There is a lot more to say about Twitter but what I want to focus on is the maximum number of characters available – 140. This limit was set for compatibility with SMS messages and, unsurprisingly, a lot of the abbreviations used on Twitter have come in from the SMS community. I have been restricting myself to ~1,000 words in recent posts (+/-10%, if I’m being honest) and, with an average word length of approximately five characters for English, taken to six once you add spaces and punctuation, you’d expect my posts to be somewhere in the region of 6,000 characters. Anyone who’s been reading this for a while will know that I love long words and technical terms, so there’s a possibility that it’s up beyond this. So one of my posts, sent as maximum-length tweets, would take up about 43 tweets. How long would that take the average Twitterer?
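As a quick, hedged sanity check on that arithmetic (the word-count and characters-per-word figures are simply the estimates above):

```python
import math

WORDS_PER_POST = 1000   # my self-imposed post length
CHARS_PER_WORD = 6      # ~5-character average English word plus a space/punctuation
TWEET_LIMIT = 140       # Twitter's character limit, inherited from SMS

chars_per_post = WORDS_PER_POST * CHARS_PER_WORD        # ~6,000 characters
print(math.ceil(chars_per_post / TWEET_LIMIT))          # 43 maximum-length tweets
```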

Here’s an interesting site that lists some statistics, from 2009 – things will have changed but it’s a pretty thorough snapshot. Firstly, the more followers you have, the more you tweet (cause and effect not stated!) but, even then, 85% of users update less than once per day, with only 1% updating more than 10 times per day. With the vast majority of users having fewer than 100 followers (people who are subscribed to read all of your tweets), this makes two tweets per day the dominant activity. But that was back in 2009 and Twitter has grown considerably since then. This article updates things a little, though not in the same depth, and gives us two interesting facts. Firstly, Twitter has grown amazingly since 2009. Secondly, event reporting now takes place on Twitter – it has become a news and event dissemination point. This is happening to the extent that reports of an earthquake can spread outwards on Twitter in the same time as, or slightly less time than, the earthquake itself. This has become a bit of a joke, where people will tweet about what is happening to them rather than react to the event.

From Twitter’s own blog, in March 2011, we can also see this amazing growth – more people are using Twitter and more messages are being sent. I found another site listing some interesting statistics for Twitter: 225,000,000 users, most tweets around 40 characters long, 40% of users not tweeting at all but just reading, and the average user still having around 100 followers (115, actually). If the previous behaviour patterns hold, we are still seeing an average of about two tweets per day from the majority of users who actually post. But a very large number of people are reading Twitter far more than they ever post.

To summarise, millions of people around the world are exposed to hundreds of messages that are 40 characters long and this may be one of their leading sources of information and exposure to text throughout the day. To put this in context, it would take 150 tweets to convey one of my average posts at the 40-character length, and this is a completely different way of reading information because, assuming that the ‘average’ sentence is about 15-20 words, very few of these tweets are going to be ‘full’ sentences. Context is, of course, essential and a stream of short messages, even below sentence length, can be completely comprehensible. Perhaps even sentence fragments? Or three words. Two words? One? (With apologies to Hofstadter!) So there’s little mileage in arguing that tweeting is going to change our semantic framework, although a large amount of what moves through any form of blogging, micro or otherwise, is always going to have its worth judged by external agents who don’t take part in that particular activity and find it wanting. (I blog, you type, he/she babbles.)

But is this shortening of phrase, and our immersion in a shorter sentence structure, actually having an impact on the way that we write or read? Basically, it’s very hard to tell because this is such a recent phenomenon. Early social media, including the BBSs and the multi-user shared environments, did not value brevity as much as they valued contribution and, to a large extent, demonstration of knowledge. There was no mobile phone interaction or SMS link, so the text limit of Twitter wasn’t required. LiveJournal was, if anything, the antithesis of brevity, as the journalling activity was rarely that brief and was sometimes incredibly long. Facebook enforces some limits but provides notes so that longer messages can be formed; of course, the longer the message, the longer it takes to write.

Twitter is an encourager of immediacy, of thought into broadcast, but this particular messaging mode – the ability to globally yell “I like ice cream and I’m eating ice cream” as one is eating ice cream – is so new that any impact on overall language usage is going to be hard to pin down. As it happens, it does appear that our sentences are getting shorter and that we are simplifying the language but, as this poster notes, average sentence length has shrunk over time while average word length has only slightly shortened, and all of this was happening well before Twitter and SMS came along. If anything, perhaps this indicates that the popularity of SMS and Twitter reflects the direction of language, rather than that language is adapting to SMS and Twitter. (Based on the trend, the Presidential address of 2300 is going to be something along the lines of “I am good. The country is good. Thank you.”)

I haven’t had the time that I wanted to go through this in detail, and I certainly welcome more up-to-date links and corrections, but I much prefer the idea that our technologies are chosen, and succeed, based on our existing drives and tastes, rather than the assumption that our technologies are ‘dumbing us down’ or ‘reducing our language use’ and, in effect, driving us. I guess you may say I’m a dreamer.

(But I’m not the only one!)


The Early-Career Teacher

Recently, I mentioned the Australian Research Council (ARC) grant scheme, which recognises that people who have had their PhDs for less than five years are regarded as early-career researchers (ECRs). ECRs now have a separate grant scheme (they used to be handled differently within the general application process), which recognises that their track records – the number of publications and level of activity relative to opportunity – are going to be smaller than those of more seasoned individuals.

What is interesting about this is that someone who has just finished their PhD will have spent (at least) three years, more likely four, doing research – and, we hope, competent research under guidance for the last two of those years. So, having spent a couple of years doing research, we then accept that it can take up to five years for people to be recognised as working at the same level as established researchers.

But, for the most part, there is no corresponding recognition of the early-career teacher, which is puzzling given that there is no requirement to meet any teaching standards or take part in any teaching activities at all before you are put out in front of a class. You do no (or are not required to do any) teaching during your PhD in Australia, yet we offer support and recognition of early-career status for the task that you HAVE been doing, while having no way to recognise the need to build up your teaching.

We discussed ideas along these lines at a high-level meeting that I attended this morning, and I brought up the early-career teacher (and a mentoring program to support it) because someone had raised a similar idea for researchers. Mentoring is very important – it was one of the big HERDSA messages and almost everywhere I go stresses it – and it’s no surprise that it’s proposed as a means to improve research. But, given the realities of the modern Australian University, where more of our budget comes from teaching than from research, it is indicative of the inherent focus on research that I needed to propose teaching-specific mentoring in reaction to research-specific mentoring, rather than vice versa.

However, there are successful general mentoring schemes where senior staff are paired with more junior staff to give them help with everything that they need and I quite like this because it stresses the nexus of teaching and research, which is supposed to be one of our focuses, and it also reduces the possibility of confusion and contradiction. But let’s return to the teaching focus.

The impact of an early-career teacher program would be quite interesting because, much as you might not encourage a very raw PhD to leap in with a grant application before there was enough supporting track record, you might have to restrict the teaching activities of ECTs until they had demonstrated their ability, taken certain courses or passed some form of peer assessment. That, in any form, is quite confronting and not what most people expect when they take up a junior lectureship. It is, however, a practical way to ensure that we stress the value of teaching by placing basic requirements on the ability to demonstrate skill within that area! In some areas we need to develop scholarship in learning and teaching as well as practical skill – can we do this in the first years of the ECT with a course of educational psychology, discipline-specific educational techniques and practica, to ensure that our lecturers have the fundamental theoretical basis that we would expect from a school teacher?

Are we dancing around the point? Should we, extending the heresy, require something much closer to the Diploma of Education to certify academics as teachers, merging the ECR and the ECT to give us an Early Career Academic (ECA) – someone who spends their first three years being mentored in research and teaching, perhaps even ending up with (some sort of) teaching qualification at the end? (With the increasing focus on quality frameworks and external assessment, I keep waiting for one of our regulatory bodies to slip in a ‘must have a Dip Ed/Cert Ed or equivalent’ clause sometime in the next decade.)

To say that this would require a major restructure of our expectations would be an understatement, so I suspect that it is a move too far. But I don’t think it’s too much to put limits on the ways that we expose our new staff to difficult or challenging teaching situations when they have little training and less experience. This would have an impact on a lot of teaching techniques and accepted practices across the world. We don’t make heavy use of Teaching Assistants (TAs) at my Uni but, if we did, a requirement to reduce their load and exposure would immediately push more load back onto someone else. At a time when salary budgets are tight and people are already heavily loaded, this is just not an acceptable solution – so let’s look at this another way.

The way that we can at least start this, without breaking the bank, is to emphasise the importance of teaching and take it as seriously as we take our research: supporting and developing scholarship, providing mentoring and extending that mentoring until we’re sure that the new educators are adapting to their role. These mentors can then give feedback, in conjunction with the staff members, as to what the new staff are ready to take on. Of course, this requires us to carefully determine who should be mentored, and who should be the mentor, and that is a political minefield as it may not be your most senior staff that you want training your teachers.

I am a fairly simple man in many ways. I have a belief that the educational role that we play is not just staff-to-student, but staff-to-staff and student-to-student. Educating our new staff in the ways of education is something that we have to do, as part of our job. There is also a requirement for equal recognition and support across our two core roles: learning and teaching, and research. I’m seeing a lot of positive signs in this direction so I’m taking some heart that there are good things on the nearish horizon. Certainly, today’s meeting met my suggestions, which I don’t think were as novel as I had hoped they would be, with nobody’s skull popping out of their mouth. I take that as a positive sign.

 


A Design Challenge, a Grand Design Challenge, if you will.

Question: What is one semester long, designed as a course for students who perform very well academically, has no prerequisites and can be taken by students with no programming exposure and by students with a great deal of programming experience?

Answer: I don’t know but I’m teaching it on Monday.

While I talk about students who perform well academically, that restriction applies only to the first instance of this course. My goal is that any student will be able to take this course, in some form, in the future.

The new course in our School, Grand Challenges in Computer Science, is part of our new degree structure, the Bachelor of Computer Science (Advanced). This adds a lot more project work and advanced concepts, without disrupting the usual (and already excellent) development structure of the degree. One of the challenges of dealing with higher-performing students is keeping them in a sufficiently large and vibrant peer group while also addressing the minor problem that they’re moving at a different pace from many of the people they are friends with. Our solution has been to add additional courses that sit outside the main progression but still provide interesting material for these students, as well as encouraging them to take a more active role in the student and general community. They can spend time with their friends, carry on with their degrees and graduate at the same time, but also stretch themselves to greater depth and into areas that we often don’t have time to deal with.

In case you’re wondering, I know that some of my students read this blog and I’m completely comfortable talking about the new course in this manner because (a) they know that I’m joking about the “I don’t know” from the Answer above and (b) I have no secrets regarding this course. There are some serious challenges facing us as a species. We are now in a position where certain technologies and approaches may be able to help us with this. One of these is the notion of producing an educational community that can work together to solve grand challenges and these students are very much a potential part of this new community.

The biggest challenge for me is that I have such a wide range of students. I have students who potentially have no programming background and students who have been coding for four years. I have students who are very familiar with the School’s practices and the University, and people whose first day is Monday. My solution, of course, is to attack it with good design. But, before a design, we have to know the problem that we’re trying to solve.

The core elements of this course are the six grand challenges as outlined by the NSF, research methods that will support data analysis, the visualisation of large data sources as a grand challenge, and community participation to foster grand challenge communities. I don’t believe that a traditional lecturing design is going to support this very well, especially as the two characteristics that I most want to develop in the students are creativity and critical thinking. I really want all of my students to be able to think their way around, over or through an obstacle and I think that this course is going to be an excellent place to concentrate on this.

I’ve started by looking at my learning outcomes for this course – what do I expect my students to know by the end of this course? Well, I expect them to be able to tell me what the grand challenges are, describe them, and then provide examples of each one. I expect them to be able to answer questions about key areas and, in the areas that we explore in depth, demonstrate this knowledge through the application of relevant skills, including the production of assignment materials to the best of their ability, given their previous experience. Of course, this means that every student may end up performing slightly differently, which immediately means that personalised assessment work (or banded assessment work) is going to be required but it also means that the materials I use will need to be able to support a surface reading, a more detailed reading and a deep reading, where students can work through the material at their own pace.

I don’t want the ‘senior’ students to dominate, so there’s going to have to be some very serious scaffolding, and work from me, to support role fluidity and mutual respect, where the people leading discussion rotate into roles supporting a point, critiquing a point, or taking notes on the point, to make sure that everyone gets a say and that we don’t inhibit the creativity that I’m expecting to see in this course. I will be setting standards for projects that take into account the level of experience of each person, discussed and agreed with the student in advance, based on their prior performance and previous knowledge.

What delights me most about this course is that I will be able to encourage people to learn from each other. Because the major assessment items are all unique to a student, sharing knowledge will not actually lead to plagiarism or copying. Students will be actively discouraged from doing work for each other but, in this case, I have no problem with students helping each other out – as long as the lion’s share of the work is done by the main student. (The wording of this is going to look a lot more formal but that’s a Uni requirement. To quote “The Castle”, “It’s about the vibe.”) Students will regularly present their work for critique and public discussion, with their response to that critique forming a part of their assessment.

I’m trying to start these students thinking about the problems that are out there, while at the same time giving them a set of bootstrapping tools that can set them on the path to investigation and (maybe) solution well ahead of the end of their degrees. This then feeds into their project work in second and third year. (And, I hope, for at least some of them, Honours and maybe PhD beyond.)

Writing this course has been a delight. I have never had so much excuse to buy books and read fascinating things about challenging issues and data visualisation. However, I think that it will be the students’ response to this that will give me something I can then share with other people – their reactions and suggestions for improvement will put a seal of authenticity on this that I can then pack up, reorganise and put out into the world as modules for general first year and high school outreach.

I’m very much looking forward to Monday!


Good Design: Building In Important Features From the Start

The game “Deus Ex” is widely regarded as one of the best computer games that has been made so far. It has won a very large number of “best game” awards and regularly shows up in the top 5 of lists of “amazing games”. Deus Ex was released in 2000, designed and developed by Ion Storm under Warren Spector and Harvey Smith and distributed by Eidos. (I mentioned it before in this post, briefly.) Here is the description of this game from Wikipedia:

Set in a dystopian world during the year 2052, the central plot follows rookie United Nations Anti-Terrorist Coalition agent JC Denton, as he sets out to combat terrorist forces, which have become increasingly prevalent in a world slipping ever further into chaos. As the plot unfolds, Denton becomes entangled in a deep and ancient conspiracy, encountering organizations such as Majestic 12, the Illuminati, and the Hong Kong Triads throughout his journey.

Deus Ex had a cyberpunk theme: a world of shadowy corporations and many corruptions of the human soul, ranging from a generally materialistic culture to body implants producing cyborg entities that no longer had much humanity. While looking a lot like a First-Person Shooter (you see through the character’s eyes and kill things), the game also had a great deal of stealth play (sneaking around trying very hard not to get noticed, shot or both). However, what sets Deus Ex apart from most other games is that the choice of how you solved most of the problems was pretty much left up to you. This was no accident. The fact that you could solve 99% of the problems in the game using different forms of violence, many forms of stealth, or a combination of these was down to the way that the game was designed.

When I was at Game Masters at ACMI, Melbourne, over the weekend, I was able to read the front page of a document entitled “Just What IS Deus Ex” by Warren Spector. Now, unfortunately, they had a “no photographs” rule so I don’t have a copy of it (and, for what it’s worth, I also interpreted that to mean “no tiresome hand transcription onto the iPhone in order to make a replica”) but one of the most obvious and important design features was that they wanted to be able to support player exploration: players’ actions had to have consequences and players needed to be able to make their own plans without feeling constrained by the world. (Fortunately, while not being the actual document, there is an article here where Warren talks about most of the important things. If you’re interested in design, have a look at it after you’ve finished this.) Because of this, a number of the items in the game can be used in quite strange ways and, while it sometimes appears that this is a bug, suddenly you’ll run across an element of the game that makes you realise that the designers knew this was possible.

(Image caption: “Do not climb if red lights active!”)

For example, in the Triad-run Hong Kong of 2052, there is a very tall tower on one edge of the explorable area. There are grenades (LAMs) in the game that adhere ‘magnetically’ to walls and then explode if armed and someone enters their proximity. However, it is possible to use these grenades to climb up walls – assuming you don’t arm them, of course – by sticking them to a wall, getting close enough to hop up, placing another grenade above you and then doing the same thing. With patience, you can climb quite high. Sounds like a bug, right? Yeah, well, that’s what I thought until I climbed to the top of the tower in Hong Kong and found a guy, one of the Non-Player Characters, standing on top.

This was a surprise but it shouldn’t have been. I’d already realised that there was always more than one way to do things and, because the game was designed to make it as easy as possible for me to try many paths to success, the writer had put in early hints designed to discourage a ‘blow everything up’ approach. The skill system makes it relatively easy to make your life a lot easier by working with what is already in the environment rather than trying to do it all yourself.

In terms of the grenades, rather than just being pictures on a wall, they became real world objects when placed and were as solid as any other element. This allowed them to be climbed, and the designers/programmers recognised this by putting a guy on top of a tower that you had no other way to reach (without invoking cheats). The objects in Deus Ex were designed to be as generally usable as possible. The sword could open crates as well as (OK, much better than) a crowbar could, reducing the need to carry two things. Many weapons came with multiple ammunition types, allowing you to customise your loadout to the kind of game you wanted to play. Another nice feature was that there were very few situations of ‘spontaneous creation’, where monsters appear at some point in a scripted scene, which would have enforced a certain approach. If you were crawling in from completely the wrong side, everything would be there and ready, rather than spontaneously appearing when you happened to approach from the ‘triggering’ side.

In short, it felt like a real world. (With the usual caveat regarding it being a real world where you are a killer cyborg in 2052.)

The big advantage of this is that you feel a great deal of freedom in your planning and implementation and, combined with the fact that the game reacts and changes to the decisions that you make, this makes the endings of the game feel very personal – when you finally choose between the three possible endings, you do so feeling like the game is actually going along with the persona that you have set up. This increases the level of engagement, achievement and enjoyment.

One of Mark Guzdial’s recent posts talked about the importance of good design when it comes to constructing instructional materials and I couldn’t agree more. Good design at the start, with a clear idea of what you’re trying to achieve, allows you to build a consistent experience that will allow you and your students to achieve your objectives. Deus Ex is, in my opinion, one of the best games of the 21st century because it started from a simple and clear design document that set out to maximise the degree of influence that the player could feel in the game – everyone who plays Deus Ex takes their own path through it, has their own experience and gets something slightly different out of it.

I’m not saying it’s that easy for educational design as a global issue, but it is a very good reminder of why we should be doing good design at the very beginning of our courses!


The Big Picture and the Drug of Easy Understanding: Part I

There is a tendency to frame artistic works such as films and books inside a larger frame. It’s hard to find a fantasy novel that isn’t “Book 1 of the Mallomarion Epistemology Cycle” or a certain type of mainstream film that doesn’t relate to a previous film (as II, III or higher) or act as a re-interpretation of a film in the face of another canon (the re-re-reboot cycle). There are still independent artistic endeavours within this, certainly, but there is also a strong temptation to assess something’s critical success and then go on to make another version of it, in an attempt to make more money. Some things were always planned as multi-part entities from the early stages (such as the Lord of the Rings books and hence movies); some had multiplicity thrust upon them after unlikely success (yes, Star Wars, I’m looking at you, although you are strangely similar to The Hidden Fortress so you aren’t even the starting point of the cycle).

From a commercial viewpoint, selling something that only sells itself is nowhere near as interesting as selling something that draws you into a consumption cycle. This does, however, have a nasty habit of affecting the underlying works. You only have to look at the relative length of the Harry Potter books, and the quality of editing contained within, to realise that Rowling reached a point where people stopped cutting her books down – even if that led to chapters of aimless meandering in a tent in later books. Books one to three are, to me, far, far better than the later ones, where commercial influence, the desire to have a blockbuster and the pressure of producing works that would continue to bring in more consumers and potentially transfer better to the screen made some (at least for me) detrimental changes to the work.

This is the lure of the Big Picture – that we can place everything inside a grand plan, a scheme laid out from the beginning, and it will validate everything that has gone before while including everything that is yet to come. Thus, all answers will be given, our confusion will turn to understanding and we will get that nice warm feeling from wrapping everything up. In many respects, however, the number of things that are actually developed within a frame like this, and remain consistent, is very small. Stephen King experimented with serial writing (short instalments released regularly) for a while, including the original version of “The Green Mile”. He is a very talented and experienced writer and he still found that he had made errors in already published instalments that he had to either ignore or correct in later instalments. Although he had a clear plan for the work, he introduced errors to public view and only discovered them in later, fuller workings of the text. He notes in the book edition of The Green Mile that one of the most obvious, to him, was having someone scratch their nose with their hand while in a straitjacket. Not having all of the work in front of you leaves you open to these kinds of errors, even when you do have a plan, unless you have implemented everything fully before you deploy it.

So it’s no surprise that we’re utterly confused by the prequels to Star Wars, because (despite Lucas’ protestations), it is obvious that there was not even a detailed sketch of what would happen. The same can be said of the series “Lost” where any consistency that was able to be salvaged from it was a happy accident, as the writers had no idea what half of the early things actually were – it just seemed cool. And, as far as I’m concerned, there is no movie called Highlander 2.

(Image caption: Seriously, this is just someone attempting Photoshop. Anything else is untrue.)

(I should note that this post is Part 1 of 2, but I am writing both parts side by side, to try and prevent myself from depending in Part 2 upon something that I got wrong in Part 1.)

To take this into an educational space, it is tempting to try and construct learning from a sequence of high-reward moments of understanding. Our students are both delighted and delightful when they “get” something – it’s a joy to behold and one of the great rewards of the teacher. But, much like watching TED talks every day won’t turn you into a genius, it is the total construction of the learning experience that provides something that is consistent throughout and does not have to endure any unexpected reversals or contradictions later on. We don’t have a commercial focus here to hook the students. Instead, we want to keep them going throughout the necessary, but occasionally less exciting, foundation work that will build them up to the point where they are ready to go, in Martin Gardner’s words, “A-ha!”

My problem arises if I teach something that, when I develop a later part of the course, turns out not to provide a complete basis, reinterprets the work in a way that doesn’t support a later point, or places the emphasis upon the wrong aspect. Perhaps we are just making the students look at the wrong thing, only to realise later that, had we looked at the details rather than our overall plan, we would have noticed this error. But, by then, it is too late and the wrong message is out there.

This is one of the problems of gamification, as I’ve referred to previously, in that we focus on the drug of understanding as a fiero (fierce joy) moment to the exclusion of the actual education experience that the game and reward elements should be reinforcing. This is one of the problems of stating that something is within a structure when it isn’t: any coincidence of aims or correlation of activities is a happy accident, serendipity rather than strategy.

In tomorrow’s post, I’ll discuss some more aspects of this and the implications that I believe it has for all of us as educators.


A Brief Note on the Blog

My posts recently have been getting longer and longer and I think I’m hitting the point where ‘prolix’ is an applicable adjective: I’m at risk of using so many words that people may not finish (or even start) reading, or may be bored by the posts. Despite the fact that I write quickly, it does take some time to write 2,000 words. I want to write just enough words to carry the point across and make the best use of your time and mine.

I’m going to experiment with posts that are as informative/useful but that are slightly shorter, aiming for 1,000 words as an upper bound and splitting posts thematically where possible to keep to this. At the end of July, assuming I remember, I’m going to review this to see how it’s going. (The risk, of course, is that editing to keep inside this frame will consume far more time than just writing. Believe me, I’m aware of that one!)

As always, feedback is very welcome and I reserve the right to completely forget about this and start writing 10,000 word megaposts again because I’ve become carried away. Thanks for reading!