Legitimisation and Agency: I Believe That’s My Ox on Your Bridge
Posted: December 19, 2012 Filed under: Education | Tags: advocacy, blogging, collaboration, community, curriculum, design, education, educational problem, Generation Why, grand challenge, higher education, learning, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools, workload | 1 Comment

There's an infamous newspaper advertisement that never ran, which marked the entry of IBM into the minicomputer market. A number of companies, Data General principal among them, but including such (historically) powerful players as Digital Equipment Corporation, Prime and Hewlett-Packard, were quite successful in the minicomputer market, growing rapidly and stealing share from IBM's mainframe business. (For an excellent account of these times, I recommend "The Soul of a New Machine" by Tracy Kidder.) IBM finally decided to enter the minicomputer market and, as analysts remarked at the time, IBM's move into minicomputers legitimised the market.
Ed DeCastro, CEO of Data General, had a full-page newspaper advertisement prepared, which I reproduce (mildly bowdlerised to keep my all-ages posting status):
“They Say IBM’s Entry Into the Minicomputer Market Will Legitimize the Industry. The B***ards Say, Welcome.”
The ad never actually ran but was framed and put on Ed’s wall. The point, however, was well and precisely made: IBM’s approval was neither required nor desired, and nobody had set a goal of being legitimised.
Over on Mark's blog, we see that a large number of UK universities are banding together to launch an on-line project, including the highly successful existing player in the analogous space, the Open University, but also some high-powered players such as Southampton and the disturbingly successful St Andrews. As Mark notes in the title, this is a serious change in terms of rallying a UK effort that will produce a competitor (or competitors) to the existing US dominance. As Mark also notes:
Hmm — OxBridge isn’t throwing hats into the rings yet.
And this is a very thoughtful Hmm, because the Universities of Oxford and Cambridge are the impossible-to-ignore legitimising agencies, thanks to their sheer weight on the rubber sheet of UK Academy Spacetime. When it comes to groups of Universities in the UK, and believe me there are quite a few, the Russell Group's 24 member institutions award the lion's share of PhDs and employ 78% of the most highly graded research staff. One of its stated goals is to lead the research efforts of the UK; another is to attract the best staff and students to its member institutions. However, the new on-line project involves Universities both inside and outside the Russell Group, which makes the non-participation of Oxford and Cambridge even more interesting. How can a trans-group on-line proposal bring the best students in – or is this why we aren't seeing involvement from Oxbridge, because of the two-tier perception between traditional and on-line? One can easily argue that Oxford and Cambridge have no need to participate because they are so entrenched in their roles and their success that, as I've noted in a different post, any ranking system that rates them outside, say, the top 5 in the UK has made itself suspect as a ranking, rather than revealed a drop in quality. Oxbridge is at the heart of the UK's tertiary system, and competition to gain entry will remain fierce for the foreseeable future. They have no need to band together with the others in their group or beyond, and it is not about protecting themselves from competitors – they are not really in competition with most of the other Russell Group members, because they are Oxford and Cambridge.
It’s worth noting that Cambridge’s vice-chancellor Leszek Borysiewicz did think that this consortium was exciting, and I quote from the THE article:
“Online education is becoming an important approach which may open substantial opportunities to those without access to conventional universities,” he said.
And that pretty much confirms why Cambridge is happy to stand back – because they are almost the definition of a conventional university, catering to a well-established market for whom attending a bricks-and-mortar University is as important as (if not more important than) the course content or delivery mechanisms. The "Gentleman's Third", the lowest possible passing grade for your degree examinations, indicates a dedication to many things at the University that are, most likely, of a less-than-scholarly nature – but it is precisely for these activities that some people go to Oxford and Cambridge, and it is precisely these non-scholarly activities that we will have great difficulty transferring into a MOOC. There will be no Oxford-Cambridge boat race carried out as a browser-based Flash game, with distributed participants hooked up to rowing machines across the globe, nor will the Footlights be conducted as a Google Hangout (except, of course, highly ironically).
Over time, we'll find out more about the role of tradition and convention in the composition and participation, but let me return to my opening anecdote. We are already dealing with issues of legitimacy in the on-line learning space, whether from pedagogical fatigue, academic cultural inertia, xenophobia, or the fact that some highly vaunted previous efforts have not been very good. The absence of two of the top three Universities in the UK from this fascinating and potentially quite fruitful collaboration makes me think a lot about IBM. I think of someone sitting back, watching things happen, certain in the knowledge that what they do is what the market needs and it is, oh happy day, what they are currently doing. If Oxford and Cambridge ever do come in and anoint the MOOC, if we ever can, then we have the same antique, avuncular approach of patting an entire sector on the head and saying "oh, well done, but the grownups are here now" – and this is unlikely to result in anything good in terms of fellow feeling, or in terms of the transfer and accreditation of students, key challenges if MOOCs are to be taken more seriously. Right now, Oxford and Cambridge are choosing not to step in, and there is no doubt that they will continue to be excellent Universities for their traditional attendees – but is this a sensible long-term survival strategy? Could they be contributing to the exploration of the space in a productive manner by putting their legitimising weight in sooner rather than later, at a time when they are saying "Let's all look at this to see if it's any good", rather than "Oh, hell. Now we have to do something"? Would there be much greater benefit in bringing in their considerable expertise, teaching and research excellence, and resources now, when there is so much room for ground-level innovation?
This is certainly something I'm fearful of in my own system, where the Group of Eight Universities holds most of the research funding, grants most of the higher degrees and, as a goal at least, targets the best staff and students. Our size and tradition can be barriers to agility and innovation, although our recent strategy is obviously trying to set our University on a more innovative and more agile course, and a number of recent local projects are embracing the legitimacy of new learning and teaching approaches. It is, however, very important to remember the example of IBM, and how the holders of tradition may not necessarily be welcomed as a legitimising influence when others have been highly successful at innovating in a new space – a space which the tradition holder deemed beneath them, until reality finally intruded.
It's easy to stand back and say "Well, that's fine for people who can't afford mainframes", but such a stance must be balanced by looking to see whether people still need or want to afford mainframes. I think the future of education is heavily blended – MOOC plus face-to-face is an area where I think we can do great things – but for now it's very interesting to see how we develop as we take more and more steps down this path.
Is It Called Ranking Because It Smells Funny?
Posted: December 15, 2012 Filed under: Education | Tags: advocacy, community, education, educational problem, higher education, measurement, MIKE, ranking, reflection, resources, thinking, times higher education supplement | 3 Comments

Years ago, I was a professional winemaker, which is an awesome job but one with very long hours (that seems to be a trend for me). One of the things that we did a lot in winemaking was to assess the quality of wine, to work out if we'd made what we wanted to, but also to allow us to blend this parcel with that parcel and come up with a better wine. Wine judging, for wine shows, is an important part of getting feedback on the quality of your wine as it's perceived by other professionals. Wine is judged on a 20-point scale most of the time, although some 100-point schemes are in operation. The problem is that this scale is not actually as wide as it might look. Wines below 12/20 are usually regarded as faulty or not at commercial level – so, in reality, most wine shows are working in the range 12-19.5 (a 20 was relatively rare, but I don't know what it's like now). This gets worse for the "100 point" ranges, where Wine Spectator claims to go from 50-100, but James Halliday (a prominent wine critic) rates from 75-100, where 'Good' starts at 80. This is quite unnecessarily confusing, because it means that James Halliday is effectively using a version of the 16 available ranks (12-19.5 at 0.5 intervals) of the 20-point scale, mapped into a higher range.
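To make that concrete, here's a small sketch – my own back-of-envelope arithmetic, not any official conversion – counting the distinct steps in each scale's usable band and showing one way the 20-point band could be remapped linearly into the Halliday range:

```python
def usable_steps(lo, hi, step):
    """Number of distinct scores actually awarded between lo and hi."""
    return int(round((hi - lo) / step)) + 1

# 20-point show scale: faulty below 12, awarded in half-points.
print(usable_steps(12.0, 19.5, 0.5))   # 16 distinct scores

# Halliday-style 100-point scale: rated 75-100 in whole points.
print(usable_steps(75, 100, 1))        # 26 distinct scores

def to_hundred(score_20):
    """Linearly remap the usable 12-19.5 band onto 75-100."""
    return 75 + (score_20 - 12.0) * (100 - 75) / (19.5 - 12.0)

print(to_hundred(12.0))   # 75.0 (barely commercial)
print(to_hundred(19.5))   # 100.0 (top of the show scale)
```

The point the numbers make: both scales collapse to a couple of dozen usable steps once the sub-commercial scores are excluded, whatever range the labels suggest.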
Of course, the numbers are highly subjective, even to a very well-trained palate, because the difference between an 87 and an 88 could be colour, or bouquet, or flavour – so saying that the wine at 88 is better doesn't mean anything unless you know what the rater actually means by that kind of ranking. I used to really enjoy the wine selections of a wine writer called Mark Shields, because he used a straightforward rating system and our palates were fairly well aligned: if Mark liked it, I'd probably like it. This is the dirty secret of any ranking mechanism that has any aspect of subjectivity or weighting built into it – it needs to agree with your interpretation of what is reasonable, or you will always be at odds with it.
In terms of wine, the medal system in use really does give us four basic categories: commercially sound (no medal), better than usual (bronze), much better than usual (silver) and "please pass me another bottle" (gold). On top of that, you effectively have 'best in show', which says that, in this place and from these tasters, this was the best overall in this category. To be frank, the gold wines normally blow away the bronzes and the no-awards, but the line between silver and gold is a little more blurred. The show medals do have one advantage, in that a given class has been inspected by the same people and the wines have actually been compared (in one sense) and ranked. However, if nothing is outstanding then no medals will be awarded, because medals are based on the marks on the 20-point scale: if all the wines come in at 13, there will be no gongs. There doesn't have to be a gold or a silver, or even a bronze, although that would be highly unusual. More subtly, a gold at one show may not even get a bronze at another – another dirty little secret of subjective ranking: sometimes what you are comparing things to makes a very big difference.
Which brings me to my point, which is the ranking of Universities. You're probably aware that there are national and international rankings of Universities across a range of metrics, often related to funding and research, but that the different rankings are in broad rather than exact agreement as to who is 'top', top 5 and so on. The Times Higher Education supplement provides a stunning array of snazzy-looking graphics, with their weightings as to what makes a great University. But, when we look at this, and we appear to have accuracy to one decimal place (ahem), is it significant that Caltech is 1.8 points higher than Stanford? Is this actually useful information in terms of which university a student might wish to attend? Well, teaching (learning environment) makes up 30% of the score, international outlook makes up 7.5%, industry income makes up 2.5%, research (volume, income and reputation) makes up 30% and citations (research influence) make up the last 30%. If we sort by learning environment (because I am a curious undergraduate, say) then the order starts shifting, not at the very top of the list, but certainly further down – Yale would leap to 4th in the US instead of 9th. Once we get out of the top 200, suddenly we have very broad bands and, honestly, you have to wonder why we are still putting together the numbers if the thing that people appear to be worrying about is the top 200. (When you deem worthiness on a rating scale as only being a subset of the available scale, you rapidly turn something that could be considered continuous into something with an increasingly constrained categorical basis.) But let's go to the Shanghai rankings, where Caltech drops from number 1 to number 6. Or the QS World Rankings, which rate Caltech as #10.
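As an aside, those percentages do sum to 100, so the headline figure behaves like a straightforward weighted sum of the component scores. A hedged sketch of that arithmetic – THE's real methodology normalises each indicator before weighting, and these component scores are invented purely for illustration:

```python
# Weighted sum implied by the quoted percentages.
# NOTE: the component scores below are made-up illustrative values,
# not real data for any institution.
weights = {
    'teaching': 0.30,        # learning environment
    'international': 0.075,  # international outlook
    'industry': 0.025,       # industry income
    'research': 0.30,        # volume, income and reputation
    'citations': 0.30,       # research influence
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # sanity: weights total 100%

scores = {'teaching': 95.0, 'international': 60.0, 'industry': 90.0,
          'research': 97.0, 'citations': 99.0}

overall = sum(weights[k] * scores[k] for k in weights)
print(overall)  # one figure, hiding a large spread across components
```

Notice how a weak international outlook (60.0 here) barely dents the single figure – which is exactly why sorting by an individual component reshuffles the order.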
Obviously, there is no doubt about the general class of these universities, but it does appear that the judges are having some difficulty in consistently awarding best-in-class medals. This would be of minor interest, were it not for the fact that these ratings do actually matter: in terms of industry confidence in partnership, in terms of attracting students from outside your home educational system, and in terms of who gets to be the voices that decide what constitutes a good University. It strikes me that broad classes are something that could apply quite well here. Who really cares whether Caltech is 1, 6 or 10 – it's obviously rating well across the board and, barring catastrophe, always will.
So why keep ranking it? What we're currently doing is polishing the door knob of the ranking system, devoting effort to ranking Universities like Caltech, Stanford, Harvard, Cambridge and Oxford, who could not, with any credibility, be ranked low – or we'd immediately think that the ranking mechanism was suspect. So let's stop ranking them, because it's compressing the ranking at a point where the ranking is not even vaguely informational. What would be interesting is more devotion to the bands further down, where a University can assess its global progress against its peers to find out if it's cutting the mustard.
If I put a bottle of Grange (one of Australia's best red wines, and it is pretty much worthy of its reputation, if not its price) into a wine show and it came back with less than 17/20, I'd immediately suspect the rating system and the professionalism of the judges. The question is, of course, why I would put it in other than to win gold medals – what am I actually achieving in this sense? If it's a commercial decision to sell more wine, then I get this; but wine is, after all, just wine, and you and I drink it the same way. Universities, especially when ranked across complex weighted metrics and by different people, are very different products to different people. The single-figure ranking may carry prestige and probably attracts both students and money – but should it? Does it make any sense to be so detailed (one decimal place, indeed) about how one stacks up against another when, in reality, you have almost exponentially separated groups? My University will never 'challenge' Caltech and, if Caltech 'drops' to the graded level of the University of Melbourne (one of our most highly ranked Unis), I'm not sure that the experience will tell Caltech anything other than "Ruh roh!"
If I could summarise all of this, it would be to say that our leaderboard and ranking obsession would be fine, were it not for the amount of time spent on these things, the weight placed upon what is ultimately highly subjective (even the weightings are subjective), and the lack of clarity as to how these rankings can be used to make sensible decisions. Perhaps there is something more useful we could be doing with our time?
Taught for a Result or Developing a Passion
Posted: December 13, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, Bloom, community, curriculum, design, education, educational problem, educational research, ethics, feedback, Generation Why, grand challenge, higher education, learning, measurement, MIKE, PIRLS, PISA, principles of design, reflection, research, resources, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design, workload | Leave a comment

According to a story on the website of the ABC, the Australian national broadcaster, Australian school children are now ranked 27th out of 48 countries in reading, according to the Progress in International Reading Literacy Study (PIRLS), and a quarter of Australia's Year 4 students failed to meet the minimum standard defined for reading at their age. As expected, the Australian government has said "something must be done" and the Australian Federal Opposition has said "you did the wrong thing". Ho hum. Reading the document itself is fascinating, because our fourth graders apparently struggle once we move into the area of interpretation and integration of ideas and information, but do quite well on simple inference. There is a lot of scope for thought about how we are teaching, given that we appear to have a reasonably Bloom-like breakdown in the data, but I'll leave that to the (other) professionals. Another international test, the Programme for International Student Assessment (PISA), which is applied to 15-year-olds and measures reading, mathematics and science, is something that we rank relatively highly in. (And, for the record, we're top 10 on the PISA rankings after a Year 4 ranking of 27th. Either something changes dramatically in the seven years between the two tests, or Year 4 results on PIRLS don't have as much influence on PISA as we might have expected.) We don't yet have the results for the latest PISA round, but we expect them out soon.
What is of greatest interest to me in the linked ABC article is the Oslo University professor, Svein Sjoberg, who points out that comparing educational systems around the globe is potentially too difficult to be meaningful – which is a refreshingly honest assessment in these performance-ridden and leaderboard-focused days. As he says:
“I think that is a trap. The PISA test does not address the curricular test or the syllabus that is set in each country.”
Like all of these tests, PIRLS and PISA measure a student’s ability to perform on a particular test and, regrettably, we’re all pretty much aware, or should be by now, that using a test like this will give you the results that you built the test to give you. But one thing that really struck me from his analysis of the PISA was that the countries who perform better on the PISA Science ranking generally had a lower measure of interest in science. Professor Sjoberg noted that this might be because the students had been encouraged to become result-focused rather than encouraging them to develop a passion.
If Professor Sjoberg is right, then this is not just a tragedy, it's an educational catastrophe – we have started optimising our students to do well in tests while making them less likely to go on and pursue the subjects in which they can get these 'good' marks. If this nasty little correlation holds, then we will have an educational system that dominates in the performance of science in the classroom, but turns out fewer actual scientists – our assessment is no longer aligned with our desired outcomes. Of course, what is important to remember is that the vast majority of these rankings are relative rather than absolute. We are not saying that one group is competent or incompetent; we are saying that one group can perform better or worse on a given test.
Like anything, to excel at a particular task, you need to focus on it, practise it, and (most likely) prioritise it above something else. What Professor Sjoberg’s analysis might indicate, and I realise that I am making some pretty wild conjecture on shaky evidence, is that certain schools have focused the effort on test taking, rather than actual science. (I know, I know, shock, horror) Science is not always going to fit into neat multiple choice questions or simple automatically marked answers to questions. Science is one of the areas where the viva comes into its own because we wish to explore someone’s answer to determine exactly how much they understand. The questions in PISA theoretically fall into roughly the same categories (MCQ, short answer) as the PIRLS so we would expect to see similar problems in dealing with these questions, if students were actually having a fundamental problem with the questions. But, despite this, the questions in PISA are never going to be capable of gauging the depth of scientific knowledge, the passion for science or the degree to which a student already thinks within the discipline. A bigger problem is the one which always dogs standardised testing of any sort, and that is the risk that answering the question correctly and getting the question right may actually be two different things.
Years ago, I looked at the examination for a large company's offering in a certain area (I have no wish to get sued, so I'm being deliberately vague), and it became rapidly apparent that, on occasion, there was a company answer that was not the same as the technically correct answer. The best way to prepare for the test was not to study the established base of the discipline but to read the corporate tracts and practise the skills on the approved training platforms, which often involved a non-trivial fee for training attendance. This was tangential to my role, and I was neither of a sufficiently impressionable age nor bothered enough by it for it to affect me. Time was a factor and incorrect answers cost you marks – so I sat down and learned the 'right way', so that I could achieve the correct results in the right time and then go on to do the work using the actual knowledge in my head.
However, let us imagine someone who is 14 or 15 and, on doing the practice tests for 'test X', discovers that what is important is hitting precisely the right answer in the shortest time – thinking about the problem in depth is not really on the table in a two-hour exam, unless it's highly constrained and students are very well prepared. How does this hypothetical student retain respect for teachers who talk about what science is, the purity of mathematics, or the importance of scholarship, when the correct optimising behaviour is to rote-learn the right answers, or the safe and acceptable answers, and reproduce those on demand? (Looking at some of the tables in the PISA document, we see that the best-performing nations in the top band of mathematical thinking are those with amazing educational systems – the desired range – and those that reputedly place great value on high power-distance classrooms with large volumes of memorisation and received wisdom – which is probably not the desired range.)
Professor Sjoberg makes an excellent point, which is that working out what is in need of fixing, and what is good, about the Australian education system is not going to be achieved by looking at single-figure representations of our international rankings, especially when the rankings contradict each other on occasion! Not all countries are the same, pedagogically, in terms of their educational processes or their power distances, and adjacency of rank is no guarantee that two educational systems are the same (Finland next to Shanghai-China, for instance). What is needed is reflection upon what we think constitutes a good education, and then the provision of meaningful local measures that allow us to work out how our educational system is doing. If we get the educational system right then, provided we keep a wary eye on the tests we use, we should test well. Optimising for the tests takes the effort off the education and puts it all onto the implementation of the test – and, if that is the case, then no wonder people are less interested in a career of learning the right phrase for a short answer, or the correct multiple-choice answer.
Core Values of Education and Why We Have To Oppose “Pranking”
Posted: December 11, 2012 Filed under: Education, Opinion | Tags: advocacy, authenticity, blogging, community, education, educational problem, ethics, feedback, Generation Why, higher education, in the student's head, learning, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools, workload | 2 Comments

I've had a lot of time to think about education this year (roughly 400 hours at current reckoning), so it's not surprising that I have some opinions on what constitutes the key values of education. Of course, as was noted at the Creative Innovations conference I attended, a corporate values statement is a wish list that doesn't necessarily mean much, so I'm going to talk about what I see when education is being performed well. After I've discussed these values, I'm going to argue briefly that they mean stupid stunts (such as the Royal prank, where some thoughtless DJs called up a hospital) should be actions that we identify as cruel and unnecessary interpretations of the term 'entertainment'.
- Truth.
We start from the assumption that we only educate or train our students in what we, reasonably, assume to be the truth. We give the right answers when we know them and we admit it when we don’t. Where we have facts, we use them. When we are standing on opinion, we identify it. When we are telling a story, where the narrative matters more than the contents, we are careful to identify what we are doing and why we are doing it. We try not to deceive, even accidentally, and we do not make a practice of lying, even to spare someone’s feelings. In order to know the truth, we have to know our subject and we try to avoid blustering, derision and appealing to authority when we feel that we are being challenged.
There is no doubt that this can be hard in contentious and emerging areas but, as a primary value, it’s at the core of our educational system. Training someone to recite something that is not true, while still popular in many parts of the world, is indoctrination, not education.
- Respect.
We respect the students that we teach and, in doing this, we prepare them to respect us. We don't assume that they are all the same, that they all learn at the same rate, or that they have had all the preparation that they need for our courses or experiences, nor do we assume that they can take anything that we feel inclined to fling at them. We respect them by treating them as people, as individuals, as vulnerable, emotional and potentially flawed humans. We evaluate their abilities before we test their mettle. We give them space to try again. We do all this because it then allows them, without hypocrisy or obligation, to treat us the same way. Respect of effort and of application does not demand perfection or obsession from either party.
- Fairness. We are objective in our assignment and assessment of work, and generous in our interpretations when such generosity does not compromise truth or respect. We do not give false praise, but we do give all praise that is due, at the same time giving all of the notes for improvement. We strive to ensure that every student has the same high-quality and fair experience, regardless of who they are and what they do. When we define the rules, we stick to them – unless we have erred in their construction, in which case, having fixed the rules, we then offer the best interpretation to every student. Our students acting in error or unfairly does not allow us to reciprocate in kind. The fairness of our system is not conditional upon a student being a perfect person; its strength lies in the fact that it is fair for all, regardless. What we say, we mean, and what we mean, we say. A student's results are ultimately the reflection of their own application to the course, relative to their opportunities to excel. Students are not unfairly punished because we have not bothered to work out if they are prepared for the course (which is very different from their own application of effort inside the course, which is ultimately their responsibility, moderated by the unforeseen and the vagaries of life), nor does the action of one student unduly influence the results of another, except where this is clearly identified and students have sufficient autonomy to control the outcome of the situation.
These stupid pranking stunts on the radio are usually considered acceptable because the person being pranked is contacted after the fact to ask if it can be broadcast. Frankly, I think this is bordering on coercive (because you risk being a bad sport if you don’t participate and I suspect that the radio stations don’t accept a simple first ‘no’) but some may disagree. (It’s worth noting that while the radio station tried to contact the nurses, they failed to get approval to broadcast.)
These pranks are, at heart, valueless lies, usually calculated to embarrass someone or to expose them undertaking a given behaviour. They are neither truthful nor respectful. While this objection often invites the charge of pompous high-horsery ("haven't you got a sense of humour?"), it is important to realise that truly funny things can usually be enjoyed by everyone, and that there is a world of difference between a joke that involves old friends and one that exploits strangers. The second situation just isn't fair. The radio station is setting up a situation that is designed to elicit a response that everyone other than the victim will find amusing, because the victim is somehow funny or vulnerable. Basically, it's unfair. You don't get to laugh at or humiliate someone in a public forum just because you think it's funny – didn't we get over this in primary school? A lack of fairness often leads to situations that are coercive, because we impose cultural norms, or peer pressure, to force people to 'go along with the joke'.
I had a student in my office recently, while another academic who happened to be my wife was helping me clear a backlog of paper, and before I discussed his final mark, I asked my wife if she would mind leaving the room. This was because there was no way I could ask the student if he minded discussing his mark with my wife in the room and not risk the situation being coercive. It’s a really simple thing to fix if you think about it. In order to respect the student’s privacy, I needed to be fair in the way that I controlled his ability to make decisions. Now I’m not worried that this student is easily coerced but that’s not my call to make – it’s not up to me to tell a student if they are going to be comfortable or not.
The Royal prank has clearly identified that we can easily go down very dark and unexpected roads when we start to treat people as props, without sticking to the truth or respecting them enough to think about how they might feel about our actions – and that's patently unfair. If these are our core values, and again many would disagree, then we have to stand up and object when we see them being mucked around with by our society. As educators, we have to draw a line and say "just because you think it's funny doesn't mean that you were right to do it", and we can do that without being humourless party-poopers. We do it because we want to allow people to still be funny, to have fun, to muck around and to have a joke with people that they know – because we've successfully trained them to know when they should stop, because we've correctly instilled the values of truth, respect and fairness.
Brief Stats Update: Penultimate Word Count Notes
Posted: December 8, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, community, data visualisation, design, education, educational problem, educational research, higher education, student perspective, teaching, teaching approaches, thinking, tools, work/life balance, workload, writing Leave a commentI occasionally dump the blog and run it through some Python script deliciousness to find out how many words I’ve written. This is no measure of worth or quality, more a metric of my mania. As I noted in October, I was going to hit what I thought was my year target much earlier than expected. Well, it came and it went and, sure enough, I ploughed through it. At time of writing, on published posts alone, we’re holding at around 1.2 posts/day, 834 words/post and a smidgen over 340,000 words, which puts me (in word count) just past Ayn Rand’s “The Fountainhead” (311,596) but well behind her opus “Atlas Shrugged” (561,996). In terms of Objectivism? Let’s just say that I won’t be putting any kind of animal into that particular fight at the moment.
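For the curious, the kind of extraction-and-count script described here could be sketched roughly as follows. The tag- and entity-stripping regexes are illustrative assumptions on my part, and they are exactly the kind of approximation that caps the accuracy at “about 95%” rather than six significant figures:

```python
import re

def word_count(html_posts):
    """Rough word count across a list of HTML post bodies.

    Markup removal is deliberately crude: tags and entities become
    spaces, so the total is an estimate, not a precise figure.
    """
    total = 0
    for post in html_posts:
        text = re.sub(r"<[^>]+>", " ", post)   # drop HTML tags
        text = re.sub(r"&\w+;", " ", text)     # drop entities like &nbsp;
        total += len(text.split())
    return total

posts = ["<p>Hello world, this is a &nbsp; test post.</p>"]
print(word_count(posts))  # → 7
```

Anything this regex pair misses (embedded widgets, odd markup) slips straight into the count, which is why the number deserves scepticism.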
Now, of course, I can plug in the numbers and see that this puts my final 2012 word count somewhere in the region of 362,000 words. I must admit, there is a part of me that sees that number and thinks “Well, we could make it an even 365,000 and that’s a neat 1000 words/day” but, of course, that’s dumb for several reasons:
- I have not checked in detail exactly how well my extraction software is grabbing the right bits of the text. There are hyperlinks and embellishments that appear to be taken care of, but we are probably only on the order of 95% accuracy here. Yes, I’ve inspected it and I haven’t noticed anything too bad, but there could be things slipping through. After all of this is over, I am going to drag it all together and analyse it properly but, let me be clear, just because I can give you a word count to 6 significant figures, doesn’t mean that it is accurate to 6 significant figures.
- Should I even be counting those sections of text that are quoted? I do like to put quotes in, sometimes from my own work, and this now means I’m either counting something that I didn’t write or I’m counting something that I did write twice!
- Should I be counting the stats posts themselves as they are, effectively, metacontent? This line item is almost above that again! This way madness lies!
- It was never about the numbers in the first place, it was about thinking about my job, my students, my community and learning and teaching. That goal will have been achieved whether I write one word/day from now on or ten thousand!
But, oh, the temptation to aim for that ridiculous and ultimately deceptive number. How silly but, of course, how human to look at the measurable goal rather than the inner achievement or intrinsic reward that I have gained from the thinking process, the writing, the refining of the text, the assembly of knowledge and the discussion.
Sometime after January the 1st, I will go back and set the record straight. I shall dump the blog and analyse it from here to breakfast time. I will release the data to interested (and apparently slightly odd) people if they wish. But, for now, this is not the meter that I should be watching because it is not measuring the progress that I am making, nor is it a good compass that I should follow.
Ebb and Flow – Monitoring Systems Without Intrusion
Posted: November 23, 2012 Filed under: Education | Tags: collaboration, community, curriculum, data visualisation, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, in the student's head, learning, measurement, MIKE, principles of design, reflection, resources, student perspective, SWEDE, teaching, teaching approaches, thinking, tools Leave a commentI’ve been wishing a lot of people “Happy Thanksgiving” today because, despite being frightfully Antipodean, I have a lot of friends and family who are Thanksgiving observers in the US. However, I would know that something was up in the US anyway because I am missing about 40% of my standard viewers on my blog. Today is an honorary Sunday – hooray, sleep-ins all round! More seriously, this illustrates one of the most interesting things about measurement: measuring for long enough to be able to determine when something out of the ordinary occurs. As I’ve already discussed, I can tell when I’ve been linked to a higher-profile blog because my read count surges. I can also tell when I haven’t been using attractive pictures because the count drops by about 30%.

A fruit bat, in recovery, about to drink its special fruit smoothie. (Yes, this is shameless manipulation.)
This is because I know what the day-to-day operation of the blog looks like and I can spot anomalies. When I was a network admin, I could often tell when something was going wrong on the network just because of the way that certain network operations started to feel, and often well before these problems reached the level where they would trigger any sort of alarm. It’s the same for people who’ve lived by the same patch of sea for thirty years. They’ll look at what appears to be a flat sea on a calm day and tell you not to go out – because they can read a number of things from the system and those things mean ‘danger’.
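That sense of “something feels off against the baseline” can be approximated in code. A minimal sketch, assuming view counts arrive as a simple daily series – the window size and threshold here are invented for illustration, not tuned values:

```python
from statistics import mean, stdev

def flag_anomalies(daily_views, window=7, threshold=2.0):
    """Flag day indices whose view count deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    flags = []
    for i in range(window, len(daily_views)):
        baseline = daily_views[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_views[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

views = [100, 98, 103, 101, 99, 102, 100, 60]  # a Thanksgiving-style dip
print(flag_anomalies(views))  # → [7]
```

The experienced admin’s intuition does something far richer than a z-score, of course, but even this toy version only works once you have enough history to form a baseline – which is the point.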
One of the reasons that the network example is useful is because any time you send data through the network to see what happens, you’re actually using the network to do it. So network probes will actually consume network bandwidth and this may either mask or exacerbate your problems, depending on how unlucky you are. However, using the network for day-to-day operations, and sensing that something is off, then gives you a reason to run those probes or to check the counters on your networking gear to find out exactly why the hair on the back of your neck is going up.
I observe the behaviour of my students a lot and I try to gain as much information as I can from what they already give me. That’s one of the reasons that I’m so interested in assignment submissions, because students are going to submit assignments anyway and any extra information I can get from this is a giant bonus! I am running a follow-up Piazza activity on our remote campus and I’m fascinated to be able to watch the developing activity because it tells me who is participating and how they are participating. For those who haven’t heard about Piazza, it’s like a Wiki but instead of the Wiki model of “edit first, then argue into shape”, Piazza encourages a “discuss first and write after consensus” model. I put up the Piazza assignment for the class, with a mid-December deadline, and I’ve already had tens of registered discussions, some of which are leading to edits. Of course, not all groups are active yet and, come Monday, I’ll send out a reminder e-mail and chat to them privately. Instead of sending a blanket mail to everyone saying “HAVE YOU STARTED PIAZZA”, I can refine my contact based on passive observation.
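That “refine my contact based on passive observation” step is trivially mechanisable. A minimal sketch, assuming we already have a per-group post count pulled from Piazza or a mailbox – the data structure and group names here are hypothetical stand-ins:

```python
def groups_needing_nudge(activity_by_group, min_posts=1):
    """Return the IDs of groups whose observed post count falls below
    min_posts, so a reminder can be targeted rather than broadcast."""
    return sorted(g for g, n in activity_by_group.items() if n < min_posts)

# Hypothetical snapshot of observed discussion counts per group
activity = {"group-a": 12, "group-b": 0, "group-c": 3, "group-d": 0}
print(groups_needing_nudge(activity))  # → ['group-b', 'group-d']
```

The interesting part is not the filter but the fact that the data already exists as a by-product of normal student activity.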
The other thing about Piazza is that, once the assignment is over, I can still see all of their discussions, because that’s where I’ve told them to have the discussion! As a result, we can code their answers and track their development, classifying them in terms of their group role, their level of function and so on. For an open-ended, team-based problem, this gives me a great deal of insight into how much understanding my students have of the area and allows me to fine-tune my teaching. Being me, I’m really looking for ways to improve self-regulation mechanisms, as well as uncovering any new threshold concepts, but this non-intrusive monitoring has more advantages than that. I can measure participation by briefly looking at my mailbox to see how many mail messages are foldered under a particular group’s ID, from anywhere, or I can go to Piazza and see it unfolding there. I can step in where I have to, but only when I have to, to get things back on track, and I don’t have to probe or deconstruct a team-formed artefact to see what is going on.
In terms of ebb and flow, the Piazza groups are still unpredictable because I don’t have enough data to tell you what the working pattern is for a successful group. I can tell you that a complete absence of activity is undesirable and, even early on, I could tell you some interesting things about the people who post the most! (There are some upcoming publications that deal with things along these lines and I will post more on these later.) We’ve been lucky enough to secure some Summer students and I’m hoping that at least some of their work will involve looking at dependencies in communication, and ebb and flow, across these systems.
As you may have guessed, I like simple. I like the idea of a single dashboard that has a green light (healthy course), an orange light (sick course) and a red light (time to go back to playing guitar on the street corner) although I know it will never be that easy. However, anything that brings me closer to that is doing me a huge favour, because the less time I have to spend actively probing in the course, the less of my students’ time I take up with probes and the less of my own time I spend not knowing what is going on!
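To make that dashboard idea concrete, a toy sketch of the traffic-light collapse might look like this – the signals and thresholds are pure invention on my part, and choosing real ones is the actual hard problem:

```python
def course_status(participation_rate, anomaly_count):
    """Collapse two hypothetical course-health signals into a single
    traffic-light status: green (healthy), orange (sick), red (guitar
    on the street corner)."""
    if participation_rate >= 0.8 and anomaly_count == 0:
        return "green"
    if participation_rate >= 0.5 and anomaly_count <= 2:
        return "orange"
    return "red"

print(course_status(0.9, 0))  # → green
print(course_status(0.6, 1))  # → orange
print(course_status(0.3, 5))  # → red
```

Even a crude collapse like this is only trustworthy if the inputs come from passive observation rather than intrusive probing, for the reasons above.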
Oh well, the good news is that I think that there are only three more papers to write before the Mayan Apocalypse occurs and at least one of them will be on this. I’ll see if I can sneak in a picture of a fruit bat. 🙂
Verbs and Nouns: Designing a Design
Posted: November 22, 2012 Filed under: Education | Tags: authenticity, community, curriculum, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, in the student's head, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools Leave a commentWe have a very bad habit in Computing of ‘verbing the noun’, where we take a perfectly good noun and repurpose it as a verb. If, in the last few weeks, you’ve googled, facebooked, photoshopped or IMed, then you know what I mean. (Coining new words like this, often genericised trademarks, is not new, as anyone who has hoovered the rug will tell you!) In some cases, we use the same word for the action (to design) as we do for the product (a design) and, especially in the case of design, this can cause trouble because it becomes very easy to ask someone for the product when what you want is the process.
Now, I realise that I do enjoy linguistic shenanigans (anyone who plays with which syllable to stress when saying interstices is spending too much time thinking about language) but this is not some syntactic mumbo jumbo, this is a genuine concern. If I ask a student to submit a design for their program, then I am usually assuming that the artefact submitted will be the product of the design process. However, I have to realise that a student must understand what the design process actually is in order for my instruction (give me a design) to be mapped into the correct action (undertake the design process). We’ve collected a lot of first-year student reflections on design and it is becoming increasingly apparent that there is not a clear link between the verb and noun forms of this very simple word. We can now start to understand why a student would feel frustrated if, when asked for a design, they submit what is effectively a re-writing of their final written program on a separate document with some arrows and we turn around and tell them that “this is not a design”. Well, what did we want? The student has given us a document with stuff on it and the word ‘design’ at the top – what did we expect?
The same is, more subtly, true of the word program. After all, the practice of programming is the production of programs (and the consumption and elimination of problems, but that’s another post). Hence, when I ask a student for a program, or for a solution, I am often not explicitly placing the written instructions into a form that clearly elucidates the process and, as a result, I may miss important constructive steps that could assist the student in understanding and applying the process.
Let’s face it, if you don’t know what you’re doing, or don’t understand that there is a process to follow (the verb form), then any instructions I give you (“make sure you use diagrams”, “clearly label your variables”, “use UML”) are going to be perceived in a way that is grounded in the final product, not the steps along the way. If I can use neo-Piagetian terminology briefly, we’re looking at the magical thinking that we’d normally associate with the pre-operational stage. Not only is the knowledge not sinking in, but we will engender a cargo-cult-like inclusion of features that are found in the artefact but have no connection back to the process at all. We have potentially reached the unpleasant point where students now think that we are deliberately, or unfairly, ignoring work that they provided in direct accordance with our instructions!
Anyone who has ever looked at a design with the steady sinking feeling that comes from reading poorly translated programming language, marked with superfluous arrows and dogged, yet unnecessary, underlining of the obvious, will probably be feeling a pang of empathy at the moment.
So what to do? How do we address this problem? The first step is to remember how fiendishly ambiguous language actually is (if English were easy, we wouldn’t need constrained and artificial programming languages to unambiguously assign meaning for computers) and to be precise about the separation between the process and the product. The design process, which we provide guidance and steps for, will produce a design document. We are luckier in programming because, while you can program and produce a program, you cannot produce a programming! In this case, the clarification is that you have assigned a programming task in order to produce a program. In our heads, we are always clear about what we mean, but it is still amazing how often we ask for a product that is the final stage of a long and difficult process – a process we are intending to teach – describing the desirable characteristics of the end point without considering the road that must be travelled!
On reviewing my own teaching, I’m intending to add more process-based instructions, on the grounds that encouraging a certain behaviour in the production process is more likely to lead to a successful product, than specifying an end product and hoping that the path taken is the ‘right’ one. This isn’t exactly rocket science, it’s well established in how we should be constructing these activities, but it does require the educator to keep a clear head on whether we are discussing the product or process.
When a student has established enough understanding, and hopefully all will by the end of the process, then I can ease back on these linguistic scaffolds and expect a little more “this means that” in their everyday activity, but at the start of the educational process, it is probably better if I always try to consider how I specify these potentially ambiguous noun/verb pairs. After all, if a student could pick this up by osmosis or plain exposure to the final product (or even by neurolinguistic programming through the mere mention of the name of the artefact) then I would be thoroughly unnecessary as an educator!
I strive to reduce ambiguity and this requires me to think, very carefully, about how my words are read by students who are not even in the foothills of mastery. Reorienting my own thinking to clearly separate product from process, and labelling and treating each clearly and separately, is an important reminder to me of how easy it is to confuse students.
Learner Pull and Educator Push
Posted: November 21, 2012 Filed under: Education | Tags: collaboration, community, curriculum, education, educational problem, feedback, higher education, moocs, resources, teaching, teaching approaches, thinking Leave a commentWe were discussing some of the strategic investments that might underpin my University’s progress for the next 5 years (all very hand-wavy as we don’t yet have the confirmed strategy for the next 5 years) and we ended up discussing Learner Pull and Educator Push – in the context of MOOCs, unsurprisingly.
We know that if all we do is push content to people then we haven’t really undertaken any of the learning experience construction that we’re supposed to. If we expect students to mysteriously know what they need and then pull it all towards them, then we’re assuming that students are automatically self-educating and this is, fairly obviously, not universally true or there would have been no need for educational institutions for… hundreds of thousands of years.
What we actually have is a combination of push and pull from both sides, maintaining the right tension if you will, and it’s something that we have to think about the moment that we talk about any kind of information storage system. A library is full of information but you have to know what you’re looking for, where to find out and you have to want to find it! I’ve discussed on other blogs my concerns about the disconnected nature of MOOCs and the possibility of students “cherry picking” courses that are of interest to them but lead nowhere in terms of the construction of a professional level of knowledge.
Mark Guzdial recently responded to a comment of mine to remind me of the Gates Foundation initiative to set up eight foundation courses based on MOOCs but that’s a foundation level focus – how do we get from there to fourth year engineers or computer scientists? Part of the job of the educator is to construct an environment where the students not only want the knowledge but they want, and here’s the tricky bit, the right knowledge. So rather than forcing content down the student’s throat (the incorrect assumption of educator push, in my opinion) we are creating an environment that inspires, guides and excites – and pushing that.
I know that my students have vast amounts of passion and energy – the problem is getting it directed in the right way!
It’s great to be talking about some of these philosophical issues as we look forward over the next 5-10 years because, of course, by itself the IT won’t fix any of our problems unless we use it correctly. As an Associate Dean (IT) and a former systems administrator, I know that spending money on IT is easy but it’s always very easy to spend a lot of money and make no progress. Good, solid, principles help a lot and, while we have a lot of things to sort out, it’s going to be interesting to see how things develop, especially with the concept of the MOOC floating above us.
Unearthing the Community: A Surprisingly Rapid Result
Posted: November 20, 2012 Filed under: Education | Tags: ALTA, blogging, collaboration, community, conventicle, curriculum, education, educational problem, educational research, feedback, higher education, icer, raymond lister, reflection, resources, teaching, teaching approaches, tools 2 CommentsNext Monday I am co-hosting the first Adelaide Computing Education Conventicle, an offshoot of the very successful program in the Eastern states, which encourages the presentation of work that has gone, or is about to go, to conferences and provides a forum for conversations and panel discussions on Computing Education. The term ‘conventicle’ refers to “a secret or unlawful religious meeting, typically of people with nonconformist views” and stems from the initial discussions in Melbourne and Sydney, back when Computing Education was not perhaps as accepted as it is now. The name is retained for gentle amusement and as a link to previous events. To quote my own web page on this:
The Conventicle is a one-day conference about all aspects of teaching computing in higher education, in its practical and theoretical aspects, which includes computer science, information systems, information technology, and branches of both mathematics and statistics. The Conventicle is free and open to all who wish to attend. The format will consist of presentations, discussion forums and opportunities to network over lunch, and morning and afternoon tea.
The Conventicles have a long history in other states, providing a discussion forum for how we teach, why we teach and what we can do better, and giving us an opportunity to share our knowledge at a local level without having to travel to conferences or subscribe to an ever-growing set of journals.
One of my ALTA colleagues set his goal as restarting the conventicles where they had stopped and starting them where they had never been and, combining this with my goal of spreading the word on CSE, we decided to work together and host this informal one-day event. The Australian gravity well is deep and powerful: few of my colleagues get to go to the larger educational conferences, so being able to re-present some key papers, especially when the original presenters can be there, is fantastic. We’re very lucky to have two interstate visitors. Simon, my ALTA colleague, is presenting some of his most recent work, and Raymond Lister, from UTS, is presenting a very interesting paper that I saw him present at ICER. When he mentioned that he might be able to come, I didn’t waste much time encouraging him… and asking him if he’d mind presenting a paper. It appears that I’m learning how to run a conference.
The other good news is that we have a full program! It turns out that many people are itching to talk about their latest projects, their successes, recent papers and about the things that challenge so many of us. I still have space for a lot more people to attend and, with any luck, by this time tomorrow I’ll have the program nailed down. If you’re in the neighbourhood, please check out the web page and let me know if you can come.
I hope to see at least some of the following come out of the First Adelaide Computing Education Conventicle:
- Raised awareness of Computing Education across my faculty and University.
- Raised awareness of how many people are already doing research in this!
- An opportunity for the local community to get together and make connections.
- Some good discussion with no actual blows being landed. 🙂
In the longer term, I’d love to see joint papers, grant applications and all those good things that help us to tick our various boxes. Of course, being me, I also want to learn more, to help other people to learn more (even if it’s just by hosting) and get some benefit for all of our students.
There’s enough time to get it all organised, which is great, but I’ll have a busy Monday next week!