[Edit: Gernot has put a further discussion of the points raised both in the previous post and this one, which you can find here. In this one, Gernot clearly explains why approaches were taken the way they were, how NICTA is benefiting from the ongoing work (as are we) and further identifies that the original article didn’t manage to capture a lot of the detail of what had happened. My thanks again to Professor Heiser for taking the time to respond to this so thoroughly and so patiently!]
I’ve put this back up on the top so that you can read Professor Gernot Heiser’s response to the points I raised in my blog post “Enhancing the Reputation of Australian IT Research – by giving it away?” Gernot’s main point is that I don’t understand the spin-off and VC process, which led me to misinterpret the Australian IT article (an article he also finds fault with). He has taken the time to write a blog post on it that you can read over here. I have also edited the original post to include a reference to this, for the benefit of people reading it from scratch, as I am very serious about the final statement on the piece, namely that:
If Professor Heiser is reading this, then I welcome any clarification that he can make and, if the Australian have miscast this, then I welcome and will publish any supported correction. I sincerely hope that this is merely a miscommunication because the alternative is really rather embarrassing for all concerned.
I also posted a comment on Gernot’s blog that he hasn’t yet had time to moderate (I posted it late last night so I’m expecting that he’s asleep!), so I post it below, because the Internet is immediacy. (You’ll note that I used ‘posted’ instead of ‘written’ for the Australian – I’m becoming contaminated!)
I welcome the correction, as I noted in the original article. However, what was posted in the Australian left, to my reading, some serious questions open and, while you have addressed most of these here, they weren’t addressed in the original article. As I said in my blog post, I was reading the Australian and questioning the content because, in my opinion, it cast this whole situation in a strange light.
I do find it interesting that you find my blog less accurate than the article as I had believed that I had addressed the article specifically and raised questions where I asked you, if you were reading it, to clarify issues. I never claimed to be the final interpreter on this – the note I finished on was (and I quote):
“If Professor Heiser is reading this, then I welcome any clarification that he can make and, if the Australian have miscast this, then I welcome and will publish any supported correction. I sincerely hope that this is merely a miscommunication because the alternative is really rather embarrassing for all concerned.”
I can assure you that my search for clarification and expansion of what now appears to be a misleading piece of reportage was genuine. As you have placed a comment on the blog, which I have now approved, people can read both and now see a true dialogue – the power of the Internet.
One point that you don’t seem to have addressed, which I expect will occur in post 2, is how this specifically enhanced the reputation of Australian IT research. You mention that “NICTA isn’t a software business, it’s a national research lab, which produces world-class research and then gets the results out into the real world for the benefit of the nation.” – we’re obviously on the same page here – so are you arguing benefit to the VCs (as selling the product to an overseas owner is not, to my perspective, of immediate benefit to Australia except as a one-off payment, which is not being returned as originally discussed) or benefit in terms of local jobs retained, despite the foreign ownership? (Although this was not reported in the original article – the intention is to retain the staff of the start-up as a remote division.)
Had I read the original NICTA release, dry and short though it is, I would have had very few questions. What piqued my curiosity was the way that the situation was reported in the paper and, as I believe we both agree, this did raise a number of questions, the vast majority of which you have addressed above.
There are, however, two things that I would like to note. You state that I misinterpret the role of start-ups, when I don’t believe that I refer to them in any way. There may be a subtlety that I’m missing so, again, clarification is welcome. You also say that claims that NICTA sold the labs are wrong, however, from the original article:
“NICTA last week announced it had sold one of its spin-off companies, Open Kernel Labs, that had developed virtualisation security software used on 1.6 billion mobile devices worldwide to the US giant.” (This was the Australian. A similar story was run on ZDNet and elsewhere. On digging, the GD and NICTA pages themselves refer to ‘acquisition’, without stating a seller.)
Given that most of us are not privy to the internal workings of NICTA, you can see how such an interpretation would have arisen, as there are (as you have identified) so many models to choose from. Yes, the investors may have decided it but we on the outside can only go on what we are told and the Australian and ZD reports are pretty decisive – although wrong as it turns out.
Thank you again, very genuinely, for the clarification. I look forward to section 2!
“This is simply the standard VC investment model. If you don’t like it, don’t ask for venture capital!”
And I will address this. I always understood why the VC investors got their money, and why the bankers got theirs. I even understood why NICTA would trade away IP in attractive deals in order to attract funding. What I am asking (and I realise that this may just be me being slow) is how a project that has taken at least a chunk of federal funding, and has then parlayed that into a bigger company, gives a national benefit return on that initial investment, given that the funding was the catalyst for the later financial structures.
I reiterate that I welcome the clarification, as I feel that we have all benefitted from the extra information, but I’m not sure that I introduced anything into that Australian IT story other than dissection and contemplation – again, because it was so damn odd. I am really looking forward to Part 2, which I hope will further discuss the realities of running organisations like NICTA in the 21st Century.
I spent most of today working on the paper that I alluded to earlier where, after over a year of trying to work on it, I hadn’t made any progress. Having finally managed to dig myself out of the pit I was in, I had the mental and timeline capacity to sit down for the 6 hours it required and go through it all.
In thinking about procrastination, you have to take into account something important: most of us work on a hyperbolic model where we expend no effort until the deadline is right upon us and then we put everything in. This is temporal discounting – essentially, we place less importance on things in the future than on the things that are important to us now. For complex, multi-stage tasks spread over time, this is an exceedingly bad strategy, especially if we focus on the deadline of delivery rather than the starting point. If we underestimate the time the task requires and we construct our ‘panic now’ strategy based on our proximity to the deadline, then we are at serious risk of missing the starting point because, when it arrives, it just won’t be that important.
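The discounting effect above can be sketched as a toy calculation. The formula V = A / (1 + kD) is the standard hyperbolic form; the importance scores, the days-to-deadline values and the k parameter here are purely illustrative, not empirical:

```python
# Toy sketch of hyperbolic (temporal) discounting.
# Perceived value V = A / (1 + k * D): A is the task's real importance,
# D is days until the deadline, k controls how steeply we discount.
# k = 1.0 is an arbitrary illustrative choice.

def perceived_value(importance, days_away, k=1.0):
    return importance / (1 + k * days_away)

# A major assignment due in 30 days versus a trivial chore due today:
big_task = perceived_value(importance=100, days_away=30)   # ≈ 3.2
small_task = perceived_value(importance=5, days_away=0)    # 5.0

# The trivial task "feels" more urgent than the far-off important one,
# so the big task's starting point slips past unnoticed.
assert small_task > big_task
```

Run forward a day at a time and the big task’s perceived value only spikes once the deadline is nearly upon us – which is exactly the ‘panic now’ pattern described above.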
Now, let’s increase the difficulty of the whole thing and remember that the more things we have to think about in the present, the greater the risk that we’re going to exceed our capacity for cognitive load and hit the ‘helmet fire’ point – we will be unable to do anything because we’ve run out of the ability to choose what to do effectively. Of course, because we suffer from a hyperbolic discounting problem, we might do things now that are easy to do (because we can see both the beginning and end points inside our window of visibility) and this runs the risk that the things we leave to do later are far more complicated.
This is one of the nastiest implications of poor time management: you might actually not be procrastinating in terms of doing nothing, you might be working constantly but doing the wrong things. Combine this with the pressures of life, the influence of mood and mental state, and we have a pit that can open very wide – and you disappear into it wondering what happened because you thought you were doing so much!
This is a terrible problem for students because, let’s be honest, in your teens there are a lot of important things that are not quite assignments or studying for exams. (Hey, it’s true later too, we just have to pretend to be grownups.) Some of my students are absolutely flat out with activities, a lot of which are actually quite useful, but because they haven’t worked out which ones have to be done now they do the ones that can be done now – the pit opens and looms.
One of the big advantages of reviewing large tasks to break them into components is that you start to see how many ‘time units’ have to be carried out in order to reach your goal. Putting it into any kind of tracking system (even if it’s as simple as an Excel spreadsheet) allows you to see it compared to other things: it reduces the effect of temporal discounting.
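A ‘tracking system’ really can be this simple – the sketch below just totals estimated time units against the hours actually free (the task names, estimates and free-hours figure are all made up for illustration):

```python
# Minimal time-unit tracker: break work into estimated hours and
# compare the total against the hours genuinely available.
# All numbers here are illustrative.

tasks = {
    "literature review": 12,  # hours
    "draft section 2": 8,
    "marking": 10,
    "grant rejoinder": 6,
}

free_hours_this_week = 25

committed = sum(tasks.values())
print(f"Committed: {committed}h, free: {free_hours_this_week}h")
if committed > free_hours_this_week:
    over = committed - free_hours_this_week
    print(f"Overcommitted by {over}h - something has to move.")
```

Seeing the totals side by side makes the overcommitment visible now, rather than leaving it to be discovered at the deadline.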
When I first put in everything that I had to do as appointments in my calendar, I assumed that I had made a mistake because I had run out of time in the week and was, in some cases, triple booked, even after I spilled over to weekends. This wasn’t a mistake in assembling the calendar, this was an indication that I’d overcommitted and, over the past few months, I’ve been streamlining down so that my worst week still has a few hours free. (Yeah, yeah, not perfect, but there you go.) However, there was this little problem that anything that had been pushed into the late queue got later and later – the whole ‘deal with it soon’ became ‘deal with it now’ or ‘I should have dealt with that by now’.
Like students, my overcommitment wasn’t an obvious “Yes, I want to work too hard” commitment – it snuck in as bits and pieces. A commitment here, a commitment there, a ‘yes’, a ‘sure, I can do that’, and because you sometimes have to make decisions on the fly, you suddenly look around and think “What happened?” The last thing I want to do here is lecture; I want to understand how I can take my experience, learn from it, and pass something useful on. The basic message is that we all work very hard and sometimes don’t make the best decisions. For me, the challenge now, knowing this, is how I can construct something that tries to defeat this self-destructive behaviour in my students.
This week marks the time where I hope to have cleared everything on the ‘now/by now’ queue and finally be ahead. My friends know that I’ve said that a lot this year but it’s hard to read and think in the area of time management without learning something. (Some people might argue but I don’t write here to tell you that I have everything sorted, I write here to think and hopefully pass something on through the processes I’m going through.)
One of the things about being a Computer Science researcher who is on the way to becoming a Computer Science Education Researcher is the sheer volume of educational literature that you have to read up on. There’s nothing more embarrassing than having an “A-ha!” moment that turns out to have been covered 50 years ago – the equivalent of saying “Water – when it freezes – becomes this new solid form I call Falkneranium!”
Ahem. So my apologies to all who read my ravings and think “You know, X said that … and a little better, if truth be told.” However, a great way to pick up on other things is to read other people’s blogs because they reinforce and develop your knowledge, as well as giving you links to interesting papers. Even when you’ve seen a concept before, unsurprisingly, watching experts work with that concept can be highly informative.
I was reading Mark Guzdial’s blog some time ago and his post on the Khan Academy’s take on Computer Science appealed to me for a number of reasons, not least for his discussion of scaffolding; in this case, a tutor-guided exploration of a space with students that is based upon modelling, coaching and exploration. Importantly, however, this scaffolding fades over time as the student develops their own expertise and needs our help less. It’s like learning to ride a bike – start with trainer wheels, progress to a running-alongside parent, aspire to free wheeling! (But call a parent if you fall over or it’s too wet to ride home.)
One of my key areas of interest is self-regulation in students – producing students who no longer need me because they are self-aware, reflective, critical thinkers, conscious of how they fit into the discipline and (sufficiently) expert to be able to go out into the world. My thinking around Time Banking is one of the ways that students can become self-regulating – they manage their own time in a mature and aware fashion without me having to waggle a finger at them to get them to do something.
Today, R (postdoc in the Computer Science Education Research Group) and I were brainstorming ideas for upcoming papers over about a 2 hour period. I love a good brainstorm because, for some time afterwards, ideas and phrases come to me that allow me to really think about what I’m doing. Combining my reading of Mark’s blog and the associated links, especially about the deliberate reduction of scaffolding over time, with my thoughts on time management and pedagogy, I had this thought:
If imposed deadlines have any impact upon the development of student timeliness, why do we continue to need them into the final year of undergraduate and beyond? When do the trainer wheels come off?
Now, of course, the first response is that they are an administrative requirement, a necessary evil, so they are (somehow) exempt from a pedagogical critique. Hmm. For detailed reasons that will go into the paper I’m writing, I don’t really buy that. Yes, every course (and program) has a final administrative requirement. Yes, we need time to mark and return assignments (or to provide feedback on those assignments, depending on the nature of the assessment obviously). But all of the data I have says that not only do the majority of students hand up on the last day (if not later), but that they continue to do so into later years – getting later and later as they progress, rather than earlier and earlier. Our administrative requirement appears to have no pedagogical analogue.
So here is another reason to look at these deadlines, or at least at the way that we impose them in my institution. If an entry test didn’t correlate at all with performance, we’d change it. If a degree turned out students who couldn’t function in the world, industry consultation would pretty smartly suggest that we change it. Yet deadlines, which we accept with little comment most of the time, only appear to work when they are imposed but, over time, appear to show no development of the related skill that they supposedly practice – timeliness. Instead, we appear to enforce compliance and, as we would expect from behavioural training on external factors, we must continue to apply the external stimulus in order to elicit the appropriate compliance.
Scaffolding works. Is it possible to apply a deadline system that also fades out over time as our students become more expert in their own time management?
I have two days of paper writing on Thursday and Friday and I’m very much looking forward to the further exploration of these ideas, especially as I continue to delve into the deep literature pile that I’ve accumulated!
“Why should I do it? What’s in it for me?”
How many times have you heard, said or thought the above sentiment, in one form or another? I go to a lot of meetings so I get to hear this one a lot. Reanalysing my interactions with people over the past 12 months or so, it has become apparent how many people are clearly focused on the payoff, and this is usually not related to their intrinsic reward mechanisms.
We get it from students when they ask “Will this be on the test?” (Should I study this? What’s in it for me?) We get it from our colleagues when they look at a new suggestion and say “Well, no-one’s going to do that.” (Which usually means “I wouldn’t do it. What’s in it for me?”) We get it from ourselves when we don’t do something because something else becomes more important – and this is very interesting as it often gives an indicator of where you sit on the work/life balance scale. Where I work, there are a large number of occasions where the rewards mechanisms used can result in actions and thinking patterns that, as an observer, I find both interesting and disturbing.
Let me give you some (very brief) background on how research funding works in Australia. You have a research idea or are inside a group that has some good research ideas. You do research. You discover something. You write it up and get it published in conferences and journals. Repeat this step until you have enough publications to have a credible track record. You can now apply for funding from various bodies, so you spend 3–4 weeks writing up your great grant idea really well, attach your track record evidence as part of your CV, and then wait. In my discipline, ICT, our success rate is very low, and very few of the people who apply for Australian Research Council Discovery Grants get them. Now this is, of course, not a lottery – this is a game of skill! Your grant is rated by other people, you get some feedback, you can respond to this feedback (the rejoinder), and the ratings that you originally received, plus your rejoinder, go forward to a larger panel. Regrettably, there is not much money to go around (most of the 22% of grants that get through across the board are only funded at the 50% level), so an initial poor rating means that your grant is (effectively) dead.
This makes grants scarce and intrinsically competitive, as well as artificially inflated in their perceived value. Receiving a grant will also get you public congratulations, the money and gear (obviously) and an invitation to the best Christmas cocktail party in the University – the Winner’s Circle, in effect. The same is true if you bring in a heap of research cash of any other kind – public praise, money and networking opportunities.
Which, if you think about it, is rather curious because you have just been given a wodge (technical term) of cash that you can use to hire staff and buy gear, travel to conferences, and basically improve your chances of getting another grant – but you then get additional extrinsic rewards, including the chance to meet the other people who have risen to this level. This is, effectively, a double reward and I suppose I wouldn’t have much of a problem with it, except that we start to run into those issues of extrinsic motivation again which risks robbing people of their inclination to do research once those extrinsic rewards dry up. I note that we do have a scheme to improve the grant chances of people who just missed out on getting Australian Research Council (ARC) funding but it is literally for those people who just missed out.
Not getting a grant can be a very negative result, because the absence of success is also often accompanied by feedback that will force you to question the value of your performance to date, rather than just the work that has been submitted.
When an early career researcher looks at the ARC application process and thinks “What’s in it for me?” – the answer is far more likely to be “an opportunity to receive feedback of variable quality for the investment of several weeks of your life, from people with whom you are actively competing” rather than an actual grant. So this is obviously a point where mentoring, support and (yes) seed funding to be able to improve become very important – as it provides an ability to develop skill, confidence and (hopefully) the quality of the work, leading to success in the future. The core here, however, is not to bribe the person into improving, it’s to develop the person in order that they improve. Regrettably, a scheme that is (effectively) rewarding the rewarded does not have a built-in “and lifting up those who aren’t there” component. In fact, taking on a less experienced researcher is far more likely to hinder a more capable applicant’s chances. When a senior researcher looks at assisting a more junior researcher, under the current system, “What’s in it for me?” is mostly “Reduced chance of success.” Given that this may also cut you out of the Winner’s Circle, as funds dry up, as you are no longer successful, as it then gets harder to do the research and hence get grants, combined with the fact that you can only apply for these once a year… it’s a positive disincentive to foster emerging talent, unless that talent is so talented that it probably doesn’t need that much help!
So the extrinsic manipulation here has a built-in feedback loop and is, regrettably, prone to splitting people into two groups (successful and not) very early on, at the risk of those groups staying separated for some time to come.
If the large body of work in the area is to be believed, most people don’t plan with the long term outcomes in mind (hence, being told that if you work hard you might get a grant in five years is unlikely to change anyone’s behaviour) and on top of that, as Kohn posits, praising a successful person is more likely to cause envy and division than any real improvement. How does someone else being praised tell you how to improve from your current position?
So what does all of this hot air mean for my students?
I have just finished removing all ‘attendance-based’ incentive schemes from my courses – there are no marks being given just for showing up in any form, marks are only achieved when you demonstrate that you have acquired knowledge. Achievement will not generate any additional reward – the achievement will be the reward. Feedback is crucial but, and this will be challenging, everything I say or do must provide the students with a way to improve, without resorting to the more vague areas of general praise. I will be interested to see if this appears to have any (anecdotal) effect upon the number of times someone asks “What’s in it for me?”
Resilience is “… the inherent and nurtured capacity of individuals to deal with life’s stresses in ways that enable them to lead healthy and fulfilled lives” (Howard & Johnson, 1999)
Pot metal can be prone to instability over time, as it has a tendency to bend, distort, crack, shatter, and pit with age. (Pot Metal, Wikipedia)
The steel is then tempered, […] which ultimately results in a more ductile and fracture-resistant metal. [S]teel [is] used widely in the construction of roads, railways, other infrastructure, appliances, and buildings. Most large modern structures […] are supported by a steel skeleton. Even those with a concrete structure will employ steel for reinforcing. (Steel, Wikipedia)
Building strength, in terms of people and materials, has been a human pursuit for as long as we have been human. Stronger civilisations were able to resist invaders, bronze swords shattered and deformed under the blows of iron, steel allowed us to build our cities, our ships, our cars and our air travel industries. Steel requires some care in the selection of the initial iron ore that is used and, for a particular purpose, we have to carefully select the alloy components that we will use to produce just the right steel and then our smelting and casting must be done in the right way or we have to start again. Pot metal, on the other hand, can be made out of just about anything, smelted at low temperatures without sophisticated foundry equipment or specialist tools.
One of these metals drives a civilisation – the other one flakes, corrodes, bubbles, fails and can’t be easily glued, soldered or welded. Steel can be transformed, fused and joined, but pot metal can only be as fit as the day it was made and then it starts a relatively rapid descent into uselessness. Pot metal does have one good use, for making prototypes before you waste better metal, but that just confirms its built-in obsolescence.
One of the most important transitions for any student is from external control and motivation to self-regulation, intrinsically motivated and ready to commit to a reasoned course of action. Of course, such intention is going to wither away quickly if the student doesn’t actually have any real resilience. If the student isn’t “tough enough” to take on the world then they are unlikely to be able to achieve much.
The steel industry is an interesting analogue for this. There are many grades of iron ore and some are easier to turn into steel than others because of carbon levels or things like phosphorus contamination. (No, I’m not saying any of our students are contaminated – I’m emphasising regional and graded difference.) While certain ore sources were originally preferred, later developments in technique made it possible to use more and more different starting points. Now, electric arc furnaces can convert pig iron or scrap metal back to new steel easily but require enough power – sensible use of the widest range of resources requires a cheap and plentiful power source.
The educational equivalent of pot metal manufacture is the production of a student who is not ready for the world, is barely fit for one purpose when they graduate and whose skills will degrade over time – because the world develops but their fragile skills base cannot be extended or redeveloped.
Steel, however, can be redeveloped, reworked, extended. We can build ultra-flexible steels, strong steels, hard steels, corrosion resistant steels and we can temper it to make it easier to work with and less likely to break. The steps that we take in the production process are vital but they incur a cost, require careful planning, take skill and can undergo constant improvement if we keep putting effort into the process.
The tempering process is as vital for students as it is for steel because we want the same things. We want a student who will stand strong but be able to bend without breaking. We want a student who is held up by strong ideas, good teaching and a genuine faith in their own abilities – not the rough and ready imitation of completeness that we get from throwing things together.
I’m not suggesting that we heat up our students and throw them into cold oil, as we would quench and temper steel, but it’s important to look at why we heat steel for annealing/tempering and what we intend to achieve. By understanding the steel and the materials science, we know that reaching certain temperatures changes the nature of the material, changing properties such as hardness and ductility. Sometimes we do this to make the material easier to work, sometimes to make it more flexible in use. The key point is that by knowing the material, and by knowing what happens when we apply changes, we can choose what happens. By knowing which factors to combine at key points we can build something incredible.
There’s a lot of literature on resilience, a lot dealing with disadvantaged students, and the words that spring out are things like “attention” and “caring”, “support” and “trust”. Having a positive and high expectation of students helps to build self-esteem and sense of intrinsic worth through the application of extrinsic factors – you don’t have to make life easy for people because all you’re doing then is taking the Pot Metal approach. But making life too hard, through ignorance or carelessness, doesn’t produce resilience. It breaks people.
The notion of the modern steel foundry is probably quite apt here as we’re at a point in our history where we can offer education to most people, with a reasonable expectation of a good outcome. Our processes are steadily improving, resources for assisting students who have previously been disadvantaged are becoming increasingly widespread, students can now study anywhere (to a great extent) and we have a growing focus on educational research as it can be applied back into our teaching institutions. The problem, of course, and as we have already seen with the Electric Arc Furnace, is that smart and powerful machinery needs power to run it.
In this case, that’s us. The high quality students of the future, coming from every possible source, don’t depend upon limited amounts of rare earths and special metals to form the most resilient people. They need us to make sure that we know our students, know how we can build their strength through careful tempering and then make sure that we’re always doing it. The vast majority of the people that I know are doing this and it’s one of the things that gives me great hope for the future. But no more Pot Metal solutions, please!
A very quick one here. I tend to write long, somewhat editorial and personalised, posts on conferences and I realise that this approach is not for everyone. Katrina’s blog has a (generally) much briefer, to the point, style that comes with a reading list so that you can look at the core of the presentation and then go and explore it a bit more for yourself. I realise I’ve linked to it before but often at the end of long posts so you may have missed it as your eyes glaze over. 🙂
It’s another view of HERDSA and educational research that I find really helpful, especially as she puts in far more links than I do! (I’m trying to fix this in my own posts.) Hope that you find it useful as well.
(Edit: The original link was wrong and the link has now been fixed. Apologies!)
I’ve spent the weekend working on papers, strategy documents, promotion stuff and trying to deal with the knowledge that we’ve had some major success in one of our research contracts – which means we have to employ something like four staff in the next few months to do all of the work. Interesting times.
One of the things I love about working on papers is that I really get a chance to read other papers and books and digest what people are trying to say. It would be fantastic if I could do this all the time but I’m usually too busy to tear things apart unless I’m on sabbatical or reading into a new area for a research focus or paper. We do a lot of reading – it’s nice to have a focus for it that temporarily trumps other more mundane matters like converting PowerPoint slides.
It’s one thing to say “Students want you to give them answers”, it’s something else to say “Students want an authority figure to identify knowledge for them and tell them which parts are right or wrong because they’re dualists – they tend to think in these terms unless we extend them or provide a pathway for intellectual development (see Perry 70).” One of these statements identifies the problem, the other identifies the reason behind it and gives you a pathway. Let’s go into Perry’s classification because, for me, one of the big benefits of knowing about this is that it stops you thinking that people are stupid because they want a right/wrong answer – that’s just the way that they think and it is potentially possible to change this mechanism or help people to change it for themselves. I’m staying at the very high level here – Perry has 9 stages and I’m giving you the broad categories. If it interests you, please look it up!
We start with dualism – the idea that there are right/wrong answers, known to an authority. In basic duality, the idea is that all problems can be solved and hence the student’s task is to find the right authority and learn the right answer. In full dualism, there may be right solutions but teachers may be in contention over this – so a student has to learn the right solution and tune out the others.
If this sounds familiar, in political discourse and a lot of questionable scientific debate, that’s because it is. A large amount of scientific confusion is being caused by people who are functioning as dualists. That’s why ‘it depends’ or ‘with qualification’ doesn’t work on these people – there is no right answer and fixed authority. Most of the time, you can be dismissed as having an incorrect view, hence tuned out.
As people progress intellectually, under direction or through exposure (or both), they can move to multiplicity. We accept that there can be conflicting answers, and that there may be no true authority, hence our interpretation starts to become important. At this stage, we begin to accept that there may be problems for which no solutions exist – we move into a more active role as knowledge seekers rather than knowledge receivers.
Then, we move into relativism, where we have to support our solutions with reasons that may be contextually dependent. Now we accept that viewpoint and context may make which solution is better a mutable idea. By the end of this category, students should be able to understand the importance of making choices and also sticking by a choice that they’ve made, despite opposition.
This leads us into the final stage: commitment, where students become responsible for the implications of their decisions and, ultimately, realise that every decision that they make, every choice that they are involved in, has effects that will continue over time, changing and developing.
I don’t want to harp on this too much but this indicates one of the clearest divides between people: those who repeat the words of an authority, while accepting no responsibility or ownership, hence can change allegiance instantly; and those who have thought about everything and have committed to a stand, knowing the impact of it. If you don’t understand that you are functioning at very different levels, you may think that the other person is (a) talking down to you or (b) arguing with you under the same expectation of personal responsibility.
Interesting way to think about some of the intractable arguments we’re having at the moment, isn’t it?