ICER 2012 Day 3 Research Paper Session 5
Posted: September 15, 2012 Filed under: Education | Tags: advocacy, authenticity, community, education, educational research, ethics, Generation Why, higher education, icer, icer 2012, icer2012, in the student's head, learning, negative perceptions, teaching, teaching approaches, tools, universal principles of design

The last of the research paper sessions and, dear reader, I am sure that you are as glad as I that we are here. Reading about an interesting conference that you didn't attend is a bit like receiving a message from a friend talking about how he kissed the person that you always loved from afar. Thanks for the information, but I would rather have been there myself.
This session opened with “Toward a Validated Computing Attitudes Survey” (Allison Elliott Tew, Brian Dorn and Oliver Schneider), where the problems with negative perceptions of the field and hostile classroom environments, combined with people thinking that they would be no good at CS, conspire to prevent students coming in to, or selecting, our discipline. The Computing Attitudes Survey was built, with major modification, from the Colorado Learning Attitudes about Science Survey (CLASS, pronounced C-LASS). To adapt the original survey, some material was just copied across with a word change (computer science replacing physics), some terminology was changed (algorithm for formula) and some discipline specific statements were added. Having established an expert opinion basis for the discipline specific content, students can now see how much they agree with the experts.
There are, as always, contentious issues. "You have to know maths to be able to program" produced a three-way split within the expert group between agreement, disagreement and neutrality. What was interesting, and what I'll be looking at in future, is the evidence of self-defeating thought in many answers (no, not questions – the questions weren't self-defeating, but the answers often were). What was also interesting is that attitudes seem to get worse on the CLASS instrument after you take the course!
Confidence, as simple as “I think I can do this”, plays a fundamental part in determining how students will act. Given the incredibly difficult decisions that a student faces when selecting their degree or concentration, it is no surprise that anyone who thinks “Computing is too hard for me” or “Computing is no use to me” will choose to do something else.
The authors are looking for volunteer sites where they can run these trials again so, after you've read their paper, if you're interested, you should probably e-mail them.
“A Statewide Survey on Computing Education Pathways and Influences: Factors in Broadening Participation in Computing” (Mark Guzdial, Barbara Ericson, Tom McKlin and Shelly Engelman)
The final research paper in the conference dealt with the final evaluation of the Georgia Computes! initiative, which ran from October 2006 to August of this year. This multi-year project cannot be contained in my nervous babbling, but I can talk about the instrument that was presented. Having run summer camps, weekend workshops, competitions, teacher workshops, a teachers' lending library, first-year engagement and seeded first-year summer camps (whew!), the question was: what had been the impact of Georgia Computes!? What factors influence undergraduate enrolment into intro CS courses?
There were many questions and results presented, but I'd like to focus on the top four reasons given, from the survey, as to why students weren't going to undertake a CS Major or Minor:
- I don’t want to do the type of work
- Little interest in the subject matter
- Don’t enjoy Computing Courses
- Don’t have confidence that I would succeed.
Looking at those points after a state-wide and highly successful six-year campaign has finished is very, very sobering for me. What these students are saying is that they cannot see the field as attractive, interesting or enjoyable, and cannot see themselves as capable. But these are all aspects that we can work on, although some of them will require a lot of work.
Two further things that Barb said really struck me. Firstly, if you take encouragement and ability into account, men will tend to be satisfied and continue on if they receive either or both – the factors are not separable for men – but women and minorities need encouragement in order to feel satisfied and to keep going. Secondly, when it comes to encouraging women, male professors are just as effective as female professors.
As a male lecturer, who is very, very clearly aware of the demographic disgrace that is the under-representation of women in CS, the first fact gives me a partial strategy to increase retention (and reinforces a belief I have held anecdotally for some time), while the second fact gives me the agency to assist in this process, as well as greater hope for a steadily increasing female cohort over time.
Overall, a very positive note on which to finish the session papers!
ICER 2012 Day Research Paper Session 4
Posted: September 15, 2012 Filed under: Education | Tags: education, educational research, feedback, Generation Why, higher education, icer, icer 2012, icer2012, in the student's head, reflection, student perspective, teaching, teaching approaches, thinking, tools

This session kicked off with "Ability to 'Explain in Plain English' Linked to Proficiency in Computer-based Programming" (Laurie Murphy, Sue Fitzgerald, Raymond Lister and Renee McCauley (presenting)). I had seen a presentation along these lines at SIGCSE and, if you look at the authors list, this is an excellent example of international collaboration. Does the ability to explain code in plain English correlate with the ability to solve programming problems? The correlation appears to be there, whether or not we train students in Explaining in Plain English, but is this causation?
This raises a core question, addressed in the talk: Do we need to learn to read (trace) code before we learn to write code or vice versa? The early reaction of the Leeds group was that reading code didn’t amount to testing whether students could actually write code. Is there some unknown factor that must be achieved before either or both of these? This is a vexing question as it raises the spectre of whether we need to factor in some measure of general intelligence, which has not been used as a moderating factor.
Worse, we now return to that dreadful hypothesis of "programming as an innate characteristic", where you were either born to program or not. Ray (unsurprisingly) believes that none of the skills in this area (EIPE/programming) are innate and that all of them can be taught. This then raises the question of what the most pedagogically efficient way to do this is!
How Do Students Solve Parsons Programming Problems? — An Analysis of Interaction Traces
Juha Helminen (presenting), Petri Ihantola, Ville Karavirta and Lauri Malmi
This presentation was of particular interest to me because I am currently tearing apart my 1,900 student data corpus to try and determine the point at which students will give up on an activity, in terms of mark benefit, time expended and some other factors. This talk, which looked at how students solved problems, also recorded the steps and efforts that they took in order to try and solve them, which gave me some very interesting insights.
A Parsons problem is one where, given a set of code fragments, a student selects, arranges and composes them into a program in response to a question. Not all of the code fragments present will be required in the final solution. Adding to the difficulty, the fragments require different indentation to assert their execution order as part of the block structure. For those whose eyes just glazed over, this means that it's more than selecting a line to go somewhere: you have to associate it explicitly with other lines as a group. Juha presented a graph-based representation of the students' traversals of the possible solutions for their Parsons problem. Students could ask for feedback immediately to find out how their programs were working and, unsurprisingly, some opted for a lot of "Am I there yet?" querying. Some students queried feedback as many as 62 times for only 7 features, with very short inter-query intervals, which is indicative of permutation programming. (Are we there yet? No. Are we there yet? No. Are we there yet? No. Are we there yet? No.)
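To make the format concrete, here is a minimal sketch of a Parsons problem in Python. This is my own illustration, not material from the paper; the fragments, the distractor and the way indentation is encoded are all invented.

```python
# A minimal, hypothetical Parsons problem: arrange (and indent) fragments
# to print the numbers 0..4. One fragment is a distractor and is not used.
fragments = [
    "for i in range(5):",
    "print(i)",    # must sit one indentation level inside the for loop
    "i = i + 10",  # distractor: not part of the intended solution
]

# The intended solution pairs each chosen fragment with an indentation level.
solution = [
    ("for i in range(5):", 0),
    ("print(i)", 1),
]

def check(attempt, solution):
    """Return True if the student's (fragment, indent) sequence matches."""
    return attempt == solution

# A student asking for feedback after a rearrangement:
attempt = [("for i in range(5):", 0), ("print(i)", 0)]  # wrong indentation
print(check(attempt, solution))  # False: print(i) must sit inside the loop
```

The point is that a student's attempt is a sequence of (fragment, indentation) choices, which is exactly the kind of state that can be captured in an interaction trace.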
The primary pattern of development was linear, with block structures forming the first development stages, but there were a lot of variations. Cycles (returning to the same point) also occurred in the development process, but it was hard to tell whether this was a deliberate reset pattern or one where permutation programming had accidentally returned the programmer to the same state. (Asking the students WHY this had occurred would be an interesting survey question.)
There were some good comments from the audience, including the suggestion of correlating good and bad states with good and bad outcomes, using Markov chain analysis to look for patterns. Another suggested improvement was recording the time taken before the first move, to capture the (possible) impact of cognition on the process: were students starting with a 'trial and error' approach, or only resorting to it after things went wrong?
Tracking Program State: A Key Challenge in Learning to Program
Colleen Lewis (presenting, although you probably could have guessed that)
This paper won the Chairs' Award for the best paper at the conference and it was easy to see why. Colleen presented a beautifully explored case study of an 11-year-old boy, Kevin, working on a problem in the Scratch programming language and trying to work out why he couldn't draw a wall of bricks. By capturing Kevin's actions (in code) and his thoughts (from his spoken comments), we are exposed to the thought processes of a high-achieving young man who cannot fathom why something isn't working.
I cannot do justice to this talk by writing about something that was primarily visual, but Colleen’s hypothesis was that Kevin’s attention to the state (variables and environmental settings over which the program acts) within the problem is the determining factor in the debugging process. Once Kevin’s attention was focused on the correct problem, he solved it very quickly because the problem was easy to solve. Locating the correct problem required him to work through and determine which part of the state was at fault.
Kevin has a pile of ideas in his head but, as put by diSessa and Sherin (1998), learning is about reliably using the right ideas in the correct context. Which of Kevin's ideas are being used correctly at any one time? The discussion that followed covered a lot of the problems that students have with computers, in that many students do not see computers as actually being deterministic. Many students, on encountering a problem, will try exactly the same thing again to see if the error occurs again – this requires a mental model in which we expect a different set of outcomes from the same inputs and process, which is a loose definition of either insanity or nondeterminism. (Possibly both.)
I greatly enjoyed this session but the final exemplar, taking apart a short but incredibly semantically rich sequence and presenting it with a very good eye for detail, made it unsurprising that this paper won the award. Congratulations again, Colleen!
ICER 2012 Day 2 Discussion Papers Session 2
Posted: September 15, 2012 Filed under: Education | Tags: authenticity, education, educational research, hawthorne effect, higher education, icer, icer 2012, icer2012, student perspective, teaching, teaching approaches, tools

This is a brief note on this session, as these papers are presented to the community with the intention of sparking discussion. In this case, one of the most interesting issues that arose was a reference to the first of a pair of papers, where the first paper asserted a finding and the second then retracted it. This is not to say that the papers presented weren't interesting (far from it), but you can read about them in the proceedings, and this particular session raised yet another reason to come to ICER: because this is where a lot of the authors are.
In this case, one of the authors of the retraction very politely identified himself and then pointed out that the paper the presenting authors were referring to had been followed up by a paper that illustrated some of the problems in the original work. (I am trying quite hard to avoid potentially embarrassing anyone, so please excuse how circumspect I am being.)
The reception of the actual discussion paper was, unsurprisingly, framed by the revelation that a supporting paper had been undermined, and the questions revolved around issues with metrics and how the authors had addressed possible (so-called) Hawthorne effect issues.
But this is exactly what these kinds of paper sessions are for. This is a place to present ideas for the community, and now the authors can go back, rework their approach on firmer ground and come back with something stronger. Yes, there is no doubt that they would much rather not have built upon that paper, but imagine how much worse it would have been had this made it (undetected) to the journal stage!
ICER 2012 Day 2 Research Session 3
Posted: September 15, 2012 Filed under: Education | Tags: collaboration, community, education, educational problem, educational research, feedback, higher education, icer, icer 2012, icer2012, in the student's head, shotgun debugging, teaching, teaching approaches, tools, universal principles of design

The session kicked off with "The Abstraction Transition Taxonomy: Developing Desired Learning Outcomes through the Lens of Situated Cognition" (Quintin Cutts (presenting), Sarah Esper, Marlena Fecho, Stephen Foster and Beth Simon) and the initial question: "Do our learning outcomes for programming classes match what we actually do as computational thinkers and programmers?" To answer this question, we looked at Eric Mazur's Peer Instruction and an analysis of PI questions as applied to a CS Principles pilot course, and then applied the Abstraction Transition Taxonomy (ATT) to published exams, with a wrap-up of observations and 'where to from here'.
Physicists noticed, some time ago, that their students could plug numbers into equations (turn the handle, so to speak) but couldn't necessarily demonstrate that they understood things: they couldn't demonstrate that they thought as physicists should. (The Force Concept Inventory was mentioned here and, if you're not familiar with it, it's a very interesting thing to look up.) To try and get students who thought as physicists, Mazur developed Peer Instruction (PI), which has pre-class preparatory work and in-class questions, followed by voting, discussion and re-voting, with an instructor leading class-wide discussion. These activities prime the students to engage with the correct explanations – that is, the way that physicists think about and explain problems.
Looking at Computer Science, many CS people use the delivery of a working program as a measure of the correct understanding and appropriate use of programming techniques.
However, producing a working program is no guarantee of understanding, which is sad but true given the existence of the internet, other students and books. We could try and force a situation where students are isolated from these support factors, but this leads us back to permutation programming, voodoo code and shotgun debugging unless the students actually understand the task and how to solve it using our tools – in other words, unless they think as Computer Scientists.
UCSD ran a CS Principles pilot course that used programming to foster computational thinking, aimed at acculturation into the CS 'way' rather than at trying to create programmers. The full PI implementation asked students to reason about their programs, through exploratory homework and a PI classroom, with some limited-time traditional labs as well. While this showed a very positive response, the fear was that this may have been an effect of the lecturers themselves, so analysis was required!
By analysing the PI questions, a taxonomy was developed that identified abstraction levels and the programming concepts within them. The abstraction levels were "English", "Computer Science Speak" and "Code". The taxonomy was extended with the transitions between these levels: turning an English question into code, for example, is a 1-3 transition if English is abstraction level 1 and Code is level 3; similarly, explaining code in English is a 3-1 transition. Finally, they considered mechanism (how does something work?) and rationale (why did we do it this way?).
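As an illustration only (mine, not the authors'), you can think of the coding scheme as tagging each question with its transition and a mechanism/rationale flag; the example questions and codes below are invented.

```python
# Illustrative only: tagging questions with Abstraction Transition Taxonomy codes.
# Levels assumed here: 1 = English, 2 = CS Speak, 3 = Code.
questions = [
    {"text": "Write code that sums the even numbers in a list",
     "transition": (1, 3), "kind": "mechanism"},   # English -> Code
    {"text": "Explain in plain English what this loop does",
     "transition": (3, 1), "kind": "mechanism"},   # Code -> English
    {"text": "Why use a dictionary rather than a list here?",
     "transition": (3, 2), "kind": "rationale"},   # a 'Why?' question
]

# How many of the questions ask 'Why?' (rationale) rather than 'How?' (mechanism)?
rationale = sum(q["kind"] == "rationale" for q in questions)
print(f"{rationale}/{len(questions)} questions are rationale questions")
```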
Analysing the assignment and assessment questions to determine what was being asked, in terms of abstraction level and transitions, and whether it was mechanism or rationale, revealed that 21% of the in-class multiple choice questions were 'Why?' questions but there were actually very few 'Why?' questions in the exam. Unsurprisingly, almost every question asked in the PI framework is a 'Why?' question, so there should be room for improvement in the corresponding examinations. PI emphasises the culture of the discipline through the 'Why?' framing because it requires acculturation and contextualisation to get yourself into the mental space where a rationale becomes logical.
The next paper, "Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications" (Lauren Margulieux, Mark Guzdial and Richard Catrambone), dealt with mental models and how the cognitive representation of an action affects both the problem state and how well we make predictions. Students have so much to think about – how do they choose?
The problem with just waiting for a student to figure it out is high cognitive load, which I've referred to before as helmet fire. If students become overwhelmed they learn nothing, so we can explicitly tell students and/or provide worked examples. If we clearly label the subgoals in a worked example, students remember the subgoals and the transitions from one to another. The example given here was a pair of Android App Inventor worked examples: one had no labels, while the other had subgoal labels added as overlay callouts to the video as the only alteration. The subgoal points were identified by task analysis – so this was a very precise attempt to get students to identify the important steps required to understand and complete the task.
(As an aside, I found this discussion very useful. It's a bit like telling a student that they need comments, so every line ends up with things like "x=3; //x is set to 3", whereas this structured and deliberate approach to subgoal definition shows students the key steps.)
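To make that contrast concrete, here is a small sketch of my own (not from the paper) showing line-by-line commenting versus subgoal labelling on the same fragment:

```python
marks = [72, 85, 63]

# Line-by-line commenting: restates each statement without adding structure.
total = 0             # set total to 0
count = 0             # set count to 0
for mark in marks:    # loop over marks
    total += mark     # add mark to total
    count += 1        # add 1 to count
mean = total / count  # divide total by count

# Subgoal labelling: names the key steps the worked example wants noticed.
# Subgoal 1: initialise the accumulators.
total = 0
count = 0
# Subgoal 2: accumulate the sum and the number of marks.
for mark in marks:
    total += mark
    count += 1
# Subgoal 3: compute the mean from the accumulated values.
mean = total / count
```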
In the first experiment that was run, the students with the subgoals (and recall that this was the ONLY difference in the material) had attempted more, achieved more and done it in less time. A week later, they still got things right more often. In the second experiment, a talk-aloud experiment, the students with the subgoals discussed the subgoals more, tried random solution strategies less and wasted less effort than the other group. This is an interesting point. App Inventor allows you to manipulate blocks of code and the subgoal group were less likely to drag out a useless block to solve the problem. The question, of course, is why. Was it the video? Was it the written aspects? Was it both?
Students appear to be remembering and using the subgoals and, as was presented, if performance is improving then the exact detail of why it is happening is something we may wish to pursue later; in the short term, we can still use the approach. However, we do have to be careful with how many labels we use, as overloading visual cues can lead to confusion, thwarting any benefit.
The final paper in the session was "Using collaboration to overcome disparities in Java experience" (Colleen Lewis (presenting), Nathaniel Titterton and Michael Clancy). This presented the transformation of a standard course of 3 lecture hours, 2 lab hours and 1 discussion hour into a 1 x 1-hour lecture with 2 x 3-hour labs, with the labs now holding the core of the pedagogy. Students are given feedback through targeted tutoring, using on-line multiple-choice questions through which the students give feedback and assist the TAs. Pair programming gives you someone to talk to before you talk to the TA, but the TA can monitor the MCQ responses and see if everyone is having trouble with a particular question.
This addressed a problem in a dual-speed entry course, where some students had AP CS and some didn't; the second-year course was therefore either a review for those students who had Java (from AP CS) or brand new material. Collaboration and targeted support were aimed at reducing the differences between the cohorts and eliminating disadvantage.
Now, the paper has a lot of detail on the different cohorts, by intake, by gender, by retention pattern, but the upshot is that the introduction of the new program reduced the differences between those students who did and did not have previous Java experience. In other words, whether you started at UCB in CS 1 (with no AP CS) or CS 1.5 (with AP CS), the gap between your cohorts shrank – which is an excellent result. Once this high level of collaboration was introduced, the only factor that retained any significant difference was the first exam, but this effect disappeared throughout the course as students received more exposure to collaboration.
I strongly recommend reading all three of these papers!
ICER 2012 Research Paper Session 2
Posted: September 13, 2012 Filed under: Education | Tags: advocacy, collaboration, community, community sharing resources, education, educational research, higher education, icer, icer 2012, icer2012, teaching, teaching approaches, tools

Ok, true confession time. My (and Katrina's) paper was in this session and I'll write this up separately. So this session consisted of "Adapting Disciplinary Commons Model: Lessons and Results from Georgia" (Brianna Morrison, Lijun Ni and Mark Guzdial) and… another paper. 🙂 The goals of the Disciplinary Commons model, as adapted here, included:
- To document and share knowledge about student learning in CS classrooms
- To establish practices for the scholarship of teaching by making it public, peer-reviewed and amenable for public use. (portfolio model)
- Creating community
- Sharing resources and knowledge of how things are taught in other contexts.
- Supporting student recruitment within the high school environment.
ICER 2012 Day 1 Keynote: How Are We Thinking?
Posted: September 10, 2012 Filed under: Education | Tags: community, curriculum, education, educational problem, educational research, higher education, icer, icer 2012, in the student's head, reflection, teaching, teaching approaches, thinking, threshold concepts, tools, workload

We started off today with a keynote address from Ed Meyer, from the University of Queensland, on the Threshold Concepts Framework (also pedagogy and student learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I'll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):
- Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
- Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
- This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
- This new concept has boundaries – an area over which it applies. You need to be able to question within that area to work out where the concept applies. (Ultimately, this may identify the borders between schools of thought in a field.)
- Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts with people – they will wish to apply their intuitive understanding and fighting this tendency may take some considerable effort.
Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. For this final one, a student experiences this in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?
How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?
Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.
A great deal to think about from this talk, especially in dealing with key aspects of CS Ed as the threshold concept that is causing many of our non-educational research oriented colleagues so much trouble, as well as our students.
ICER 2012: So Good I Don’t Have Time To Blog!
Posted: September 10, 2012 Filed under: Education | Tags: education, educational research, higher education, icer, icer 2012

I'm going to try and post when I can but the conference is so good that there's nothing I can skip. Apologies, I shall try and dump my notes from today when I have a chance!
ICER 2012: Day 0 (Workshops)
Posted: September 10, 2012 Filed under: Education | Tags: collaboration, community, design, education, educational problem, educational research, feedback, Generation Why, higher education, icer, icer 2012, in the student's head, learning, principles of design, student perspective, teaching, teaching approaches, workload

Well, it's Sunday so it must be New Zealand (or at least it was Sunday yesterday). I attended that rarest of workshops, one where every session was interesting and made me think – a very good sign for the conference to come.
We started with an on-line workshop on Bloom’s taxonomy, classifying exam questions, with Raymond Lister from UTS. One of the best things about this for me was the discussion about the questions where we disagreed: is this application or synthesis? It really made me think about how I write my examinations and how they could be read.
We then segued into a fascinating discussion of neo-Piagetian theory, in which the developmental stages that we usually associate with children reappear in adults as they learn new areas of knowledge. In (very rough) detail, we look at whether we have enough working memory to carry out a task and, if not, weird things happen.
Students can indulge in some weird behaviours when they don't understand what's going on: for example, permutation programming, where they just type semi-randomly until their program compiles or works. Other examples include shotgun debugging and voodoo programming. What these amount to is the student not having a good, consistent model of what works and, as a result, basically dabbling in a semi-magical approach.
My notes from the session contain this following excerpt:
"Bizarro" novice programmer behaviours are actually normal stages of intellectual development. Accept this and then work with this to find ways of moving students from pre-op, to concrete op, to formal operational. Don't forget the evaluation. Must scaffold this process!
What this translates to is that the strange things we see are just indications that students haven't yet moved to what we would normally associate with an 'adult' (formal operational) understanding of the area. This shoots several holes in the old "You're born a programmer" fallacy. Those students who are more able early may just have moved through the stages more quickly.
There was also an amount of derisive description of folk pedagogy: those theories that arise during pontification in the tea room, with no basis in educational theory and not formed from any truly empirical study. Yet these folk pedagogies are very hard to shake and are one of the most frustrating things to deal with if you are in educational research. One "I don't think so" can apparently ignore the 70 years since Dewey called classrooms prisons.
The worst thought is that, if we're not trying to help students to transition, then maybe the transition to concrete operational thinking is happening despite us rather than because of us – a sobering thought.
I thought that Ray Lister finished the session with a really good thought regarding why students sometimes struggle:
The problem is not a student’s swimming skill, it’s the strength of the torrent.
As I’ve said before, making hard things easier to understand is part of the job of the educator. Anyone will fail, regardless of their ability, if we make it hard enough for them.
Conference Blogging! (Redux)
Posted: September 8, 2012 Filed under: Education | Tags: blogging, education, educational problem, educational research, feedback, Generation Why, higher education, icer, icer 2012, in the student's head, learning, measurement, student perspective, teaching, teaching approaches, time banking, workload

I'm about to head off to another conference and I've taken a new approach to my blogging. Rather than my traditional "Pre-load the queue with posts" activity, which tends to feel a little stilted even when I blog other things around it, I'll be blogging in direct response to the conference and not using my standard posting time.
I’m off to ICER, which is only my second educational research conference, and I’m very excited. It’s a small but highly regarded conference and I’m getting ready for a lot of very smart people to turn their considerably weighty gaze upon the work that I’m presenting. My paper concerns the early detection of at-risk students, based on our analysis of over 200,000 student submissions. In a nutshell, our investigations indicate that paying attention to a student’s initial behaviour gives you some idea of future performance, as you’d expect, but it is the negative (late) behaviour that is the most telling. While there are no astounding revelations in this work, if you’ve read across the area, putting it all together with a large data corpus allows us to approach some myths and gently deflate them.
Our metric is timeliness, or how reliably a student submitted their work on time. Given that late penalties apply (without exception, usually) across the assignments in our school, late submission amounts to an expensive and self-defeating behaviour. We tracked over 1,900 students across all years of the undergraduate program and looked at all of their electronic submissions (all programming code is submitted this way, as are most other assignments.) A lot of the results were not that unexpected – students display hyperbolic temporal discounting, for example – but some things were slightly less expected.
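For readers who haven't met the term, hyperbolic temporal discounting is usually modelled with the subjective value of a reward of size A, delayed by time D, falling as V = A / (1 + kD) for some individual discount rate k: value drops sharply for near-term delays and more slowly after that. A quick sketch of my own (the discount rate here is arbitrary):

```python
# Hyperbolic discounting (standard form, not a result from our paper):
# the subjective value of a reward 'amount' delayed by 'delay_days' days.
def discounted_value(amount, delay_days, k=0.1):
    return amount / (1 + k * delay_days)

# The perceived value of 10 marks drops quickly for near deadlines:
print(discounted_value(10, 1))   # ~9.09
print(discounted_value(10, 14))  # ~4.17
```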
For example, while 39% of my students hand in everything on time, 30% of people who hand in their first assignment late then go on to have a blemish-free future record. However, students who hand up that first assignment late are approximately twice as likely to have problems – which moves this group into a weakly classified at-risk category. Now, I note that this is before any marking has taken place, which means that, if you’re tracking submissions, one very quick and easy way to detect people who might be having problems is to look at the first assignment submission time. This inspection takes about a second and can easily be automated, so it’s a very low burden scheme for picking up people with problems. A personalised response, with constructive feedback or a gentle question, in the zone where the student should have submitted (but didn’t), can be very effective here. You’ll note that I’m working with late submitters not non-submitters. Late submitters are trying to stay engaged but aren’t judging their time or allocating resources well. Non-submitters have decided that effort is no longer worth allocating to this. (One of the things I’m investigating is whether a reminder in the ‘late submission’ area can turn non-submitters into submitters, but this is a long way from any outcomes.)
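A minimal sketch of the kind of automated check I'm describing, assuming you have each student's first assignment submission time and the deadline; the record format and field names are hypothetical:

```python
from datetime import datetime

# Hypothetical records: (student_id, first_submission_time, deadline)
submissions = [
    ("s1001", datetime(2012, 8, 20, 16, 30), datetime(2012, 8, 20, 17, 0)),
    ("s1002", datetime(2012, 8, 21, 9, 15),  datetime(2012, 8, 20, 17, 0)),
]

def flag_at_risk(first_submissions):
    """Flag students whose first assignment came in after the deadline.

    This is only a weak classifier: many late first submitters go on to a
    clean record, but as a group they are roughly twice as likely to have
    later problems, so they are worth a personalised follow-up.
    """
    return [sid for sid, submitted, due in first_submissions if submitted > due]

print(flag_at_risk(submissions))  # ['s1002']
```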
I should note that the type of assignment work is important here. Computer programs, at least in the assignments that we set, are not just copied in from text. They are not remembering it or demonstrating understanding, they are using the information in new ways to construct solutions to problems. In Bloom’s revised taxonomic terms, this is the “Applying” phase and it requires that the student be sufficiently familiar with the work to be able to understand how to apply it.
I’m not measuring my students’ timeliness in terms of their ability to show up to a lecture and sleep or to hand up an essay of three paragraphs that barely meets my requirements because it’s been Frankenwritten from a variety of sources. The programming task requires them to look at a problem, design a solution, implement it and then demonstrate that it works. Their code won’t even compile (turn into a form that a machine can execute) unless they understand enough about the programming language and the problem, so this is a very useful indication of how well the student is keeping up with the demands of the course. By focusing on an “Applying” task, we require the student to undertake a task that is going to take time and the way in which they assess this resource and decide on its management tells us a lot about their metacognitive skills, how they are situated in the course and, ultimately, how at-risk they actually are.
Looking at assignment submission patterns is a crude measure, unashamedly, but it’s a cheap measure, as well, with a reasonable degree of accuracy. I can determine, with 100% accuracy, if a student is at-risk by waiting until the end of the course to see if they fail. I have accuracy but no utility, or agency, in this model. I can assume everyone is at risk at the start and then have the inevitable problem of people not identifying themselves as being in this area until it’s too late. By identifying a behaviour that can lead to problems, I can use this as part of my feedback to illustrate a concrete issue that the student needs to address. I now have the statistical evidence to back up why I should invest effort into this approach.
Yes, you get a lot of excuses as to why something happened, but I have derived a great deal of value from asking students questions like “Why did you submit this late?” and then, when they give me their excuse, asking them “How are you going to avoid it next time?” I am no longer surprised at the slightly puzzled look on the student’s face as they realise that this is a valid and necessary question – I’m not interested in punishing them, I want them to not make the same mistake again. How can we do that?
I’ll leave the rest of this discussion for after my talk on Monday.