CSEDU Day 1, Opening Panel, “Shaping the Future Learning Environment – Smart, Digital and Open?”

Only 32 papers out of 250 were accepted for the conference as full papers (12.8% acceptance, highly respectable) and were identified as being of “outstanding quality” – good work, PhD Student T and the CSER team! After the opening address, we went to the first panel, chaired by James Uhomoibhi and Markus Helfert. The four keynote speakers are also on the panel but I’ll add more on them during their sessions. However, in summary, we had an academic, a maths instruction evangelist, a psychologist and a representative of engineering society bureaucracy (not as bad as it sounds). Everyone was saying how happy they were to be in Barcelona! (And who can blame them? This is one of my favourite cities and should be on everyone’s bucket list.)

Larissa Fradkin: “Mathematics Teaching: Is the future syncretic?”.

From Wikipedia: Syncretism /ˈsɪŋkrətɪzəm/ is the combining of different, often seemingly contradictory beliefs, while melding practices of various schools of thought.

Everyone was asked to give a position statement, which is a different take on panels for me, very interesting. Larissa had slides and she identified the old problem of the difficulty of teaching mathematics, alluding to the US mathematics wars. Are MOOCs the solution? We were promised a great deal yet much of it has not yet appeared. Well, Coursera materials appear to be useful, or useful in principle, but they don’t work in some classrooms so they have to be localised. What is the role of the faculty member in this space? It’s a difficult question and the answer depends on time, teacher interest and student sophistication. We inherit the students that our preceding teachers have produced, so textbooks and curricula have a big impact on the students that get turned out, if educational resources are presented in an unquestioned way. Trudging through exercises and content is one way to get through but what does it do? Does it teach? Does it prepare students for tests? Texts and resources are often, despite what publishers and authors claim, unaligned with the curriculum.

Schmidt’s latest study (ref?) shows that quality teachers and quality materials are the top two considerations for enhancing student learning. We can produce well-crafted eBooks and MOOCs with editable and updatable content to give a flexible product – but the “rush to market” product doesn’t meet this requirement. The Cognitive Tutor was mentioned as AI-enriched educational software, built on John Anderson’s model, but such tutors are incredibly difficult to develop. There are tools out there that combine cognitive psychology and AI, such as CIRCLE and AutoTutor. However, most tool development is driven by psychologists and cognitive scientists rather than discipline experts and this can be a problem.

The speaker finished with a discussion of a semi-traditionalist, semi-constructivist approach to rigorous instruction that requires much less memorisation, focusing on conceptual understanding and developing more mastery than a straight constructivist approach. (Not quite sure of the details here but this will be extended in a later talk.)

Erik de Graaff gave us the first slide from his talk tomorrow on Team Learning in Engineering Education, starting with the statement “people are not born to work in a team”, which is an interesting statement given the entirety of human society. Students do like working together but the first meeting of the group can be challenging and they don’t know how to start. (Is this a cultural artefact based on the isolation and protection inherent in privilege? Time to get our Hofstede’s cultural dimensions hats on because this sounds a lot like a communal/separate categorical separation.) Erik noted that “team” is a word for animals harnessed together to apply more force in one direction – which is not what we want in a human team. (Although this is perhaps another cultural insight.) Erik’s second statement is that communication is highly inaccurate: cognitive scientists estimate that the amount of information we process from the potential information around us is less than 1%. Most of the information that reaches our senses is ignored and yet we still make decisions. (Given I’m listening to the speaker while summarising another discussion and some slides, I’m wondering if this is the best way to express this, since it doesn’t distinguish task focus from the activities that are relevant to the task at hand.) Erik believes that on-line learning will make this worse. (I really need to go to his talk to see the evidence and caveats in all of this.) Erik then projected his own behaviour in on-line meetings (low attention) onto the general case – if Erik is in a meeting with you, get him to leave his camera on or he will be off making coffee. 🙂 Erik then moved on to virtual identities that allow people to do things in the alternative reality that they wouldn’t do in real life. Urm. I need to see the talk but this all seems a little dated to me but, hey, what do I know until I’ve seen the talk?

The next speaker, Steve, asked us if we shut the door while teaching (I don’t usually, unless the noise gets too bad, because an open door improves air flow rather than for pedagogical or privacy reasons) but segued into a discussion on openness. Steve then referred to the strange issue of us providing free content to journal publishers that we then pay for. (I wrote a little something on this years ago for SIGCSE.) Steve then referred to the different turnaround times for closed and open access journals – the open access journal was “in print” in four weeks, versus 18 months for the closed access journal, and the citation counts were more than an order of magnitude different. You can also measure the readership in an open access format. Open access materials are also crucial to scholars – Steve licenses all of his materials under Creative Commons so you can use his work and all you have to do is acknowledge the source. When you open your content, you become an educator for the world. If people need education, then who are we to not provide the materials that they need? When you become an open scholar, you must prepare for criticism because many people will read things. You must also be ready for dialogue and discourse. It’s not always easy but it is very valuable. (I agree with this wholeheartedly.)

The next speaker, José Carlos Quadrado, President of IFEES, tries to infect people with good ideas about Engineering Education as part of his role. Is the future learning environment smart, digital and open? Talking about smart, an example is the new smart watch, a watch that is also a phone or linked to a device – what do we mean? When you have a smart phone, you have approximately 1.4 tons of technology from the 1980s in your pocket, which makes us wonder if smart means leaner. When we go digital, is this just replacing paper with silicon? How do we handle factual authority when we have so much openly available and the traditional peer review and publishing oversight mechanisms are eroding and changing rapidly? (Another slightly creaky perspective, although with a great deal of self-awareness.)

Teaching and learning tools have changed a great deal over the same time but have our pedagogical approaches also changed or been truly enhanced by this? There were some pretty broad generational (X vs Z) comparisons that I question the validity of. The notion that students of today couldn’t sit through this session is not something I agree with. Again, the message from the panel, with a couple of exceptions, is pretty dated and I have a bit of an urge to tell someone to get off my lawn. Oh, and we finished with an Einstein quote after a name drop to a famous scientist. Look, I accept a lot of the things that are being said on the stage but we have to stop acting as if natural selection works in 18 months and that the increasing sophistication of later generations is anything more than an ability to make better choices because more and better choices are available. Oh, another Einstein quote.

I realise that I have started editorialising here, which I try not to do, but I am being bombarded with position statements that are leaden in their adherence to received wisdom on young people and those smart young things. These issues just cloud the real focus: the systems we could use, the approaches we could take, and the fact that for every highly-advanced Gen Z Westerner in a low power-distance, highly individualistic environment, there are 100-1000 pre-X non-Westerners in a high power-distance, communal environment.

We’re in question time. Erik asked about the ship analogy – when ships were made of wood, men were made of iron, and by moving to iron ships, we weakened people. Erik then went on to the question “Do smart phones make stupid people?” which basically nails down the coffin lid on all of the problems I’ve had with this opening. Steve, in response to another question, raised the connectivist argument for networking and distributed knowledge storage, which smart phones of course facilitate. Sadly, this foray into common sense was derailed by some sophistry on young people trying to be smart before they are clever.

There was a good point made that Universities will continue to be involved in quality education but they are no longer the bastions of information – that particular ship has sailed. Oh, there we go, we’ve dipped down again. Apparently we have changed the way we buy things because we are now all concerned with perception. People are buying smart phones, not because they are smart, but because they want instant gratification. Generation Z are apparently going to be the generation that will reject everything and walk away. Eh, maybe.

Perhaps I shall come back to this later.


SIGCSE Day 3, “What We Say, What They Do”, Saturday, 9-10:15am, (#SIGCSE2014)

The first paper was “Metaphors we teach by”, presented by Ben Shapiro from Tufts. What are the types of metaphors that CS1 instructors use and what are the wrinkles in these metaphors? What do we mean by metaphors? Ben’s talking about conceptual metaphors: linguistic devices that allow us to understand one idea in terms of another idea that we already know. Example: love is a journey – twists and turns, no guaranteed good ending. The structure of a metaphor is that you have a thing we’re trying to explain (the target) in terms of something we already know (the source). Conceptual metaphors are explanatory devices to assist us in understanding new things.

Metaphors are widely used in teaching in CS – pointers, stacks and loops are all metaphorical aspects of computer science – but that’s not the focus of this study. How do people teach with metaphor? The authors couldn’t find any studies on general metaphor use in CS and its implications for student learning. An example from a birds-of-a-feather session held at this conference: a variable is like a box. A box can hold many different things but it holds things. (This has been the subject of a specific study.) Ben also introduced the “too much milk” metaphor, which is laid out as follows. Jane comes home from work, goes to get milk from the fridge but her roommate has already drunk it (bad roommate!). Jane goes out to get more milk. While she’s out, her roommate comes back with milk, then Jane comes back with milk. Now they have too much milk! This could be used to explain race conditions in CS. Another example is the use of bus lockers mapping to virtual memory.
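To see why the milk story maps onto a race condition, here’s a minimal sketch in Java (my example, not one from the paper): two “roommate” threads both check the shared fridge before either has restocked it, so both go shopping.

    // Both roommates check the fridge, see no milk, and both buy some.
    public class TooMuchMilk {
        private static int bottles = 0; // shared state, deliberately unsynchronised

        public static void main(String[] args) throws InterruptedException {
            Runnable roommate = () -> {
                if (bottles == 0) {           // check: no milk in the fridge
                    try {
                        Thread.sleep(10);     // the trip to the shop
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    bottles++;                // buy milk
                }
            };
            Thread jane = new Thread(roommate);
            Thread flatmate = new Thread(roommate);
            jane.start();
            flatmate.start();
            jane.join();
            flatmate.join();
            System.out.println("Bottles: " + bottles); // frequently 2 – too much milk!
        }
    }

The check and the purchase aren’t atomic, which is exactly the gap the metaphor exposes.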
Ben returned to boxes again. One of the problems is that boxes can hold many things but a variable can only hold one thing – which appears to be a confusing point for learners who knew how boxes work. Is this a common problem? Metaphors have some benefits but come with this kind of baggage. Metaphors are partial mappings – they don’t match every aspect of the target to the source. (If it were a complete mapping, they’d be the same thing.)
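The breakdown is easy to show in two lines of Java (again, my illustration rather than Ben’s):

    int x = 3; // the “box” now holds 3
    x = 5;     // the 5 doesn’t go in on top of the 3 – the 3 is simply gone

A real box would hold both; the variable holds exactly one value, and that’s where the mapping stops.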
The research questions that the group considered were:
  • What metaphors do CS1 instructors use for teaching?
  • What are they trying to explain?
  • What are the sources that they use?
Learners don’t know where the mappings start and stop – where do the metaphors break down for students? What mistakes do they make because of these misunderstandings? Why does this matter? We all have knowledge on how to explain but we don’t have good published collections of the kind of metaphors that we use to teach CS, which would be handy for new teachers. We could study these and work out which are more effective. What are the most enduring and universal metaphors?
The study was interview-based, interviewing university-level CS1 instructors; it ended up with 10 participants, with an average of 13 years of teaching experience. The interview questions given to these instructors were (paraphrased):
  • Levels taught and number of years
  • Tell me about a metaphor
  • Target to source mapping
  • Common questions students have
  • Where the metaphor breaks down
  • How to handle the breakdown in teaching.
Ben then presented the results. (We had a brief discussion of similes versus metaphors but I’ll leave that to you.) An instructor discussed using the simile of a portkey from Harry Potter to explain return statements in functions, because students had trouble with return exiting immediately. The group of 10 people provided 18 different CS Concepts (Targets) and 19 Metaphorical Explanations (Sources).
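The portkey simile is all about the suddenness of the exit. A minimal illustration in Java (my example, not the instructor’s):

    static String classify(int mark) {
        if (mark >= 50) {
            return "pass";             // the portkey fires: we leave immediately
        }
        System.out.println("checked"); // never runs for a passing mark
        return "fail";
    }

Students who expect execution to continue past the first return are exactly who the simile is aimed at.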
What’s the target for “Card Catalogs”? Memory addressing and pointers. The results were interesting – there’s a wide range of ways to explain things! (The paper contains a table of a number of targets and sources.)
Out of date cultural references were identified as a problem and you have to be aware of the students’ cultural context. (Card catalogs and phone booths are nowhere near as widely used as they used to be.) Where do students make inferences beyond the metaphor? None of the 10 participants could give a single example of this happening! (This is surprising – Ben called it weird.) Two hypotheses: our metaphors are special and don’t get overextended (very unlikely) OR CS1 instructors poorly understand student thinking (more likely).
The following experimental studies may shed some light on this:
  • Which metaphors work better?
  • Cognitive clinical interviews, exploring how students think with metaphors and where incorrect inferences are drawn.
There was also a brief explanation of PCK (teachers’ pedagogical content knowledge) but I don’t have enough knowledge to fully flesh this out. Ben, if you’re reading this, feel free to add a beautifully explanatory comment. 🙂
The next talk was “‘Explain in Plain English’ Questions Revisited: Data Structures Problems”, presented by Sue Fitzgerald and Laurie. This session opened with a poll to find out what the participants wanted and we all wanted to find out how to get students to use plain English. An Explain in Plain English (EiPE) question asks you to describe what a chunk of code does, but not in a line by line discussion. A student’s ability to explain what a chunk of code does correlates with the student’s ability to write and read code. The study wanted to investigate whether this is just a novice phenomenon or whether it persists as experience and expertise grow. This study looked at 120 undergraduates in a CS2 course in data structures and algorithms using C++, with much more difficult questions than in earlier studies: linked lists, recursive calls and so on.
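For readers who haven’t met EiPE questions, here is a made-up example of the genre (in Java rather than the study’s C++, and much simpler than their data structures questions):

    // "In plain English, what does this method do?"
    static boolean mystery(int[] a) {
        for (int i = 1; i < a.length; i++) {
            if (a[i] < a[i - 1]) {
                return false;
            }
        }
        return true;
    }

A good answer is “it checks whether the array is sorted in non-decreasing order” – a summary of purpose, not a line-by-line walkthrough. That distinction is exactly what the SOLO categories below capture.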
The students were given two questions in an exam, with some preamble describing the underlying class structure, a short example and a diagram. The students then had to look at a piece of code and determine what would happen in order to answer the question with a plain English response. (There’s always a problem when you throw to an interactive response system and the question isn’t repeated; perhaps we need two screens.)
The SOLO taxonomy was used to analyse the answers (more Neo-Piagetian goodness!). Four of the SOLO categories were used: relational (summarises the code), multistructural (line by line explanation of the code), unistructural (only describes one portion rather than the whole idea), and prestructural (misses it completely, gibberish). I was interested to see the examples presented, with pointers and mutual function calling, because it quickly became apparent that the room I was in (which had a lot of CS people in it) was having to think relatively hard about the answer to the second example. One of the things about working memory is that it’s not very deep and none of us were quite ready to work in a session 🙂 but a lot of good discussion ensued. The students would have had ready access to the preamble code but I do wonder how much obfuscation is really required here. The speaker made a parenthetical comment that experts usually doodle – but where were our pens and paper! (As someone else said, reinforcing the point that we didn’t come prepared to work, nobody told us we had to bring paper. 🙂 ) We then got to classify a student response that was quite “student-y”. (A question came up as to whether an answer can be relational if it’s wrong – the opinion appears to be that a concise, complete and incorrect answer could be considered relational. A point for later discussion.) The answer we saw was multistructural because it was a line-by-line answer – it wasn’t clear, concise and abstract. We then saw another response that was much more terse but far less accurate. The group tossed up between unistructural and prestructural. (The group couldn’t see the original code or the question, so this uncertainty makes sense. Again, a problem with trying to have an engaging on-line response system and a presentation on the same screen. The presenters did a great job of trying to make it work but it’s not ideal.)
What about correlations? For the first question asked, students who gave relational and multistructural answers generally passed, with an average grade of 58%. Those who answered at the unistructural or prestructural level generally failed, with an average grade of 38%. On the second test question, the relational and multistructural group generally passed with an average grade of 61.2%, while the unistructural and prestructural group generally failed with an average grade of 42%.
So these correlations hold for non-novice programmers. A mix of explaining, writing and reading code is an effective way to develop good programming skills and EiPE questions give students good practice in the valuable skill of explaining code. Instructors can overestimate how well students understand presented code – asking them to explain it back is very useful for student self-assessment. The authors’ speculation is that explaining code to peers is probably part of the success of peer instruction and pair programming.
The final talk was “A Formative Study of Influences on Student Testing Behaviours”, presented by Kevin Buffardi from Virginia Tech. In their introductory CS1 and CS2 courses they use Test-Driven Development (TDD) – code a little, test a little – for incremental development. It’s popular in industry, so students come out with relevant experience, and some previous studies have found improvement in student work when students closely adhered to the TDD philosophy. BUT a lot of students didn’t follow it at all! So the authors were looking for ways to encourage students to follow this, especially when they were on their own and programming by themselves. Because it’s a process, you can’t tell what happened just by looking at the final program, but they use WebCAT and so can track the developmental stages of the program as students submit their work for partial grading. These snapshots provide clear views of what the students are doing over time. (I really have to look at what we could do with WebCAT. Our existing automarker is getting a bit creaky.) Students also received hints back when they submitted their work, both general and instructor-level.
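For anyone who hasn’t seen the rhythm, here is a minimal TDD sketch (my example using JUnit 4, not the course’s actual materials): the test is written first and fails, then just enough code is written to make it pass.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CounterTest {
        @Test
        public void incrementTwiceGivesTwo() {
            Counter c = new Counter(); // written before Counter existed: red first
            c.increment();
            c.increment();
            assertEquals(2, c.value());
        }
    }

    // The "just enough" implementation that turns the test green.
    class Counter {
        private int value = 0;
        public void increment() { value++; }
        public int value() { return value; }
    }

Then you repeat in small cycles: new failing test, minimal code, refactor.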
The first time students achieved something with any type of testing, they would get “Good Start” feedback and be entitled to a free hint. If you kept up with your testing, you could ‘buy’ more hints. If your test coverage was good, you got more hints. If your coverage was poor, you got general feedback. (Prior to this, WebCAT only gave 3 hints. Now there are no free hints but you can buy an unlimited number.) This is an adaptive feedback mechanism, to encourage testing with hints as incentives. The study compared three reinforcement treatments (sketched in code after the list):
  • Constant – every time a goal is achieved, you get a hint (consistently rewards target behaviour)
  • Delayed – hints when earned, at most one hint per hour (less incentive for hammering the system)
  • Random – 50% chance of a hint when a goal is met (should reduce dependency on extrinsic rewards)
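A hypothetical sketch of those three schedules as code (the names and structure are mine; this is not WebCAT’s implementation):

    import java.util.Random;

    enum Schedule { CONSTANT, DELAYED, RANDOM }

    class HintScheduler {
        private final Schedule schedule;
        private final Random rng = new Random();
        private long lastHintMillis = -3_600_000; // allow a hint on the first goal

        HintScheduler(Schedule schedule) { this.schedule = schedule; }

        // Called whenever a submission meets its testing goal.
        boolean hintEarned(long nowMillis) {
            switch (schedule) {
                case CONSTANT:
                    return true;                                   // every goal earns a hint
                case DELAYED:
                    if (nowMillis - lastHintMillis >= 3_600_000) { // at most one per hour
                        lastHintMillis = nowMillis;
                        return true;
                    }
                    return false;
                case RANDOM:
                    return rng.nextBoolean();                      // 50% chance
                default:
                    return false;
            }
        }
    }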
Should you show students the goal or not? This was an additional factor – the goals were either visual (a concrete goal) or obscured (suggesting improvement without a specified target). These were paired treatments.
What was the impact? There were no differences in the number of lines written, but the visual goal led to students getting better test coverage than the obscured goal. There didn’t appear to be a long term effect but there is an upcoming ITiCSE talk that will discuss this further. There were some changes from one submission to another but this wasn’t covered in detail.
The authors held formative group interviews where the students explained their development process and interaction with WebCAT. They said that they valued several types of evaluation, they paid attention to RED progress bars (visualisation and dashboarding – I’d argue that this is more about awareness than motivation), and they noticed when they earned a hint but didn’t get it. The students drew their individual development process as a diagram and, while everyone had a unique approach, two general patterns emerged. A test-last approach showed up: write a solution, submit the solution to WebCAT, take a break, do some testing, then submit to WebCAT again. Periodic testing was the other pattern: write a solution, submit to WebCAT, write tests, submit to WebCAT, then revise solution and tests, and iterate.
Going forward, the automated evaluation became part of the students’ development strategy. There were conflicting interests: the correctness reports from WebCAT were actually reducing the need for students to write their own tests, because they were already getting an indication of how well the code was working. This is an important point for me because, from the examples I saw, I really couldn’t see what I would call test-driven development, especially in the test-last pattern, so the framework is not encouraging the right behaviour. Kevin handled my question on this well, because it’s a complicated issue, and I’m really looking forward to seeing the ITiCSE paper follow-up! Behavioural change is difficult and, as Kevin rightly noted, it’s optimistic to think that we can achieve it in the short term.
Everyone wants to get students doing the right thing but it’s a very complicated issue. Much food for thought and a great session!

SIGCSE Day 2, “Software Engineering: Courses”, Thursday, 1:45-3:00pm, (#SIGCSE2014)

Regrettably, despite best efforts, I was a bit late getting back from lunch and I missed the opening session, so my apologies to Andres Neyem, Jose Benedetto and Andres Chacon, the authors of the first paper. From the discussion I heard, their course sounds interesting so I have to read their paper!

The next paper was “Selecting Open Source Software Projects to Teach Software Engineering”, presented by Robert McCartney from the University of Connecticut. The overview: why would we do this, the characteristics of the students, the projects and the course, finding good projects, what they found, how well it worked and what the conclusions were.

In terms of motivation, most of their SE course is in project work. The current project approach emphasises generative aspects. However, most effort in SE goes into maintenance and evolution. (Industry SEs regularly tweak and tune, rather than building from scratch.) The authors wanted to change focus to software maintenance and evolution, having the students work on an existing system: understanding it, proposing enhancements, then implementing, testing and documenting their changes. But if you’re going to do this, where do you get the code from?

There are a lot of open source projects, available on-line, in a variety of domains and languages and at different stages of development. There should* be a project that fits every group. (*“should” not necessarily valid in this Universe.) The students are not actually being embedded in the open source community: the team forks the code and doesn’t plan to reintegrate it. The students themselves are in 2nd and 3rd year, with courses in object orientation and data structures in Java, and some experience with UML diagrams and Eclipse.

Each team of students gets to pick a project from a set, try to understand the code, propose enhancements, describe and document all of their plans, build their enhancements and present the results back. This happens over about 14 weeks. The language is Java and the code size has to be challenging but not impossible (so about 10K lines). The build time had to fit into a day or two of reasonable effort (which seems a little low to me – NF). Ideally, it should be a team-based project, where multiple developers could work in parallel. An initial look at the open source repositories using these criteria revealed a lot of issues: there are not many Java programs around 10K lines, but SourceForge showed promise. Interestingly, there were very few multi-developer projects around 10K lines. The search located about 1000 candidates, of which 200 actually met the initial size criterion. Having selected some, they added more criteria: projects had to be cool, recent, well documented, modular and buildable (no missing jar files, which turned out to be a big problem). Final number of projects: 19, with sizes ranging from 5.2K to 11K lines.
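Those criteria are easy to imagine as a filter over repository metadata. A hypothetical sketch (entirely my invention, not the authors’ tooling):

    // One candidate repository's metadata.
    record Project(String language, int kloc, int developers,
                   boolean buildsCleanly, boolean documented) {}

    class ProjectFilter {
        static boolean candidate(Project p) {
            return p.language().equals("Java")
                && p.kloc() >= 5 && p.kloc() <= 11 // challenging but not impossible
                && p.developers() >= 2             // suits parallel team work
                && p.buildsCleanly()               // no missing jar files
                && p.documented();
        }
    }

Even with something like this, the authors’ experience suggests the manual vetting (cool, recent, modular) is where most of the time goes.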

That’s not a great figure. The takeaway? If you’re going to try to find projects for students, it’s going to take a while and the final yield is about 2%. Woo. The class ended up picking 16 projects and the teams were able to comprehend the code (with staff help). Most of the enhancements, interestingly, involved GUIs. (That’s not so great, in my opinion; I’d always prefer to see functional additions first and shiny second.)

In concluding, Robert said that it’s possible to find OSS projects but it’s a lot of work. A search capability for OSS repositories would be really nice. Oh – now he’s talking about something else. Here it comes!

Small projects are not built and set up to the same standard as larger projects. They are harder to build, less structured and have lower quality documentation, most likely because it’s one person building it and they don’t notice the omissions. The second observation is that running more projects is harder on the staff. The lab supervisor ends up getting hammered. The response in later offerings was to offer fewer but larger projects (better designed and well documented), so the lab supervisor can get away with learning fewer projects. In the next offering, they increased the project size (40-100K lines) and gave the students the build information that was required (working it out yourself is frustrating without being amazingly educational). Overall, even with the same projects, teams produced different enhancements but with a lot less stress on the lab instructor.

Rather unfortunately, I had to duck out so I didn’t see Claudia’s final talk! I’ll write it up as a separate post later. (Claudia, you should probably re-present it at home. 🙂 )


Three Stories: #3 Taking Time for Cats

There are a number of draft posts sitting on this blog. Posts which, for one reason or another, I’ve either never finished, because the inspiration ran out, or never published, because I decided not to share them. Most of them were written when I was trying to make sense of being too busy, while at the same time I was taking on more work and feeling bad about not being able to commit properly to everything. I probably won’t ever share many of these posts but I still want to talk about some of the themes.

So, let me tell you a story about cats.

One of the things about cats is that they can be mercurial, creatures of fancy and rapid mood changes. You can spend all day trying to get a cat to sit on your lap and, once you’ve given up and sat back down, 5 minutes later you find a cat on your lap. That’s just the way of cats.

When I was very busy last year, and the year before, I started to see feedback comments from my students that said things like “Nick is great but I feel bad interrupting him”, or I’d try to squeeze them into the 5 minutes I had between other things. Now, students are not cats, but they do have times when they feel they need to come and see you and, sometimes, when that time passes, the opportunity is lost. This isn’t just students, of course, this is people. That’s just the way of people, too. No matter how much you want them to be well organised, predictable and well behaved, sometimes they’re just big, bipedal, mostly hairless cats.

One day, I decided that the best way to change my frantic behaviour was to set a small goal, to make me take the time I needed for the surprising opportunities that occurred in a day.

I decided that every time I was walking around the house, even if I was walking out to go to work and thought I was in a hurry, if one of the cats came up to me, I would pay attention to it: scratch it, maybe pick it up, talk to it, and basically interact with the cat.

Over time, of course, what this meant was that I saw more of my cats and I spent more time with them (cats are mercurial but predictable about some things). The funny thing was that the 5 minutes or so I spent doing this made no measurable difference to my day. And making more time for students at work started to have the same effect. Students were happier to drop in to see if I could spend some time with them and were better about making appointments for longer things.

Now, if someone comes to my office and I’m not actually about to rush out, I can spend that small amount of time with them, possibly longer. When I thought I was too busy to see people, I was. When I thought I had time to spend with people, I could.

Yes, this means that I have to be a little more efficient and know when I need to set aside time and do things in a different way, but the rewards are enormous.

I only realised the true benefit of this recently. I flew home from a work trip to Melbourne to discover that my wife and one of our cats, Quincy, were at the Animal Emergency Hospital, because Quincy couldn’t use his back legs. There was a lot of uncertainty about what was wrong and what could be done and, at one point, he stopped eating entirely and it was… not good there for a while.

The one thing that made it even vaguely less awful in that difficult time was that I had absolutely no regrets about the time that we’d spent together over the past 6 months. Every time Quincy had come up to say ‘hello’, I’d stopped to take some time with him. We’d lounged on the couch. He’d napped with me on lazy Sunday afternoons. We had a good bond and, even when the vets were doing things to him, he trusted us and that counted for a lot.

Quincy is now almost 100% and is even more of a softie than before, because we all got even closer while we were looking after him. By spending (probably at most) another five minutes a day, I was able to be happier about some of the more important things in my life and still get my “real” work done.

Fortunately, none of my students are very sick at the moment, but I am pretty confident that I talk to them when they need to (most of the time, there’s still room for improvement) and that they will let me know if things are going badly – with any luck at a point when I can help.

Your time is rarely your own but at least some of it is. Spending it wisely is sometimes not the same thing as spending it carefully. You never actually know when you won’t get the chance again to spend it on something that you value.