Getting Gephi running on OS X Yosemite #gephi @Elijah_Meeks
Posted: December 18, 2014 | Filed under: Education | Tags: data visualisation, education, education research, Elijah Meeks, Gephi, network visualisation, os x, visualisation, yosemite

A really quick one. Gephi 0.8.2 (beta) is a great tool but it’s very picky about the Java version it uses. If you’re on OS X and went to Yosemite, then it probably doesn’t work anymore.
This link gives you some very, very simple instructions for getting it working again. Thank you, Sumnous!
SIGCSE 2014: Collecting and Analysing Student Data 1, Paper 3, Thursday 3:15 – 5:00pm (#SIGCSE2014)
Posted: March 7, 2014 | Filed under: Education | Tags: codebrowser, education, education research, higher education, learning, novices, SIGCSE2014, teaching, thinking

Ok, this is the last paper; with any luck I can summarise it in four words so you don’t die of work poisoning. The final talk was “Using CodeBrowser to Seek Difference Between Novice Programmers” by Kenny Heinonen, Kasper Hirvikoski, Matti Luukkainen, and Arto Vihavainen, from the University of Helsinki. I regret to say that, due to some battery issues, this blog is probably going to be cut short. My apologies to the speakers!
The takeaway from the talk is that CodeBrowser is a fancy tool for identifying challenges that students face as they are learning to program. It uses your snapshot data and, if you have lots of students, course outcomes and other measures should be used to find a small number of students to analyse first. (Oh, and penguins are cool.)
Helsinki has hundreds of local students and thousands of MOOC participants learning to program, and the system records their progress as they learn. It is built on top of NetBeans and provides scaffolding for students as they learn to program. Ok, so we’re recording the students’ progress, but so what? Well, we have snapshots with time and source and we can use this to identify students at risk of dropping CS1 and a parallel maths course. (Retention and early drop-out? Subjects close to my heart!) It can also be used to seek insight into the problems that students are facing. There are not a great many systems that allow you to analyse and visualise code snapshots, apparently.
Looks interesting, I’ll have to go and check it out!
Sorry, battery is going, committing before this all goes away!
SIGCSE 2014: Collecting and Analysing Student Data 1, Paper 2, Thursday 3:15 – 5:00pm (#SIGCSE2014)
Posted: March 7, 2014 | Filed under: Education | Tags: blackbox, BlueJ, education, education research, higher education, learning, novices, programming, sigcse, SIGCSE2014

Whoo! I nearly burnt out a digit writing up the first talk but it’s a subject close to my heart. I’ll try to be a little more terse for these next two talks.
The second talk in this session was “Blackbox: A Large Scale Repository of Novice Programmers’ Activity” by the amazing Blackbox team at Kent, Neil Brown, Michael Kölling, Davin McCall, and Ian Utting. The Blackbox data is the anonymised student data from students coding in the BlueJ Java programming environment. It’s a rich source of information on how students code and Mark and I have been scheming to do something with the Blackbox data for some time. With Ian and Neil here, it’s a good opportunity to steal their brains. I tried to get Ian to agree to doing all the work but it turns out that he’s been in the game long enough not to say “yes” when someone asks him to do something without context. (Maybe it’s just me.)
Michael was presenting, with some help from Neil, and reviewed the relationship between Blackbox and BlueJ. BlueJ is an educational programming environment for CS education using Java, dating back to the original Blue in 1996. (For those who don’t know, that’s old for this kind of thing. We should throw it a party.) BlueJ is a graphically operated development environment so novice programmers can drag things out to build programs. It’s a well-established and widely used environment.
(Hey, that means BlueJ is 18. Someone buy BlueJ a beer.)
BlueJ had about 2,000,000 users in 2013, who use it for about three months and then move on (it’s not a production tool, it’s a learning environment). The idea of Blackbox came out of SIGCSE sessions about three years ago where research questions were raised with nice set-ups and good designs, but really small student groups. One of our common problems is having enough students to actually do a big study and, frankly, all of us are curious about how students code. (It’s really hard to tell this from the final program, trust me.) So BlueJ has lots of users – can we look at their data and then share this with people?
Of course, the first question is “what do we collect?” Normally, we’d collect what we need to answer a research question but this data was going to be used to support lots of different (and currently unasked) research questions. The community was consulted at SIGCSE in 2012 but there has been an evolution of this over time. There are a lot of things collected – go and look at them in the paper because Michael flicked past that slide! 🙂
From an ethical standpoint, participation is an explicit decision made by the student to have their data collected or not. (This does raise the spectre of bias, especially as all the students must be over 16 for legal reasons.) So it’s opt in and THEN anonymised just to make it totally tasty from an ethical perspective.
For each session, the data collected includes start time, end time, project, path and userID (centrally anonymised for tracking).
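(Just to make the shape of that concrete, here’s a minimal sketch of what such a session record might look like; the class and field names are my own guesses for illustration, not Blackbox’s actual schema.)

```java
import java.time.Instant;

// Hypothetical shape of a Blackbox-style session record. The field names are
// my guesses from the talk, not the project's real schema.
public final class SessionRecord {
    final Instant startTime;  // when the BlueJ session began
    final Instant endTime;    // when it ended
    final String project;     // project identifier
    final String path;        // (anonymised) project path
    final String userId;      // centrally anonymised user identifier

    SessionRecord(Instant startTime, Instant endTime,
                  String project, String path, String userId) {
        this.startTime = startTime;
        this.endTime = endTime;
        this.project = project;
        this.path = path;
        this.userId = userId;
    }
}
```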
So much for keeping it short, hey? Here’s a quick picture to give you a break.
Other things that can be captured are object creation and invocation among many other useful measures. For me, the fact that you can see how and when students are testing is fascinating, as it allows us to evaluate the whole expectation, observation and reflection scientific cycle in action.
The Blackbox project has already been running for 9 months. The opt-in rate is 40% (higher than I or anyone else expected). This means that there’s data from 250,000 users, recording roughly 11 events per second, over more than 1,000,000 projects and 20,000,000 compilations. What a fantastic resource! Michael then handed over to Neil to talk about the challenges.
Neil talked about tracking users, starting from the problem that one machine profile does not necessarily correspond to one user. Another problem is anonymisation: stripping project paths and the code where possible. You can’t guarantee anonymisation, because people sometimes use their own names as variable or class names, but they do what they can, and it’s made clear on opt-in what’s going to happen. Data integrity is another challenge. Is it complete? No – it’s client side and there’s no guarantee of completeness, or even connectivity. But the data that you do have for each session is consistent. So the data is consistent but not complete. If you want, you can locally tie your own data to the Blackbox data that they have on your students, but then ethics becomes your problem. This can be done with Experiment and Participant Identifiers as part of the set-up so your students can be grouped. More example mini-analyses are in the paper.
Looking at error frequency, Neil talked about certain errors and how their frequency changed over the weeks of 2013 (e.g. “semicolon expected”, “unknown variable”). Over time, the syntax errors decreased (suggesting a learning effect) but others stayed more constant.
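(If you wanted to run a similar error-frequency analysis over your own snapshot data, a minimal sketch might look like the following; CompileEvent and its fields are my stand-ins for illustration, not Blackbox’s API.)

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;

// Count compile errors per (week, error type). CompileEvent is a stand-in
// for whatever record type your own snapshot data actually uses.
public class ErrorFrequency {

    public record CompileEvent(LocalDate date, String errorType) {}

    public static Map<Integer, Map<String, Long>> byWeek(List<CompileEvent> events) {
        WeekFields wf = WeekFields.of(Locale.UK);
        Map<Integer, Map<String, Long>> counts = new TreeMap<>();
        for (CompileEvent e : events) {
            int week = e.date().get(wf.weekOfWeekBasedYear());
            counts.computeIfAbsent(week, w -> new TreeMap<>())
                  .merge(e.errorType(), 1L, Long::sum);
        }
        return counts; // e.g. {10={"; expected"=412, "cannot find symbol"=187}, ...}
    }
}
```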
The data is not completely open: you need to request access as a researcher and sign a privacy and access restriction agreement. Students need not apply! There’s a SIGCSE workshop on this on Saturday but I can’t go as my Puzzle Based Learning workshop is on at the same time. Great resource, go and check it out!
SIGCSE 2014: Research: Concept Inventories and Neo-Piagetian Theory, Thursday 1:45-3:00pm (#SIGCSE2014)
Posted: March 7, 2014 | Filed under: Education | Tags: concept inventory, education, education research, higher education, learning, peer instruction, SIGCSE2014, teaching, thinking

The first talk was “Developing a Pre- and Post- Course Concept Inventory to Gauge Operating Systems Learning”, presented by Kevin Webb.
Kevin opened by talking about the difficulties we have in sharing our comparison of student learning behaviour and performance. Assessment should be practical, technical, comprehensive, and, most critically, comparable so you can compare these results across instructors, courses and institutions. It is, as we know, difficult to compare homework and lab assignments, student surveys and exam results, for a wide range of reasons. Concept inventories, according to Kevin, give us a mechanism for combining the technical and comparable aspects.
Concept inventories are short, standardised exams that target high-level conceptual take-aways to reveal systematic misconceptions, in MCQ format, deployed before and after courses. You can supplement your courses with the small exam to see how student learning is progressing and you can use this to compare performance and learning between classes. The one you’ve probably heard of is the Physics Force Concept Inventory, which Mazur talks about a lot as it was the big motivator for Peer Instruction to address shallow conceptual learning.
There are two Concept Inventories for CS but they’re not publicly available or even maintained anymore. When they were run, students were less successful than expected – only 40-60% of the key concepts were successfully learned AFTER the course. If your students were struggling with 40% of the key concepts, wouldn’t you like to know?
This work hopes to democratise CI development, using open source principles. (There is an ITiCSE paper coming soon, apparently.) This work has some preliminary development of a CI for Operating Systems.
Goals and challenges included dealing with the diversity of OS courses and trading off which aspects would best fit into the CI. The researchers also wanted it to be transparent and flexible to make questions available immediately and provide a path (via GitHub) for collaboration and iteration. From an accessibility perspective, developing questions for a universal pre-test is hard, and the work is based in the real world where possible.
An example of this is paging/caching replacement, which exists because of the limited capacity of some of these storage mechanisms, so the key concept is locality, with an “evict oldest” policy. What happens if the students don’t have the vocabulary of a page table or staleness yet? How about an example of books on your desk, via books on a shelf, via books in the library? (We used similar examples in our new course to explain memory structures in C++ with a supermarket and its various shelves.)
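(For what it’s worth, the “evict oldest” idea is easy to show without any page-table vocabulary at all – here’s a little Java sketch of my own, not from the talk, of a fixed-capacity “desk” that sends the longest-resident book back to the shelf when a new one arrives.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A "desk" that can hold only a few books: when a new one arrives and the
// desk is full, the book that has been there longest goes back to the shelf.
// Same idea as an "evict oldest" page-replacement policy.
public class Desk<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public Desk(int capacity) {
        super(16, 0.75f, false); // false = insertion order, i.e. oldest first
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the oldest entry once we're over capacity
    }

    public static void main(String[] args) {
        Desk<String, String> desk = new Desk<>(2);
        desk.put("SICP", "on desk");
        desk.put("TAOCP", "on desk");
        desk.put("K&R", "on desk");        // desk is full, so SICP goes back to the shelf
        System.out.println(desk.keySet()); // [TAOCP, K&R]
    }
}
```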
Results so far indicate that taking the OS course improved performance (good) but not all concepts showed an equal increase – some concepts appear to be less intuitive than others. Student confidence increased, even where they weren’t getting the right answers. Scenario “word problems” appear to be challenging: students opted for similar, less efficient solutions. (This may be related to the “long document hard to read” problem that we’ve observed locally.)
The next example was on indirection with pointers where simplifying the pointer chain was something students intuitively did, even where the resulting solution was sub-optimal. This was tested by asking two similar questions on the exam, where the first was neutrally stated as a “should we” and the second asked them to justify the complexity of something, which gave them a tip as to where the correct answer lay.
Another example used input/output and polling: presenting the device without a name deprived the students of the ability to use a common pattern. When, in an exam, the device was named (as a disk), the correct answer was chosen, but the reasoning behind the answer was still lacking – so they appear to be pattern matching rather than reasoning their way to the answer. From some more discussion, students unsurprisingly appear to choose solutions that match what they have already seen – so they will apply mutexes even in applications where they aren’t needed, because we drown them in locks. When the same problem was presented as a code example without “constricting” names, the students could then solve it correctly, without locks, despite almost all of them wanting to use locks earlier.
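(The “drowning in locks” point rings true. As a made-up illustration – mine, not the paper’s – here’s the kind of thing students reach for versus what the problem actually needs when no state is shared between threads.)

```java
import java.util.List;

public class Totals {

    // What students reach for: a lock, even though this method only ever
    // touches its own parameters and local state.
    public static synchronized long totalWithLock(List<Integer> scores) {
        long total = 0;
        for (int s : scores) {
            total += s;
        }
        return total;
    }

    // What the problem needs: nothing is shared, so no lock is required.
    public static long total(List<Integer> scores) {
        long total = 0;
        for (int s : scores) {
            total += s;
        }
        return total;
    }
}
```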
Interesting talk with a fair bit to think about. I need to read the paper! The concept inventory can be found at https://github.com/osconceptinventory and the group welcomes collaboration, so go and … what’s the verb for “concept inventory” – inventorise? Anyway, go and do it! (There was a good reminder in question time to mine your TAs for knowledge about what students come to talk to them about – those areas of uncertainty might be ripe for redevelopment!)
The next talk was “Misconceptions and Concept Inventory Questions for Hash Tables and Binary Search Trees”, presented by Kuba Karpierz (a senior Computer Science student at the University of British Columbia). Kuba reviewed the concept inventory concept for newcomers to the room. (Poor Kuba was slightly interrupted by a machine shutdown that nearly broke his presentation, but he carried on with little evidence of a problem and recovered well.) At a minimum, concept inventories must be brief and multiple choice.
Students found hash table resizing to be difficult, so this was nominated as a CI question. Students would sketch the wrong graph for resizing, ignoring the resize cost and exaggerating the curve shape of what should be a linear increase. The team used think-aloud exercises to explore why students picked the wrong solution. Regrettably, the technical problems continued and made it harder to follow the presentation.
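(To see why the right graph is roughly a straight line, here’s a quick sketch of my own, not the paper’s, that counts every element copy a doubling hash table performs: individual resizes get more expensive, but the running total never exceeds about 2n copies, so the cumulative cost of n insertions stays linear.)

```java
// Count the copies done by a hash table that doubles its capacity when full.
// The per-resize cost grows, but the running total stays below about 2n,
// so the cumulative cost of n insertions is still linear in n.
public class ResizeCost {
    public static void main(String[] args) {
        int capacity = 8;
        long copies = 0;
        for (int inserted = 1; inserted <= 1_000_000; inserted++) {
            if (inserted > capacity) {      // table is full: double and rehash
                copies += capacity;         // every existing element is copied once
                capacity *= 2;
            }
            if (inserted % 250_000 == 0) {
                System.out.printf("n = %,d  total copies = %,d  copies/n = %.2f%n",
                        inserted, copies, (double) copies / inserted);
            }
        }
    }
}
```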
A large number of students had no idea how to resize the hash table (for reasons I won’t explain) but this was immediately obvious after the concept inventory exam, rather than having to dig it out of the exams. The next example was on Binary Search Trees and the misconception that they are always balanced. (It turns out that students are conflating them with heaps.) Looking at the CI MCQs for this, it’s apparent that we were teaching with these exemplars in lectures, but not as an MCQ short exam. Food for thought. The example shown did make me think because it was deliberately ambiguous. I wondered if it would be better if it were slightly less challenging and the students could pick the right answer. Apparently they are looking at this in a different question.
The final talk was “Neo-Piagetian Theory as a Guide to Curriculum Analysis”, presented by Claudia Szabo, from our Computer Science Education Research group. This is the work that we’re using as the basis for the course redesign of our local Object Oriented Programming course so I know this work quite well! (It’s nice to see theory being put into practice, though, isn’t it?)
Claudia started with a discussion of curriculum analysis – the systematic processes that we use to guide teachers to identify instructional goals and learning objectives. We develop, we teach, we observe and we refine, but this refinement may lead to divergence from the originally stated goals. The course loses focus and structure, and possibly even loses its scaffolding. Claudia’s paper has lots of good references for the various theory areas so I won’t reproduce them here but, to get back to the talk, Claudia covered the Piagetian stages of cognitive development in the child: sensorimotor, pre-operational, concrete operational and formal operational. In short, you can handle concepts in pre-operational, you can perform logic and solve for specific situations in concrete operational, but you only get to abstract thought and true problem-solving in the formal operational mode. (Pre-operational is ages 2-7, concrete is 7-11 and formal is 11-15 by the time it is achieved. This is not a short process, but it also explains why we teach things differently to different age groups.)
Fundamentally, Neo-Piagetian theory starts from the premise that the cognitive developmental stages that humans go through during childhood are seen again, in the same stages, as we learn very new and different concepts in new contexts, including mathematics and computer science. Ultimately, this places limitations on the amount of abstraction versus concrete reasoning that students can apply. (Without trying to start an Internet battle, neo-Piagetian theory is one of the theories in this space, with the other two that I generally associate with it being Threshold Concepts and Learning Edge Momentum – we’re going to hold a workshop in Australia shortly to talk about how these intersect, conflict and agree, but I digress.)
So this paper is looking to analyse learning and teaching activities to determine the level at which we are teaching a concept and the level at which we are assessing it – this should allow us to determine prerequisite concepts (a concept is tested before being taught) and assessment leaps (a concept is assessed at a level higher than we taught it). The approach uses an ACM CS curriculum basis, combined with course-specific materials, and a neo-Piagetian taxonomy to classify teaching activities, to work out whether we have not provided the correct pre-requisite material or whether we are assessing at a higher level than we taught students (or provided a learning environment for them to reach that level, if we’re being precise). There’s a really good write-up in the paper to show you how conceptual handling and abstraction changes over the developmental stages.
For example, in representational systems, a concrete explanation of memory allocation is “memory allocation is when you use the keyword new to create a variable”. At the familiar Single Abstraction level, we could rely upon knowledge of the programming language and the framework to build upon that and explain how memory allocation dynamically requests memory from the free store, initialises it and returns a pointer to the allocated space. If the student were able to carry out Single Abstraction at the global level, they would be able to map their knowledge of memory allocation in C++ into a new language such as Java. As the student develops, they can map abstractions to a global level, so class hierarchies in C++ can be mapped into a similar understanding in Java, for example.
The course that was analysed, Object Oriented Programming, had a high failure rate, and students were struggling in the downstream course with fundamental concepts that we thought we had covered in OOP. So a concept definition document was produced to give a laundry list of concepts. (Pro tip: concept inventories get big quickly. Be ruthless in your trimming.) For the selected concepts, the authors looked at where each was taught, how it was taught and then how it was assessed. This quickly identified problems that needed to be fixed. One example: for the important C++ concept of strings, assessment had been carried out before the concrete operational teaching had taken place! We start to see why the failure rate had been creeping up over time.
As the developer of the new OOP course (in association with the speaker), I find this framework REALLY handy because you are always thinking “How am I teaching this? Can I assess it at this level yet?” If you do this up front then you can design a much better course, in my opinion, as you can move things around to get them in the right order at the right time and have enough time to rewrite materials to match the levels. It doesn’t actually take that long to run over the course and it clearly visualises where our pitfalls are.
Next on the table is looking at second and third year courses and improving the visualisation – but I suspect I may have to get involved in that one, personally.
Good session! Lots of great information. Seriously, if you’re not at SIGCSE, why aren’t you here?
SIGCSE 2014: Automated Assessment Session, Thursday 10:45-12:00
Posted: March 7, 2014 | Filed under: Education | Tags: automated assessment, education, education research, higher education, learning, SIGCSE2014, teaching

This session was the one I spoke in and I think it went well. Lots of good questions, which is always handy, and I can only hope that the answers made sense! The next talk was “Adaptively Identifying Non-Terminating Code when Testing Student Programs” presented by Stephen Edwards.
How do we handle infinite loops in student testing? Killing the process works, but what happens to later tests if we use a timeout-based termination? What happens to the data from earlier tests? What we’re doing is wasting time up to the timeout. Stephen put the wasted time at 99.2 hours of cumulative delay in the 2012-2013 academic year, over nearly 9,000 loop cases. A coarse timeout would have resulted in the loss of any results from these programs.
(This is a problem close to my heart, so I was listening intently!) Stephen talked about using JUnit 4 rules, where you can add timeouts to a given rule, but these have to be added to every test class, they’re only in JUnit 4 (not JUnit 3), and a single flat timeout can still cause delays. So, sadly, we can’t use this solution to address our key concerns. So they built on the JUnit 4 rules but wanted to:
- create adaptive timeout rules
- extend JUnit to run JUnit 3-style tests under JUnit 4
- automatically inject the timeout rule into every test class transparently
The adaptive rule starts with a fixed timeout and then adapts it. I didn’t quite follow some of this so I’ll have to read the paper. There are hard upper and lower bounds on the time limits, which are customisable, with the time taken being roughly equivalent to that of the slowest terminating code. They’ve now developed the rule and integrated it with their existing code.
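(I’ll have to read the paper for their exact policy, but to show the JUnit 4 machinery involved, here’s a rough sketch of an adaptive timeout rule: it starts from an initial limit, clamps it between hard lower and upper bounds, and lets the slowest terminating test so far raise the bar for later tests. The adaptation policy is my guess for illustration, not Stephen’s, and their real tool also injects the rule into every test class automatically, which this sketch doesn’t attempt.)

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Sketch of an adaptive timeout rule for JUnit 4. Attach it with @Rule in a
// test class; the adaptation policy below is illustrative only.
public class AdaptiveTimeout implements TestRule {
    private final long lowerBoundMs;
    private final long upperBoundMs;
    private volatile long currentLimitMs;   // shared across tests using this rule

    public AdaptiveTimeout(long initialMs, long lowerBoundMs, long upperBoundMs) {
        this.lowerBoundMs = lowerBoundMs;
        this.upperBoundMs = upperBoundMs;
        this.currentLimitMs = initialMs;
    }

    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                ExecutorService executor = Executors.newSingleThreadExecutor();
                Future<?> run = executor.submit(() -> {
                    try {
                        base.evaluate();       // run the actual test body
                        return null;
                    } catch (Throwable t) {
                        throw new RuntimeException(t);
                    }
                });
                long started = System.nanoTime();
                try {
                    run.get(currentLimitMs, TimeUnit.MILLISECONDS);
                    // Terminating test: let the slowest one so far set the bar,
                    // clamped to the hard lower and upper bounds.
                    long elapsedMs = (System.nanoTime() - started) / 1_000_000;
                    long proposed = Math.max(currentLimitMs, 2 * elapsedMs);
                    currentLimitMs = Math.min(upperBoundMs, Math.max(lowerBoundMs, proposed));
                } catch (TimeoutException e) {
                    run.cancel(true);          // interrupt the (probably looping) test
                    throw new AssertionError(description.getDisplayName()
                            + " exceeded the adaptive limit of " + currentLimitMs + " ms");
                } catch (ExecutionException e) {
                    Throwable wrapped = e.getCause();   // our RuntimeException wrapper
                    throw wrapped.getCause() != null ? wrapped.getCause() : wrapped;
                } finally {
                    executor.shutdownNow();
                }
            }
        };
    }
}
```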
To evaluate it, they selected a single data structures programming assignment with 4,214 program submissions and regraded them using the new approach: 82 instructor-written reference tests (!!!), resulting in 345,456 test executions (that’s a very funny number!). A very small number of tests caused very large problems for students – 2 students had previously received no feedback at all because everything that they submitted had an infinite loop in it!
One of the questions asked how you bootstrap the initial timeout periods – data driven would be ideal but, without any data, there’s a problem. Stephen wants to do this experiment but hasn’t had a chance to do it yet.
The next talk was “Can Computers Compare Student Code Solutions as Well as Teachers?” presented by Matheus Gaudencio, from the Software Practices Laboratory. They use a lot of automatic tests and code comparison, so their first question was whether they, as teachers, had a similar way of examining and comparing code (the old “how many different marks can you get for the same essay” chestnut). They evaluated 11 teachers, generating a reference solution which the teachers had to compare to two sample solutions, based on which was the best approximation to the reference code. Results varied, down to a low of 62% agreement. From eyeballing his data, it looks like 75-80% agreement is the average.
Matheus then looked at other strategies, including token-based and tree-based approaches (out of 7 different strategies), for computational comparison of code. There has to be a threshold (which the paper refers to as Delta) which allows some rubberiness in the similarity equations. They produced a hierarchical clustering tool, which can be found at http://relatedecode.appsot.com. If you’re interested in this, you can contact Matheus at matheusgr@gmail.com
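(To give a feel for what a token-based strategy with a Delta threshold might look like, here’s a toy sketch of my own, not the paper’s: tokenise both solutions, compute a Jaccard-style similarity over the token sets, and treat them as equivalent when the score clears the threshold.)

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

// A toy token-based comparison: split code into identifier/number tokens,
// then compare the two token sets with Jaccard similarity. A real strategy
// would normalise identifiers and weight tokens; this only shows the shape.
public class TokenSimilarity {

    static Set<String> tokens(String code) {
        return Arrays.stream(code.split("[^A-Za-z0-9_]+"))
                     .filter(t -> !t.isEmpty())
                     .collect(Collectors.toSet());
    }

    static double jaccard(String a, String b) {
        Set<String> ta = tokens(a), tb = tokens(b);
        Set<String> union = new HashSet<>(ta);
        union.addAll(tb);
        Set<String> inter = new HashSet<>(ta);
        inter.retainAll(tb);
        return union.isEmpty() ? 1.0 : (double) inter.size() / union.size();
    }

    // delta plays the role of the threshold mentioned in the talk
    static boolean similar(String a, String b, double delta) {
        return jaccard(a, b) >= delta;
    }

    public static void main(String[] args) {
        String ref = "int sum = 0; for (int x : xs) sum += x; return sum;";
        String sub = "int total = 0; for (int v : xs) total += v; return total;";
        System.out.printf("similarity = %.2f%n", jaccard(ref, sub));
        System.out.println(similar(ref, sub, 0.5));
    }
}
```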
SIGCSE Best Paper Award Winner – Dr Claudia Szabo, University of Adelaide
Posted: March 7, 2014 | Filed under: Education | Tags: education, education research, higher education, neopiaget, sigcse, SIGCSE2014, teaching, teaching approaches

Congratulations to my colleague, friend and running partner, Dr Claudia Szabo, on winning the SIGCSE Best Paper Award for a paper entitled “Student Projects are Not Throwaways: Teaching Practical Software Maintenance in a Software Engineering Course”. Claudia has three papers here because overachievement but, more seriously, this is a fantastic achievement, especially for her first SIGCSE. This is also really useful research that has direct practical applications for people who are teaching Software Engineering AND we’re working together to build courses based on some of her earlier work on Neo-Piagetian analysis of existing courses.
Here’s a picture! Yay, Claudia!