Looking back on 2015
Posted: January 17, 2016 Filed under: Education | Tags: ACARA, computer science education, CSER MOOCs, digital technologies

Katrina has a fantastic summary of what's been going on in our research group. This will be a nice relief from my stuff, so enjoy!
2015 has been an amazing year for the CSER group at the University of Adelaide. We started the year with the Australian Curriculum Digital Technologies learning area stalled in parliament, with no idea if or when it might be approved. This was a difficult time for educators across Australia, as individual teacher organisations and […]
https://katrinafalkner.wordpress.com/2016/01/15/looking-back-on-2015/
Designing a MOOC: how far did it reach? #csed
Posted: June 10, 2015 Filed under: Education, Opinion | Tags: advocacy, authenticity, blogging, collaboration, community, computer science education, constructivist, contributing student pedagogy, curriculum, data visualisation, design, education, educational problem, educational research, ethics, feedback, higher education, in the student's head, learning, measurement, MOOC, moocs, principles of design, reflection, resources, students, teaching, teaching approaches, thinking, tools

Mark Guzdial posted over on his blog on “Moving Beyond MOOCS: Could we move to understanding learning and teaching?” and discusses aspects (that still linger) of MOOC hype. (I’ve spoken about MOOCs done badly before, as well as recording the thoughts of people like Hugh Davis from Southampton.) One of Mark’s paragraphs reads:
“The value of being in the front row of a class is that you talk with the teacher. Getting physically closer to the lecturer doesn’t improve learning. Engagement improves learning. A MOOC puts everyone at the back of the class, listening only and doing the homework”
My reply to this was:
“You can probably guess that I have two responses here, the first is that the front row is not available to many in the real world in the first place, with the second being that, for far too many people, any seat in the classroom is better than none.
But I am involved in a, for us, large MOOC so my responses have to be regarded in that light. Thanks for the post!”
Mark, of course, called my bluff and responded with:
“Nick, I know that you know the literature in this space, and care about design and assessment. Can you say something about how you designed your MOOC to reach those who would not otherwise get access to formal educational opportunities? And since your MOOC has started, do you know yet if you achieved that goal — are you reaching people who would not otherwise get access?”
So here is that response. Thanks for the nudge, Mark! The answer is a bit long but please bear with me. We will be posting a longer summary after the course is completed, in a month or so. Consider this the unedited taster. I’m putting this here, early, prior to the detailed statistical work, so you can see where we are. All the numbers below are fresh off the system, to drive discussion and answer Mark’s question at, pretty much, a conceptual level.
First up, as some background for everyone, the MOOC team I’m working with is the University of Adelaide‘s Computer Science Education Research group, led by A/Prof Katrina Falkner, with me (Dr Nick Falkner), Dr Rebecca Vivian, and Dr Claudia Szabo.
I’ll start by noting that we’ve been working to solve the inherent scaling issues in the front of the classroom for some time. If I had a class of 12 then there’s no problem in engaging with everyone, but I keep finding myself in rooms of 100+, which forces some people to sit away from me and also limits the number of meaningful interactions I can have with individuals in one setting. While I take Mark’s point about the front of the classroom, and the associated research is pretty solid on this, we encountered an inherent problem when we identified that students were better off down the front… and yet we kept teaching to rooms with more students than front. I’ll go out on a limb and say that this is actually a moral issue that we, as a sector, have had to look at and ignore in the face of constrained resources. The nature of large spaces and people, coupled with our inability to hover, means that we can either choose to have a row of students effectively in a semi-circle facing us, or we accept that after a relatively small number of students or rows, we have constructed a space that is inherently divided by privilege and will lead to disengagement.
So, Katrina’s and my first foray into this space was dealing with the problem in the physical lecture spaces that we had, with the 100+ classes that we had.
Katrina and I published a paper on “contributing student pedagogy” in Computer Science Education 22 (4), 2012, to identify ways for forming valued small collaboration groups as a way to promote engagement and drive skill development. Ultimately, by reducing the class to a smaller number of clusters and making those clusters pedagogically useful, I can then bring the ‘front of the class’-like experience to every group I speak to. We have given talks and applied sessions on this, including a special session at SIGCSE, because we think it’s a useful technique that reduces the amount of ‘front privilege’ while extending the amount of ‘front benefit’. (Read the paper for actual detail – I am skimping on summary here.)
We then got involved in the support of the national Digital Technologies curriculum for primary and middle school teachers across Australia, after being invited to produce a support MOOC (really a SPOC, small, private, on-line course) by Google. The target learners were teachers who were about to teach or who were teaching into, initially, Foundation to Year 6 and thus had degrees but potentially no experience in this area. (I’ve written about this before and you can find more detail on this here, where I also thanked my previous teachers!)
The motivation of this group of learners was different from a traditional MOOC because (a) everyone had both a degree and probable employment in the sector which reduced opportunistic registration to a large extent and (b) Australian teachers are required to have a certain number of professional development (PD) hours a year. Through a number of discussions across the key groups, we had our course recognised as PD and this meant that doing our course was considered to be valuable although almost all of the teachers we spoke to were furiously keen for this information anyway and my belief is that the PD was very much ‘icing’ rather than ‘cake’. (Thank you again to all of the teachers who have spent time taking our course – we really hope it’s been useful.)
To discuss access and reach, we can measure the teachers who’ve taken the course (somewhere in the low thousands) and then estimate the number of students potentially assisted. Since each of those teachers typically takes a class of 20-30 students a year, that’s when it gets a little crazy, because it works out at somewhere around 30-40,000.
In his talk at CSEDU 2014, Hugh Davis identified the student groups who get involved in MOOCs as follows. The majority of people undertaking MOOCs were life-long learners (older, degreed, M/F 50/50), people seeking skills via PD, and those with poor access to Higher Ed. There is also a small group who are Uni ‘tasters’ but very, very small. (I think we can agree that tasting a MOOC is not tasting a campus-based Uni experience. Less ivy, for starters.) The three approaches to the course once inside were auditing, completing and sampling, and it’s this final one that I want to emphasise because this brings us to one of the differences of MOOCs. We are not in control of when people decide that they are satisfied with the free education that they are accessing, unlike our strong gatekeeping on traditional courses.
I am in total agreement that a MOOC is not the same as a classroom but, also, that it is not the same as a traditional course, where we define how the student will achieve their goals and how they will know when they have completed. MOOCs function far more like many people’s experience of web browsing: they hunt for what they want and stop when they have it, thus the sampling engagement pattern above.
(As an aside, does this mean that a course that is perceived as ‘all back of class’ will rapidly be abandoned because it is distasteful? This makes the student-consumer a much more powerful player in their own educational market and is potentially worth remembering.)
Knowing these different approaches, we designed the individual subjects and overall program so that it was very much up to the participant how much they chose to take and individual modules were designed to be relatively self-contained, while fitting into a well-designed overall flow that built in terms of complexity and towards more abstract concepts. Thus, we supported auditing, completing and sampling, whereas our usual face-to-face (f2f) courses only support the first two in a way that we can measure.
As Hugh notes, and we agree through growing experience, marking/progress measures at scale are very difficult, especially when automated marking is not enough or not feasible. Based on our earlier work in contributing collaboration in the classroom, for the F-6 Teacher MOOC we used a strong peer-assessment model where contributions and discussions were heavily linked. Because of the nature of the cohort, geographical and year-level groups formed who then conducted additional sessions and produced shared material at a slightly terrifying rate. We took the approach that we were not telling teachers how to teach but we were helping them to develop and share materials that would assist in their teaching. This reduced potential divisions and allowed us to establish a mutually respectful relationship that facilitated openness.
(It’s worth noting that the courseware is creative commons, open and free. There are people reassembling the course for their specific take on the school system as we speak. We have a national curriculum but a state-focused approach to education, with public and many independent systems. Nobody makes any money out of providing this course to teachers and the material will always be free. Thank you again to Google for their ongoing support and funding!)
Overall, in this first F-6 MOOC, we had higher than usual retention of students and higher than usual participation, for the reasons I’ve outlined above. But this material was for curriculum support for teachers of young students, all of whom were pre-programming, and it could be contained in videos and on-line sharing of materials and discussion. We were also in the MOOC sweet-spot: existing degreed learners, a PD driver, and a PD requirement that depended on progressive demonstration of goal achievement, which we recognised post-course with a pre-approved certificate form. (Important note: if you are doing this, clear up how the PD requirements are met and how they need to be reported back, as early on as you can. It meant that we could give people something valuable in a short time.)
The programming MOOC, Think. Create. Code on EdX, was more challenging in many regards. We knew we were in a more difficult space and would be more in what I shall refer to as ‘the land of the average MOOC consumer’. No strong focus, no PD driver, no geographically guaranteed communities. We had to think carefully about what we considered to be useful interaction with the course material. What counted as success?
To start with, we took an image-based approach (I don’t think I need to provide supporting arguments for media-driven computing!) where students would produce images and, over time, refine their coding skills to produce and understand how to produce more complex images, building towards animation. People who have not had good access to education may not understand why we would use programming in more complex systems but our goal was to make images and that is a fairly universally understood idea, with a short production timeline and very clear indication of achievement: “Does it look like a face yet?”
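To give a flavour of what that first face might look like, here is a minimal sketch in Processing syntax (which is what Processing.js runs). This is my illustrative example rather than an excerpt from the actual course materials:

```
// A face in a handful of drawing statements.
void setup() {
  size(200, 200);               // create the canvas
  background(255);              // white background
  fill(255, 220, 180);
  ellipse(100, 100, 140, 140);  // head
  fill(0);
  ellipse(75, 85, 12, 12);      // left eye
  ellipse(125, 85, 12, 12);     // right eye
  noFill();
  arc(100, 115, 60, 40, 0, PI); // smile (bottom half of an ellipse)
}
```

Every statement has an immediate, visible effect, which is the short production timeline and clear indication of achievement in action.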
In terms of useful interaction, if someone wrote a single program that drew a face, for the first time – then that’s valuable. If someone looked at someone else’s code and spotted a bug (however we wish to frame this), then that’s valuable. I think that someone writing a single line of correct code, where they understand everything that they write, is something that we can all consider to be valuable. Will it get you a degree? No. Will it be useful to you in later life? Well… maybe? (I would say ‘yes’ but that is a fervent hope rather than a fact.)
So our design brief was that it should be very easy to get into programming immediately, with an active and engaged approach, and that we have the same “mostly self-contained week” approach, with lots of good peer interaction and mutual evaluation to identify areas that needed work to allow us to build our knowledge together. (You know I may as well have ‘social constructivist’ tattooed on my head so this is strongly in keeping with my principles.) We wrote all of the materials from scratch, based on a 6-week program that we debated for some time. Materials consisted of short videos, additional material as short notes, participatory activities, quizzes and (we planned for) peer assessment (more on that later). You didn’t have to have been exposed to “the lecture” or even the advanced classroom to take the course. Any exposure to short videos or a web browser would be enough familiarity to go on with.
Our goal was to encourage as much engagement as possible, taking into account the fact that any number of students over 1,000 would be very hard to support individually, even with the 5-6 staff we had to help out. But we wanted students to be able to develop quickly, share quickly and, ultimately, comment back on each other’s work quickly. From a cognitive load perspective, it was crucial to keep the number of things that weren’t relevant to the task to a minimum, as we couldn’t assume any prior familiarity. This meant no installers, no linking, no loaders, no shenanigans. Write program, press play, get picture, share to gallery, winning.
As part of this, our support team (thanks, Jill!) developed a browser-based environment for Processing.js that integrated with a course gallery. Students could save their own work easily and share it trivially. Our early indications show that a lot of students jumped in and tried to do something straight away. (Processing is really good for getting something up, fast, as we know.) We spent a lot of time testing browsers, testing software, and writing code. All of the recorded materials used that development environment (this was important as Processing.js and Processing have some differences) and all of our videos show the environment in action. Again, as little extra cognitive load as possible – no implicit requirement for abstraction or skills transfer. (The AdelaideX team worked so hard to get us over the line – I think we may have eaten some of their brains to save those of our students. Thank you again to the University for selecting us and to Katy and the amazing team.)
The actual student group, about 20,000 people across 176 countries, did not have the “built-in” motivation of the previous group, although they would all have their own levels of motivation. We used ‘meet and greet’ activities to drive some group formation (which worked to a degree) and we also had a very high level of staff monitoring of key question areas (which was noted by participants as being very high for EdX courses they’d taken), with everyone putting in 30-60 minutes a day on rotation. But, as noted before, the biggest trick to getting everyone engaged at the large scale is to get everyone into groups where they have someone to talk to. This was supposed to be provided by a peer evaluation system that was initially part of the assessment package.
Sadly, the peer assessment system didn’t work as we wanted it to and we were worried that it would form a disincentive, rather than a supporting community, so we switched to a forum-based discussion of the works on the EdX discussion forum. At this point, a lack of integration between our own UoA programming system and gallery and the EdX discussion system allowed too much distance – the close binding we had in the F-6 MOOC wasn’t there. We’re still working on this because everything we know and all evidence we’ve collected before tells us that this is a vital part of the puzzle.
In terms of visible output, the amount of novel and amazing art work that has been generated has blown us all away. The degree of difference is huge: armed with approximately 5 statements, the number of different pieces you can produce is surprisingly large. Add in control statements and repetition? BOOM. Every student can write something that speaks to her or him and show it to other people, encouraging creativity and facilitating engagement.
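As a small illustration of how that variety explodes (again my sketch, not a student's work), take the same drawing primitives and add a single loop:

```
// One for-loop turns one circle into a family of images: changing
// any of the small constants below produces a different picture.
void setup() {
  size(200, 200);
  background(255);
  for (int i = 0; i < 8; i++) {
    ellipse(25 + i * 22, 100, 12 + i * 2, 12 + i * 2);
  }
}
```

Every constant becomes a knob to turn, and each student's choice of knobs gives them something that is recognisably theirs.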
From the stats side, I don’t have access to the raw stats, so it’s hard for me to give you a statistically sound answer as to who we have or have not reached. This is one of the things with working with a pre-existing platform and, yes, it bugs me a little because I can’t plot this against that unless someone has built it into the platform. But I think I can tell you some things.
I can tell you that roughly 2,000 students attempted quiz problems in the first week of the course and that over 4,000 watched a video in the first week – no real surprises, registrations are an indicator of interest, not a commitment. During that time, 7,000 students were active in the course in some way – including just writing code, discussing it and having fun in the gallery environment. (As it happens, we appear to be plateauing at about 3,000 active students but time will tell. We have a lot of post-course analysis to do.)
It’s a mistake to focus on the “drop” rates because the MOOC model is different. We have no idea if the people who left got what they wanted or not, or why they didn’t do anything. We may never know but we’ll dig into that later.
I can also tell you that only 57% of the students currently enrolled have declared themselves explicitly to be male, and that is the most likely indicator that we are reaching students who might not usually be in a programming course: the remaining 43%, of whom most (33% of the total enrolment) have self-identified as women, is far higher than we ever see in classes locally. If you want evidence of reach then it begins here, as part of the provision of an environment that is, apparently, more welcoming to ‘non-men’.
We have had a number of student comments that reflect positive reach and, while these are not statistically significant, I think that this also gives you support for the idea of additional reach. Students have been asking how they can save their code beyond the course and this is a good indicator: ownership and a desire to preserve something valuable.
Of the student comments, however, this is my favourite:
I’m no artist. I’m no computer programmer. But with this class, I see I can be both. #processingjs (Link to student’s work) #code101x
That’s someone for whom this course had them in the right place in the classroom. After all of this is done, we’ll go looking to see how many more we can find.
I know this is long but I hope it answered your questions. We’re looking forward to doing a detailed write-up of everything after the course closes and we can look at everything.
The Part and the Whole
Posted: October 13, 2014 Filed under: Education | Tags: advocacy, analytics, authenticity, computer science, computer science education, GPA, learning, learning analytics, student, synecdoche, teaching, thinking

I like words a lot but I also love words that introduce me to whole new ways of thinking. I remember first learning the word synecdoche (most usually pronounced si-NEK-de-kee), where you use the word for part of something to refer to that something as a whole (or the other way around). Calling a car ‘wheels’ or champagne ‘bubbles’ are good examples of this. It’s generally interesting which parts people pick for synecdoche, because it emphasises what is important about something. Cars have many parts but we refer to them in parts as wheels and motor. I could bore you to tears with the components of champagne but we talk about the bubbles. In these cases, placing emphasis upon one part does not diminish the physical necessity of the remaining components in the object but it does tell us what the defining aspect of each of them is often considered to be.
There are many ways to extract a defining characteristic and, rather than selecting an individual aspect for relatively simple structures (and it is terrifying that a car is simple in this discussion), we use descriptive statistics to allow us to summarise large volumes of data to produce measures such as mean, variance and other useful things. In this case, the characteristic we obtain is not actually part of the data that we’re looking at. This is no longer synecdoche, this is statistics, and while we can use these measures to arrive at an understanding (and potentially move to the amazing world of inferential statistics), we run the risk of talking about groups and their measurements as if the measurements had as much importance as the members of the group.
I have been looking a lot at learning analytics recently and George Siemens makes a very useful distinction between learning analytics, academic analytics and data mining. When we analyse the various data and measures that come out of learning, we want to use this to inform human decision making to improve the learning environment, the quality of teaching and the student experience. When we look at the performance of the academy, we worry about things like overall pass rates, recruitment from external bodies and where our students go on to in their careers. Again, however, this is to assist humans in making better decisions. Finally, and not pejoratively but distinctly, data mining delves deep into everything that we have collected, looking for useful correlations that may or may not translate into human decision making. By separating our analysis of the teaching environment from our analysis of the academic support environment, we can focus on the key aspects in the specific area rather than doing strange things that try to drag change across two disparate areas.
When we start analysis, we start to see a lot of numbers: acceptable failure rates, predicted pass rates, retention figures, ATARs, GPAs. The reason that I talk about data analytics as a guide to human decision making is that the human factor reminds us to focus on the students who are part of the figures. It’s no secret that I’m opposed to curve grading because it uses a clear statement of numbers (70% of students will pass) to hide the fact that a group of real students could fail because they didn’t perform at the same level as their peers in the same class. I know more than enough about the ways that a student’s performance can be negatively affected by upbringing and prior education to know that this is not just weak sauce, but a poisonous and vicious broth to be serving to students under the guise of education.
I can completely understand that some employers want to employ people who are able to assimilate information quickly and put it into practice. However, let’s be honest, an ability to excel at University is not necessarily an indication of that. They might coincide, certainly, but it’s no guarantee. When I applied for Officer Training in the Army, they presented me with a speed and accuracy test, as part of the battery of psychological tests, to see if I could do decision making accurately at speed while under no more stress than sitting in a little room being tested. Later on, I was tested during field training, over and over again, to see what would happen. The reason? The Army knows that the skills they need in certain areas need specific testing.
Do you want detailed knowledge? Well, the numbers conspire again to undermine you, because a focus on numerical grade measures to arrive at a single characteristic value for a student’s performance (the GPA) makes students focus on getting high marks rather than learning. The GPA is not the same as the wheels of the car – it has no relationship to the student’s ability to apply themselves to arbitrary tasks nor, if I may wax poetic, does it give you a sense of the soul of the student.
We have some very exciting tools at our disposal and, with careful thought and the right attitude, there is no doubt that analytics will become a valuable way to develop learning environments, improve our academies and find new ways to do things well. But we have to remember that these aggregate measures are not people, that “10% of students” represents real, living human beings who need to be counted, and that we have a long way to go before we have an analytical approach that has a fraction of the strength of synecdoche.
ITiCSE 2014, Closing Session, #ITiCSE #ITiCSE2014
Posted: June 26, 2014 Filed under: Education | Tags: advocacy, blogging, community, computer science education, education, higher education, ITiCSE, ITiCSE 2014, ITiCSE2014, learning, teaching, thinking

Well, thanks for reading over the last three days, I hope it’s been interesting. I’ve certainly enjoyed it and, tadahh, here we are at the finish line to close off the conference. Mats opened the session and is a bit sad because we’re at the end, but reflected on the work that has gone into it with Åsa, his co-chair. Tony and Arnold were thanked for being the Program Chairs and then Arnold insisted upon thanking us as well, which is nice. I have to start writing a paper for next year, apparently. Then there were a lot of thanks, with the occasional interruption of a toy car being dropped. You should go to the web site because there are lots of people mentioned there. (Student volunteers got done twice to reflect their quality and dedication.)
Some words on ITiCSE 2015, which will be held next year in Vilnius, Lithuania from the 6th of July. There are a lot of lakes in Lithuania, apparently, and there’s something about the number of students in Sweden which I didn’t get. So, come to Lithuania because there are lots of students and a number of lakes.
The conference chairs got a standing ovation, which embarrassed me slightly because I had my laptop out, so I had to give them a crouching ovation to avoid tipping the machine onto the floor, which nearly stripped a muscle off the bone. So, kudos, organisers.
That’s it. We’re done. See you later, everyone!
ITiCSE 2014, Day 3, Session 7B, Peer Instruction, #ITiCSE2014 #ITiCSE
Posted: June 25, 2014 Filed under: Education | Tags: community, computer science education, curriculum, design, education, educational problem, educational research, flipped classroom, higher education, in the student's head, inverted classroom, ITiCSE, ITiCSE 2014, learning, peer instruction, teaching, teaching approaches, thinking

The first talk was “Peer Instruction: a Link to the Exam” presented by Daniel Zingaro from the University of Toronto. Peer Instruction (PI) is an active learning pedagogy developed for physics and now heavily used in computing. Students complete a reading quiz prior to class and teachers use multiple-choice quizzes to assess knowledge. (You can look this one up in a number of places but I’ve discussed it here before a bit.) There’s a lot of research that shows gains between the individual and group vote, with enduring improvements in student learning. (We can use isomorphic questions to reduce the likelihood of copying.) Both students and instructors value the learning.
PI appears to demonstrate improved learning outcomes on the final exam grades, as well as perceived depth of learning. (Couple of studies here from Beth Simon et al, and Daniel himself, checking Beth’s results.) But what leads to this improved outcome? The peer discussion. The class wide discussion? Both? If one part isn’t useful then we can adapt it to make it more useful to Computer Scientists. Daniel is going to use isomorphic questions to investigate relationships between PI components and final exam grades.
The isomorphic questions test the same concept with different questions, where if they get the first one right, we hope that they get the second one right – and if people learn how to do one, then that knowledge flows on to the other. (The example given was of loop complexity in nested loops depending on different variables.)
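To make ‘isomorphic’ concrete, here is a hypothetical pair of the kind I understand this to mean; the code and names are my reconstruction, not the questions from Daniel's paper:

```
int n = 4, m = 3;
// Q1: in terms of n and m, how many times does "hi" print?
for (int i = 0; i < n; i++) {
  for (int j = 0; j < m; j++) {
    println("hi");   // runs n * m times
  }
}

int p = 4, q = 3;
// Q2 (isomorphic): the same concept with different surface details.
// How many times does "bye" print?
for (int a = 0; a < p; a++) {
  for (int b = 0; b < q; b++) {
    println("bye");  // runs p * q times
  }
}
```

A student who genuinely understands why Q1 prints n * m times should transfer that reasoning directly to Q2.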
Daniel has two question modes in this experiment, which are slightly different. Both modes include the PI components, but the location of the isomorphic questions varies between the two approaches in the second question. In the Peer (P) mode, the isomorphic question comes directly after the group vote; in the second mode (Combined – C), the Q2 isomorphic question occurs directly after the instructor has had a chance to influence the class.
Are the questions really isomorphic and of the same difficulty? An external ranker was used to verify this and then the question pairs and mode were randomised. The difficulty of the questions was found to be statistically equivalent, based on the percentage of Q1 answers that were found to be correct.
Daniel had two hypotheses. Firstly, that peer scores will correlate to final exam scores. Secondly, that combined scores will also correlate with final exam scores, but the correlation should be stronger than for Peer, with the Combined questions representing learning from the full PI cycle. In terms of the final exam, there were three measures of the final exam grades: total exam score, score on the tracing question (similar to PI questions) and score on a code-writing question (very different to PI questions).
The implementation was a CS1 course with 3 lectures/week, with reading quizzes worth 4% submitted prior to each lecture and clicker responses worth 5%, where the lectures on average contained three PI cycles; one cycle per lecture contained the follow-up isomorphic question. Multiple regression was used to test relationships between PI and final exam scores.
All of the results were statistically significant. For code-tracing, what students knew beforehand explains 13% of their scores in the final exam. With the peer questions added, it goes up to 16%. With combined as well, it goes up to 19%. Is this practically significant? Daniel raised this question because it doesn’t rise very much.
In terms of code writing, Baseline is 16%, + Peers is 22% and +Combined is 25%, so we’re starting to see more contribution from peers than instructor in this case. Are we measuring the different difficulty of a problem that peers couldn’t correct, which is why the instructor does less?
Overall? Baseline 21%, Peer 30% and then Combined is 34%. (Any questions about the stats, please read the paper. 🙂 )
Maybe adding combined questions to peer questions increases our predictive accuracy just because we’re adding more data and are thus able to produce a better model?
In discussion, PI performance related to final exam scores (as expected). Peer learning alone is important and the instructor-led discussion is important, over and above peer learning. This validates the role of the instructor in a “student-centred” classroom. Given that PI uses MCQs, we might expect it to only correlate with code-tracing but it does appear to correlate with code-writing problems as well – there may be deep conceptual similarities between PI questions and programming skills. But would the students who learned from PI also be the students who would have learned from any other form of instruction? Still an open question and there’s a lot of ongoing work to do.
The next paper was “Comparing Outcomes in Inverted and Traditional CS1” presented by Diane Horton from U Toronto. I’ve been discussing early intervention and student attendance issues in inverted/hybrid courses with Jennifer Campbell and Michelle Craig, also from U Toronto and also on this paper, as part of an attempt to get some good answers so I’d just come straight out of a lunch, discussing inverted classrooms and their outcomes. (Again, this is why we come to conferences – much of the value is in the meetings and discussion that are just so hard to have when you’re doing your day job or fitting a Skype meeting into the wee small hours to bridge the continental time gap.)
As a reminder, in inverted teaching, some or all of the material is delivered outside the classroom. Work typically done as homework is done in the lecture with the help of instructor or TAs. There were three research questions. Would the inverted offerings have better outcomes? Would inverted teaching affect students’ behaviour or experience? Would particular subgroups respond differently, especially English-language learners and beginner programmers?
The CS1 course at Toronto is a 12-week course in Python, objects-early, classes-late, with most students in 1st year and less than half looking to major in CS. Lectures run at roughly 200 students per section, with 5 lecture sections.
Before the lecture, students prepared by watching videos from the instructors, mostly screencasts of live programming with voiceover; credit for attempting the quizzes embedded in the videos is 0.5% per week. (It’s scary how small that fraction has to be and really rather sad, from a behavioural perspective.)
During the lecture, the instructors used a worked example and students worked on worksheet-based exercises for most of the lectures, with assistance, solo or in pairs. This was a responsive teaching approach because the instructor could draw the class together as required. There was no mark reward or penalty that depended on attendance. If you were solid on the material, it was okay to miss the lecture. The mark scheme reflected some marks for lecture preparation with an increased number of online exercises and decreased weighting on labs.
The inverted CS1 course had gone well in the pilot in January 2013, which was published in a paper at SIGCSE ’14, but it was hard to compare this with the previous class as the make-up of the cohort varies between September and January courses. The study was run again with a more similar cohort in September 2013. The data presented here is for that similar cohort, with a high overlap of instructors, compared with the traditional offering.
For the present study, there were pre- and post-course surveys about attitude and behaviour, completed on paper in the lecture. Weekly lecture attendance counts were made and standard university course evaluations collected. In terms of attendance, the inverted pilot was the lowest, but the inverted class had lower attendance most of the time – an effect that we have also seen under some circumstances and are still thinking about. Interestingly, students thought that the inverted lectures weren’t as useful as face-to-face lectures, but the online support materials were seen to be very helpful. As a package, this seems to be an overall positive experience.
The hypothesis was that students in the inverted offering would self-report a higher quality of learning experience and greater enjoyment but this wasn’t supported in the data, nor was it for beginners in particular. However, when asked if they wanted more inverted courses, there was a very strong positive response to this question.
The authors expected that beginners would benefit more, because they need more help, and that the gap between beginners and experienced students would reduce – this wasn’t supported by the data. Also, there was no reduction of gaps for English language learners, either.
Would the inverted course help people stay on and pass the course? Well, however success was defined, the pass rate was remarkably consistent, even when the beginners were isolated. However, it does appear that the overall level of knowledge, as measured by the final exam grades, actually improved in the inverted offerings, across two exams of similar difficulty, with a jump from an average grade of 66% to 74% between the terms. Is this just due to the inverted teaching?
Maybe students learned more in the inverted offering because they spent more time on task? Based on self-reported student time, this doesn’t appear to be true. Maybe the beginners got killed off early to reduce their numbers and raise the mark? No, the drop rates among beginners were the same. It appears that the 8 percentage point increase may be related to the inverted mode, although, obviously, more work is required.
Is it worth it? They used no additional TA resources but the development time was enormous. You may not be ready for this investment. There are other options: you don’t need to use videos, and you can use pre-existing materials to reduce costs.
Future work involves looking at dropping patterns – who drops when – and students who stumble and recover. They’re also looking at a fully online CS1 course for course credit.
The final talk was on “Making Group Processes Explicit to Students: A Case for Justice” presented by Ville Isomöttönen. Project courses have a very long history in Computer Science, as capstones, using authentic customer projects, and the intention is to provide a realistic experience. (Editor’s note: it’s worth noting that some of this may be coming from the school of “we got punished like this, so you can be too”.) What do students actually learn from this? Are they learning what we want them to learn or are they learning something very different and, potentially, much darker?
(This sounds like the kind of philosophical paper I’d give, let’s see where it goes! 🙂 )
If we have tertiary students, why can’t we just place them into a workplace for work experience? They’re adults – maybe we can separate this aspect from the pedagogy. The author’s study wants to look at how to promote conceptual learning in the context of realistic course work. Parker (1999) proposes that students are spending their effort on building working products, rather than actually learning about and reflecting upon the professional issues we consider important. The conjecture is that just because the situation is realistic doesn’t mean that the conceptual learning is happening as we intended.
The study is based around a fairly straightforward project-based learning structure, but had a Pass/Fail grade, with no distinction grading as far as I could tell. The teaching was based on weekly group discussions, with self/peer evaluations, also housed in a group situation, and technical supervision offered by a teaching assistant. Throughout the course, students are prompted to think about their operation at a conceptual level. Hmm. I’m not sure what the speaker means by this as, without a very detailed description of what is going on, this could have many different implementations.
We then cut to a diagram of justice conceptualised – I may have missed something as I’m not quite sure how this sits with the group work. I can’t find the diagram online but it involves participation, involving and negotiating with others – fused together as the skill of justice. This sits above statuses, norms and roles. Some of the related work deals with fairness (Richards 2009) as a key attribute of successful group work, Clear (2002) uses it in a diagnostic technique, and Pieterse and Thompson (2010) mention ‘social loafers’ and ‘diligent isolates’.
I’m dreadfully sorry, dear reader, but I’m not following this properly so this may be a bit sketchy. Go and read the paper and I’ll try to get this together. Everyone else in the room appears to be getting this so I may just be tired or struggling with my (not very good) hearing and someone who is speaking rather quietly.
The underlying pedagogy comes from the social realist mindset (Moore, 2000; Maton and Moore, 2010) and “avoids the dilemma between constructivist relativism and positivist absolutism”. We should also look at Integrative Pedagogy (Tynjälä), where the speaker feels that what they are describing is a realist version of this.
The course was surveyed with a preliminary small study (N=21/26, which is curious. Which one is it? Ah, 21 out of 26 enrolled, there we go.). The survey questions were… rather loose and very open to influence, unfortunately, from my quick glance at them but I will have to read the original paper.
Justice is a difficult topic to address, especially where it’s reified as a professional skill that can be developed, and discussing the notion of justice in terms of the ways that a group can work together fairly is very important. I suppose I’m not 100% convinced how much is added in this context through the use of a new term that is an apparent parent to communication and negotiation, with the desired outcome of fairness, because the amalgamation seems to obscure the individual components that would be improved upon to return to a fair state. The very small study, and a small survey, is a valid approach for a case study or phenomenographic approach, but I get the feeling that I was seeing a grounded theory argument. We do have to expose our desired processes to students if we’re going to achieve cognitive apprenticeship and there is a great deal of tension between industrial practice and key concepts, so this is a very interesting area to work in. I completely agree with the speaker that our heavy technical focus often precludes discussions of the empathic, philosophical and intangible, but I’m yet to see how this approach contributes.
The discussions mentioned are indeed very important, but group reports and discussion are a built-in part of many SE process models, so I wonder how the justice theme amplifies these aspects. Again, getting students to engage in a dialogue that they do not expect to have in CS can be very challenging but we could be discussing issues such as critical thinking and ethics, which are often equally alien and orthogonal to the technical, without forming a compound concept that potentially obscures the underlying component mechanisms.
Simon asked a very good question: you didn’t present anything that showed a problem where the students would have needed the concept of justice. Apparently, this is in the writings that are yet to be analysed. The answer to the question ended up as an unlabelled graph on the blackboard which was focused on a skill difference with more experienced peers. I still can’t see how justice ties into this. I have to go and get my hearing checked.
ITiCSE 2014, Day 3, Session 6A, “Digital Fluency”, #ITiCSE2014 #ITiCSE
Posted: June 25, 2014 Filed under: Education | Tags: ALICE, arm the princess, Bologna model, competency, competency-based assessment, computational thinking, computer science education, Duke, education, educational problem, educational research, empowering minorities, empowering women, higher education, ITiCSE, ITiCSE 2014, key competencies, learning, middle school, non-normative approaches, pattern analysis, principles of design, reflection, teaching, thinking, tools, women in computing

The first paper was “A Methodological Approach to Key Competences in Informatics”, presented by Christina Dörge. The motivation for this study is moving educational standards from input-oriented approaches to output-oriented approaches – how students will use what you teach them in later life. Key competencies are important but what are they? What are the definitions, terms and real meaning of the words “key competencies”? A certificate of a certain grade or qualification doesn’t actually reflect true competency in many regards. (Bologna focuses on competencies but what do we really mean?) Competencies also vary across different disciplines as skills are used differently in different areas – can we develop a non-normative approach to this?
The author discussed Qualitative Content Analysis (QCA) to look at different educational methods in the German educational system: hardware-oriented approaches, algorithm-oriented, application-oriented, user-oriented, information-oriented and, finally, system-oriented. The paradigm of teaching has shifted a lot over time (including the idea-oriented approach which is subsumed in system-oriented approaches). Looking across the development of the paradigms and trying to work out which categories developed requires a coding system over a review of textbooks in the field. If new competencies were added, then they were included in the category system and the coding started again. The resulting material could be referred to as “Possible candidates of Competencies in Informatics”, but those that are found in all of the previous approaches should be included as Competencies in Informatics. What about the key ones? Which of these are found in every part of informatics: theoretical, technical, practical and applied (under the German partitioning)? A key competency should be fundamental and ubiquitous.
The most important key competencies, by ranking, were algorithmic thinking, followed by design thinking, then analytic thinking (I must look up the subtle difference here). (The paper contains all of the details.) How can we gain competencies, especially these key ones, outside of a normative model that we have to apply to all contexts? We would like to be able to build on competencies, regardless of entry point, but taking into account prior learning so that we can build to a professional end point, regardless of starting point. What do we want to teach in the universities and to what degree?
The author finished on this point and it’s a good question: if we view our progression in terms of competency then how can we use these as building blocks to higher-level competencies? This will help us in designing pre-requisites and entry and exit points for all of our educational design.
The next talk was “Weaving Computing into all Middle School Disciplines”, presented by Susan Rodger from Duke. There were a lot of co-authors who were undergraduates (always good to see). The motivation for this project was that there are problems with CS in the K-12 grades. It’s not taught in many schools and definitely missing in many high schools – not all Unis teach CS (?!?). Students don’t actually know what it is (the classic CS identity problem). There are also under-represented groups (women and minorities). Why should we teach it? 21st century skills, rewarding careers and many useful skills – from NCWIT.org.
Schools are already content-heavy so how do we convince people to add new courses? We can’t really, so how about trying to weave it into the existing project framework? Instead of doing a poster or a PowerPoint presentation, why not provide an animation that’s interactive in some way and that will involve computing? One way to achieve this is to use Alice, creating interactive stories or games, learning programming and computation concepts in a drag-and-drop code approach. Why Alice? There are many other good tools (Greenfoot, Lego, Scratch, etc) – well, it’s drag-and-drop, story-based and works well for women. The introductory Alice course in 2005 started to attract more women and now the class is more than 50% women. However, many people couldn’t come in because they didn’t have the prerequisites, so the initiative moved out to 4th-6th grade to develop these skills earlier. Alice Virtual Worlds excited kids about computing, even at the younger ages.
The course “Adventures in Alice Programming” is aimed at grades 5-12 as Outreach, without having to use computing teachers (which would be a major restriction). There are 2-week teacher workshops where, initially, the teachers are taught Alice for a week, then the following week they develop lesson plans. There’s a one-week follow-up workshop the following summer. This initiative is funded until Summer 2015 and has been run since 2008. There are sites in Durham, Charleston and Southern California. The teachers coming in are from a variety of disciplines.
How is this used in middle and high schools by teachers? Demonstrations, examples, interactive quizzes and making worlds for students to view. The students may be able to undertake projects, take and build quizzes, and view and answer questions about a world; the older the student, the more they can do.
Recruitment of teachers has been interesting. Starting from mailing lists and asking the teachers who come, the advertising has spread out across other conferences. It really helps to give them education credits and hours – but if we’re going to pay people to do this, how much do we need to pay? In the first workshop, paying $500 got a lot of teachers (some of whom were interested in Alice). The next workshop, they got gas money ($50/week) and this reduced the number down to the more interested teachers.
There are a lot of curriculum materials available for free (over 90 tutorials) with getting-started material as a one-hour tutorial showing basic set-up, placing objects, camera views and so on. There are also longer tutorials over several different stories. (Editor’s note: could we get away from the Princess/Dragon motif? The Princess says “Help!” and waits there to be rescued and then says “My Sweet Prince. I am saved.” Can we please arm the Princess or save the Knight?) There are also tutorial topics on inheritance, lists and parameter usage. The presenter demonstrated a lot of different things you can do with Alice, including book reports and tying Alice animations into the real world – such as boat trips that never actually occurred.
It was weird looking at the examples, and I’m not sure if it was just because of the gender of the authors, but the kitchen example in cooking with Spanish language instruction used female characters, the Princess/Dragon had a woman in a very passive role and the adventure game example had a male character standing in the boat. It was a small sample of the materials so I’m assuming that this was just a coincidence for the time being or it reflects the gender of the creator. Hmm. Another example and this time the Punnett Squares example has a grey-haired male scientist standing there. Oh dear.
Moving on, lots of helper objects are available for teachers to use, saving development time, which is really handy if you want to get things going quickly.
Finally, on discussing the impact, some 200 teachers have attended the workshops since 2008, and they have gone on to teach 2,900 students (over 2012-2013). From Google Analytics, over 20,000 users have accessed the materials. Also, a number of small outreach activities, Alice for an hour, have been run across a range of schools.
The final talk in this session was “Early Validation of Computational Thinking Pattern Analysis”, presented by Hilarie Nickerson, from the University of Colorado at Boulder. Computational thinking is important and, in the US, there have been both scope and pedagogy discussions, as well as instructional standards. We don’t have as much teacher education as we’d like. Assuming that we want the students to understand it, how can we help the teachers? Scalable Game Design integrates game and simulation design into public school curricula. The intention is to broaden participation for all kinds of schools, as after-school classes had identified a lot of differences in the groups.
What’s the expectation of computational thinking? Administrators and industry want us to be able to take game knowledge and potentially use it for scientific simulation. A good game of a piece of ocean is also a predator-prey model, after all. Does it work? Well, it’s spread across a wide range of areas and communities, with more than 10,000 students (and a lot of different Frogger games). Do they like it? There’s a perception that programming is cognitively hard and boring (on the cognitive/affective graph ranging from easy-hard/exciting-boring). We want it to be easy and exciting. We can make it easier with syntactic support and semantic support, but making it exciting requires the students to feel ownership and to be able to express their creativity. And now they’re looking at the zone of proximal flow, which I’ve written about here. It’s good to see this working in a project-first, principles-first model for these authors. (Here’s that picture again.)
The results? The study spanned 10,000 students, 45% girls and 55% boys (pretty good numbers!), 48% underrepresented, with some middle schools exposing 350 students per year. The motivation starts by making things achievable but challenging – starting from 2D basics and moving up to more sophisticated 3D games. For those who wish to continue: 74% boys, 64% girls and 69% of minority students want to continue. There are other aspects that can raise motivation.
What about the issue of Computing Computational Thinking? The authors have created a Computational Thinking Pattern Analysis (CTPA) instrument that can track student learning trajectories and outcomes. Guided discovery, as a pedagogy, is very effective in raising motivation for both genders, where direct instruction is far less effective for girls (and is also less effective for boys).
How do we validate this? There are several computational thinking patterns grouped using latent semantic analysis. One of the simpler patterns for a game is the pair generation and absorption where we add things to the game world (trucks in Frogger or fish in predator/prey) and then remove them (truck gets off the screen/fish gets eaten). We also need collision detection. Measuring skill development across these skills will allow you to measure it in comparison to the tutorial and to other students. What does CTPA actually measure? The presence of code patterns that corresponded to computational thinking constructs suggest student skill with computational thinking (but doesn’t prove it) and is different from measuring learning. The graphs produced from this can be represented as a single number, which is used for validation. (See paper for the calculation!)
This has been running for two years now, with 39 student grades for 136 games, and the two human graders shown to have good inter-rater consistency. Frogger was not very heavily correlated (Spearman rank) but Sokoban, Centipede and The Sims weren’t bad, and removing design aspects of the rubrics may improve this.
Was there predictive validity in the project? Did the CTPA correlate with the skill score of the final game produced? Yes, it appears to be significant, although this is early work. CTPA does appear to be capable of measuring CT patterns in code that correlate with human skill development. Future work on this includes the refinement of CTPA by dealing with the issue of non-orthogonal constructs (collisions that include generative and absorptive aspects), using more information about the rules and examining alternative calculations. The group are also working on tools for teachers, including REACT (real-time visualisations for progress assessment) and recommending possible skill trajectories based on their skill progression.
ITiCSE 2014, Day 3, Keynote, “Meeting the Future Challenges of Education and Digitization”, #ITiCSE2014 #ITiCSE @jangulliksen
Posted: June 25, 2014 Filed under: Education | Tags: advocacy, community, computer science education, digital learning, digitisation, education, educational problem, educational research, higher education, ITiCSE, ITiCSE 2014, Jan Gulliksen, learning, measurement, Professor Gulliksen, teaching, teaching approaches, thinking

This keynote was presented by the distinguished Professor Jan Gulliksen (@jangulliksen) of KTH. He started with two strange things. He asked for a volunteer and, of course, Simon put his hand up. Jan then asked Simon to act as a support department to seek help with putting on a jacket. Simon was facing the other way so had to try to explain to Jan the detailed process of orientating and identifying the various aspects of the jacket in order. (Simon is an exceedingly thoughtful and methodical person so he had a far greater degree of success than many of us would.) We were going to return to this. The second ‘strange thing’ was a video of President Obama speaking on Computer Science. Professor Gulliksen asked us how often a world leader would speak to a discipline community about the importance of their discipline. He noted that, in his own country, there was very little discussion in the political parties on Computer Science and IT. He noted that Chancellor Merkel had expressed a very surprising position in response to the video, describing the Internet as ‘uncharted territory’.
Professor Gulliksen then introduced himself as the Dean of the School of Computer Science and Communication at KTH, Stockholm, with 25 years of previous experience at Uppsala. Within this area, he has more than 20 years of experience working with the introduction of user-centred systems in public organisations. He showed two pictures, over 20 years apart, which showed how little the modern workspace has changed in that time, except that the number of post-it colours has increased! He has a great deal of interest in how we can improve design for all users. Currently, he is looking at IT for mental and psychological disabilities, funded by Vinnova and PTS, which is not a widely explored area and can be of great help to homeless people. His team have been running workshops with these people to determine the possible impact of increased IT access – which included giving them money to come to the workshop. But they didn’t come. So they sent railway tickets. But they still didn’t come. But when they used a mentor to talk them through getting up, getting dressed, going to the station – then they came. (Interesting reflection point for all teachers here.) It is difficult to work within the Swedish social security system because the homeless can be quite paranoid about revealing their data and it can be hard to work with people who have no address, just a mobile number. This is, however, a place where our efforts can have great societal impact.
Professor Gulliksen asks his PhD students: what is really your objective with this research? And he then gives them three options: change the world, contribute new knowledge, or get your PhD. The first time he asked this in Sweden, the student started sweating and asked if they could have a fourth option. (Yes, but your fourth is probably one of the three.) The student then said that they wanted to change the world, but on thinking about it ("what have you done?") changed to contributing new knowledge, then thought about it some more ("ok, but what have you done?") and, after further questioning, it devolved to "I think I want my PhD". All of these answers can be fine but you have to actually achieve your purpose.
Our biggest impact is on the people that we produce, in terms of our contribution to the generation and dissemination of knowledge. Jan wants to know how we can be more aware of this role in society. How can we improve society through IT? This led to the Committee for Digitisation, 2012-2015: Sweden shall be the best country in the world when it comes to using the opportunities of digitisation. Sweden produced "ICT for Everyone", a Digital Agenda for Sweden, which preceded the European initiative. There are 170 different things to be achieved in IT politics, and fewer than a handful of these have not been met since October 2011. As a researcher, Professor Gulliksen had to come to an agreement with the minister to ensure that his academic freedom, to speak truth to power, would not be overly infringed – even though he was Norwegian. (Bit of Nordic humour here, which some of you may not get.)
The goal was that Sweden would be the best country in the world when it came to seizing these opportunities. That's a modest goal (the speaker is a very funny man) but how do we actually quantify this? The main tasks for the commission were to develop the action plan, analyse progress in relation to the goals, show the opportunities available, administer the organisations that signed the digital agenda (Nokia, Apple and so on) and collaborate with the players to increase digitisation. The committee itself is 7 people, with an 'expert' appointed because you have to do this, apparently. To extend the expertise, the government has appointed the small commission, a group of children aged 8-18, to support the main commission with input and proposals showing opportunities for all ages.
The committee started with three different areas: digital inclusion and equal opportunities; school, education and digital competence; and entrepreneurship and company development. The digital agenda itself has four strategic areas in terms of user participation:
- Easy and safe to use
- Services that create some utility
- Need for infrastructure
- IT’s role for societal development.
And there are 22 areas of mission under this that map onto the relevant ministries (you’ll have to look that up for yourself, I can’t type that quickly.) Over the year and a half that the committee has been running, they have achieved a lot.
The government needs measurements and rankings to show relative progress, so things like the World Economic Forum's Networked Readiness Index (which Sweden topped) are often trotted out. But in 2013, Sweden had dropped to third, with Finland and Singapore going ahead – basically, the Straits Tiger is, unsurprisingly, advancing quickly. Other measures include the ICT Development Index (IDI), where Sweden is also doing well. You can look for this on the Digital Commission's website (which is in Swedish but translates). The first report has tried to map out the digital Sweden – actions and measures carried out, key players and important indicators. Sweden is working a lot in the space but appears to be more passive in re-use than active in creativity, though I need to read the report on this (which is also in Swedish). (I need to learn another language, obviously.) There was an interesting quadrant graph of organisations ranked by how active they were and how powerful their mandate was, which started a lot of interesting discussion – see the sketch below. (This applies to academics in Unis as well, I realise.) (Jag behöver lära sig ett annat språk, uppenbarligen.)
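Out of interest, a quadrant graph like the one shown takes only a few lines of matplotlib. The sketch below uses entirely invented organisations and values, purely to show the activity-versus-mandate layout being described:

```python
# Sketch of the activity-vs-mandate quadrant graph described above.
# All organisations and values are invented for illustration.
import matplotlib.pyplot as plt

orgs = {
    "Agency A": (0.8, 0.9),     # very active, strong mandate
    "Ministry B": (0.3, 0.85),  # strong mandate, not very active
    "NGO C": (0.9, 0.2),        # very active, weak mandate
    "Committee D": (0.2, 0.3),  # neither
}

fig, ax = plt.subplots()
for name, (activity, mandate) in orgs.items():
    ax.scatter(activity, mandate)
    ax.annotate(name, (activity, mandate))
ax.axhline(0.5, linestyle="--")  # quadrant dividers
ax.axvline(0.5, linestyle="--")
ax.set_xlabel("How active the organisation is")
ax.set_ylabel("How powerful its mandate is")
plt.show()
```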
The second report was released in March this year, focusing on the school system. How can Sweden produce recommendations on how the school system will improve? If the school system isn't working well, you are going to fall behind in the rankings. (Please pay attention, Australian Government!) There's a range of access to schools across Sweden, but access is only one thing; actual use of the resources is another. Why should we do this? (Arguments to convince politicians.) It reduces the digital divide, the economy needs IT-skilled labour, digital skills are needed to be an active citizen, learning becomes faster and more efficient, and many other points! Sweden's students are deteriorating in the PISA survey rankings, particularly the boys: 30% of Swedish boys are not reaching basic literacy in the first 9 years of school, which is well below the OECD average. Interestingly, Swedish teachers are among the lowest in the EU when it comes to work time spent on skills development: 18% of teachers spend more than 6 days, but 9% spend none at all, which is the second worst among European countries (Malta takes out the wooden spoon).
The concrete proposals in the SOU were:
- Revised regulatory documents with a digital perspective
- Digitally based national tests in primary/secondary
- web-based learning in elementary and secondary schools
- digital skilling of teachers
- digital skilling for principals
- clarifying the digital component of teacher education programs
- research, method development and impact measurement
- innovation projects for the future of learning
Universities are also falling behind so this is an area of concern.
Professor Gulliksen also spoke about the digital champions of the EU (all European countries had one except Germany, until recently, possibly reflecting the Chancellor's perspective), where being a digital champion is not an award, it's a job: a high-profile, dynamic and energetic individual responsible for getting everyone on-line and improving digital skills. You need to generate new ideas to go forward, for your country, rather than just copying things that might not fit. (Hello, Hofstede!)
The European Digital Champions work for digital inclusion and help everyone, although we all have responsibility. This provides strategic direction for government but reinforces that the ICT competence required for tomorrow's work life has to be put in place today. He asked the audience who their European digital champions were and, apart from Sweden, no-one knew. The Danish champion (Lars Frelle-Petersen) has worked with the tax office to force everyone on-line because it's the only way to do your tax! "The only way to conduct public services should be on the Internet." The digital champion of Finland (Linda Liukas, from Rails Girls) wants everyone to have three mandatory languages: English, Chinese and JavaScript. (Groans and chuckles from the audience for the language choice.) The digital champion of Bulgaria (Gergana Passy) wants Sofia to be the first free-WiFi capital of Europe. Romania's champion (Paul André Baran) is leading the library effort and wants libraries to rethink their role in the age of ICT. Ireland's champion (Sir David Puttnam) believes that we have to move beyond a triage mentality in education to increase inclusion.
In Sweden, 89% of the population is on-line and it has plateaued at that. Why? Of those that are not on the Internet, most are more than 76 years old. This is a self-correcting problem, most likely. (50% of two-year-olds are on the Internet in Sweden!) Of the 1.1 million Swedes not online, 77% are not interested and 18% think it's too complicated.
Jan wanted to leave us with two messages. The first is that we need to increase the number of ICT practitioners. Demand is growing at 3% a year and the supply of trained ICT graduates is not keeping pace. If the EU wants to stay competitive, it either has to grow them (education) or import them. (Side note: The Grand Coalition for Digital Jobs)
The second thought is the development of digital competence and improvement of digital skills among ICT users. 19% of the workforce is ICT-intensive, 90% of jobs require some IT skills, but 53% of the workforce are not confident enough in their IT skills to seek another job in that sphere. We have to build knowledge and self-confidence. Higher Ed institutions have to look beyond the basic degree, sharing the resources and guidelines that grow digital competence across the whole community, and pushing away from the focus on exams and graduation to concentrate on learning – which is anathema to the usual academic machine. We need to work on new educational and business models to produce mature, competent and self-confident people with knowledge, and make industry realise that this is actually what it wants.
Professor Gulliksen believes that we need to recruit more ICT experience by bringing experts into the Universities, broadening academia and pedagogy with industry experience. We also really, really need to balance the gender differences, which show the same weird cultural trends in terms of self-deception rather than task description.
Overall, a lot of very interesting ideas – thank you, Professor Gulliksen!
Arnold Pears, Uppsala, challenged one of the points on engaging with, and training for, industry in that we prepare our students for society first, and industrial needs are secondary. Jan agreed with this distinction. (This followed on from a discussion that Arnold and I were having regarding the uncomfortable shoulder rubbing of education and vocational training in modern education. The reason I come to conferences is to have fascinating discussions with smart people in the breaks between interesting talks.)
The jacket came back up again at the end. When discussing Computer Science, Jan feels the need to use metaphors – as do we all. Basically, it's easy to fall into the trap of thinking you can explain something as being simple when you're drawing on a very rich learned context for framing the knowledge. CS people can struggle with explaining things, especially to very new students, because we build a lot of things up to reach the "operational" level of CS knowledge, and everything, from the error messages presented when a program doesn't work to the efficiency of long-running programs, depends upon understanding this rich context. Whether the threshold here is a threshold concept (Meyer and Land), a neo-Piagetian stage, Learning Edge Momentum or a Bloom-related problem doesn't actually matter – there's a minimum amount of well-accepted context required for certain metaphors to work, or you're explaining to someone how to put a jacket on with your eyes closed. 🙂
One of the final questions raised the issue of computing as a chore, rather than a joy. Professor Gulliksen noted that there are only two groups of people who are labelled as users, drug users and computer users, and that the systematic application of computing as a scholastic subject often requires students to lock up a more powerful computer (their mobile phone) in order to use locked-down, less powerful, serried banks of computers (based on group purchasing and standard environments). (Here's an interesting blog on a paper on why we should let students use their phones in classes.)
ITiCSE 2014, Day 2, Session4A, Software Engineering, #ITiCSE2014 #ITiCSE
Posted: June 24, 2014 Filed under: Education | Tags: collaboration, computer science education, education, educational problem, educational research, higher education, industry, ITiCSE, ITiCSE 2014, learning, mentoring, pair programming, pedagogy, principles of design, project based learning, small group learning, student perspective, studio based learning, teaching, teaching approaches, thinking, time, time factors, time management, universal principles of design, work-life balance Leave a comment
- People: a learning community of teachers and learners
- Process: creative, reflective (interactions, physical space, collaboration)
- Product: a designed object – a single focus for the process
- Intra-Group Relations: Group 1 had lots of strong characters and appeared to be competent and performing well, with students in the group learning about Scrum from each other. Group 2 was more introverted, with no dominant or strong characters, but learned as a group together. Both groups ended up being successful despite the different paths. Collaborative learning inside each group occurred well, although differently.
- Inter-Group Relations: There was good collaborative learning across and between groups after the middle of the semester, where initially the groups were isolated (and one group was strongly focused on winning a prize for best project). Groups learned good practices from observing each other.
ITiCSE 2014: Working Groups Reports #ITiCSE2014 #ITiCSE
Posted: June 23, 2014 Filed under: Education | Tags: access, accessibility, computational thinking, computer science education, CT, education, higher education, ITiCSE, ITiCSE 2014, learning, learning technologies, methodology, peer review, teaching, thinking, Workgroups Leave a commentUnfortunately, there are too many working groups, reporting at too high a speed, for me to capture it here. All of the working groups are going to release reports and I suggest that you have a look into some of the areas covered. The topics reported on today were:
- Methodology and Technology for In-Flow Peer Review
In-flow peer review is the review of an exercise as it is going on. Providing elements to review can be difficult as it may encourage plagiarism but there are many benefits to this, which generally justifies the decision to do review. Picking who can review what for maximum benefit is also very difficult.
We’ve tried to do a lot of work here but it’s really challenging because there are so many possibly right ways.
- Computational Thinking in K-9 Education
Given that there are national, and localised, definitions of what “Computational Thinking” is, this is challenging to identify. Many K-12 teachers are actually using CT techniques but wouldn’t know to answer “yes” if asked if they were. Many issues in play here but the working group are a multi-national and thoughtful group who have lots of ideas.
As a note, K-9 refers to Kindergarten to Year 9, not dogs. Just to be clear.
- Increasing Accessibility and Adoption of Smart Technologies for Computer Science Education
How can you integrate all of the whizz-bang stuff into the existing courses and things that we already use every day? The working group have proposed an architecture to help with the adoption. It's a really impressive, if scary, slide but I'll be interested to see where this goes. (Unsurprisingly, it's a three-tier model that will look familiar to anyone with a networking or distributed systems background.) Basically, let's not re-invent the wheel when it comes to using smarter technologies, but let's also find out the best ways to build these systems and then share that, as well as good content and content delivery. Identity management is, of course, a very difficult problem for any system, so this is a core concern.
There’s a survey you can take to share your knowledge with this workgroup. (The feared and dreaded Simon noted that it would be nice if their survey was smarter.) A question from the floor was that, while the architecture was nice and standards were good, what impact would this have on the chalkface? (This is a neologism I’ve recently learned about, the equivalent of the coalface for the educational teaching edge.) This is a good question. You only have to look at how many standards there are to realise that standard construction and standard adoption are two very different beasts. Cultural change is something that has to be managed on top of technical superiority. The working group seems to be on top of this so it will be interesting to see where it goes.
- Strengthening Methodology Education in Computing
Unsurprisingly, computing is a very broad field and is methodologically diverse. There's a lot of 'borrowing' from other fields, which is a nice way of saying 'theft'. (Sorry, philosophers, but ontologies are way happier with us.) Our curricula have very few concrete references to methodology, with a couple of minor exceptions. The working group had a number of objectives, which they reduced down to fewer, removing the term 'methodology'. Literature reviews on methodology education are sparse, but there is more on teaching research methods. Embarrassingly, the paper that shows up for this is a 2006 report from a working group from this very conference. Oops. As Matti asked, are we really so uninterested in this topic that we forget we were previously interested in it? The group voted to change direction to get some useful work out of the group. They voted not to produce a report, as it was too challenging to repurpose things at this late stage; all their work would go toward annotating the existing paper rather than creating a new one.
One of the questions was why the previous paper had so few citations, cited 5 times out of 3000 downloads, despite the topic being obviously important. One aspect mentioned is that CS researchers are a separate community and I reiterated some early observations that we have made on the pathway that knowledge takes to get from the CS Ed community into the CS ‘research’ community. (This summarises as “Do CS Ed research, get it into pop psychology, get it into the industrial focus and then it will sneak into CS as a curricular requirement, at which stage it will be taken seriously.” Only slightly tongue-in-cheek.)
- A Sustainable Gamification Strategy for Education
Sadly, this group didn’t show up, so this was disbanded. I imagine that they must have had a very good reason.
Interesting set of groups – watch for the reports and, if you use one, CITE IT! 🙂
ITiCSE 2014, Monday, Session 1A, Technology and Learning, #ITiCSE2014 #ITiCSE @patitsel @guzdial
Posted: June 23, 2014 Filed under: Education | Tags: badges, computer science education, data visualisation, digital education, digital technologies, education, game development, gamification, higher education, ITiCSE, ITiCSE 2014, learning, moocs, PBL, projected based learning, SPOCs, teaching, technology, thinking, visualisation Leave a comment(The speakers are going really, really quickly so apologies for any errors or omissions that slip through.)
The chair had thanked the Spanish at the opening for the idea of long coffee breaks and long lunches – a sentiment I heartily share as it encourages discussions, which are the life blood of good conferences. The session opened with "SPOC – supported introduction to Programming", presented by Marco Piccioni. SPOCs are Small Private On-line Courses and are part of the rich tapestry of hand-crafted terminology that we are developing around digital delivery. The speaker is from ETH Zurich and says that they took a cautious, step-by-step approach: taking an existing and successful course and moving it into the on-line environment. The classic picture from the University of Bologna of the readers/scribes was shown. (I was always the guy sleeping in the third row.)
We want our teaching to be interesting and effective so there's an obvious motivation to get away from this older approach. ETH has an interesting set-up where the exam is 10 months after the lecture, which leads to interesting learning strategies for students who can't solve the instrumentality problem of tying work now into success in the future. Also, ETH had to create an online platform to get around all of the "my machine doesn't work" problems, by removing the requirement to install an IDE. The final point of motivation was to improve their delivery.
The first residential version of the course ran in 2003, with lectures and exercise sessions. The lectures are in German and the exercise sessions are in English and German, because English is so dominant in CS. There are 10 extensive home assignments including programming, and exercise-session groups are formed according to students' perceived programming proficiency level. (Note on the last point: hmmm, so people who can't program are grouped together with other people who can't program? I believe that the speaker clarified this as "self-perceived" ability but I'm still not keen on this kind of streaming. If this worked effectively, then any master/apprentice model should automatically fail.) Students were able to switch groups after a week, whether for language reasons or because the group wasn't working out.
The learning platform for the activity was Moodle and their experience with it was pretty good, although it didn’t do everything that they wanted. (They couldn’t put interactive sessions into a lecture, so they produced a lecture-quiz plug-in for Moodle. That’s very handy.) This is used in conjunction with a programming assessment environment, in the cloud, which ties together the student performance at programming with the LMS back-end.
The SPOC components are:
- lectures, with short intros and video segments up to 17 minutes. (Going to drop to 10 minutes based on student feedback),
- quizzes, during lectures, testing topic understanding immediately, and then testing topic retention after the lecture,
- programming exercises, with hands-on practice and automatic feedback
Feedback given to the students included the quizzes, with a badge for 100% score (over unlimited attempts so this isn’t as draconian as it sounds), and a variety of feedback on programming exercises, including automated feedback (compiler/test suite based on test cases and output matching) and a link to a suggested solution. The predefined test suite was gameable (you could customise your code for the test suite) and some students engineered their output to purely match the test inputs. This kind of cheating was deemed to be not a problem by ETH but it was noted that this wouldn’t scale into MOOCs. Note that if someone got everything right then they got to see the answer – so bad behaviour then got you the right answer. We’re all sadly aware that many students are convinced that having access to some official oracle is akin to having the knowledge themselves so I’m a little cautious about this as a widespread practice: cheat, get right answer, is a formula for delayed failure.
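To make the gaming concrete: when the visible suite simply matches outputs against fixed inputs, a "solution" can hardcode the answers. A minimal invented example (not ETH's actual tests):

```python
# How a fixed, visible, output-matching test suite gets gamed (invented
# example, not ETH's actual suite): hardcode the published test cases
# instead of implementing the function.
KNOWN_ANSWERS = {(2, 3): 5, (10, -4): 6, (0, 0): 0}  # the visible test cases

def add(a, b):
    return KNOWN_ANSWERS[(a, b)]  # passes every visible test, computes nothing

for (a, b), expected in KNOWN_ANSWERS.items():
    assert add(a, b) == expected
print("All visible tests pass")  # hidden or randomised inputs would catch this
```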
Reporting for each student included their best attempt and past attempts. For the TAs, they had a wider spread of metrics, mostly programmatic and mark-based.
On looking at the results, attendance at the on-line lectures was 71%, while the live course attendance remained stable. Neither on-line quizzes nor programming exercises counted towards the final grade. Quiz attempts were about 5x the attendance, and 48% scored 100% and got the badge, significantly more than the 5-10% who would usually do this.
Students worked on 50% of the programming exercises. 22% of students worked on 75-100% of the exercises. (There was a lot of emphasis on the badge – and I’m really not sure if there’s evidence to support this.)
The lessons learned summarised what I've put above: shorten video lengths, face-to-face is important, MCQs can be creative, gamification matters, and better feedback is required on top of the existing automatic feedback.
The group are scaling from SPOC to MOOC with a Computing: Art, Magic, Science course on EdX launching later on in 2014.
I asked a question about the badges because I was wondering if putting in the statement “100% in the quiz is so desirable that I’ll give you a badge” was what had led to the improved performance. I’m not sure I communicated that well but, as I suspected, the speaker wants to explore this more in later offerings and look at how this would scale.
The next session was "Teaching and learning with MOOCs: Computing academics' perspectives and engagement", presented by Anna Eckerdal. The work was put together by a group composed from Uppsala, Aalto, Maco and Monash – which illustrates why we all come to conferences, as this workgroup was put together in a coffee-shop discussion in Uppsala! The discussion stemmed from the early "high hype" mode of MOOCs, which were highly polarising: colleagues either loved them or hated them. What was the evidence to support either argument? Academics' experience of and views on MOOCs were sought via a questionnaire sent out to the main e-mail lists, to CS and IT people.
The study ran over June-July 2013, with 236 responses from more than 90 universities, using closed- and open-ended questions. What were the research questions? What are the community's views on MOOCs from a teaching perspective (positive and negative), and how have people been incorporating them into their existing courses? (Editorial note: clearly defined study with a precise pair of research questions – nice.)
Interestingly, the sentiment most people had heard expressed about MOOCs was concern, followed by positive, then confused, then negative, then excited, then uninformed, then uninterested and, finally, some 10% of people who have apparently been living in a time-travelling barrel in Ancient Greece, because in 2013 they had heard no MOOC discussion.
Several themes were identified as prominent in the positive/negative aspects, all associated with the core theme of teaching and learning. (The speaker outlined the way that the classification had been carried out, which is always interesting for a coding problem.) Anna reiterated the issue of a MOOC as a personal power enhancer: a MOOC can make a teacher famous, which may also be attractive to the Uni. The sub-themes were pedagogy and learning environment, affordances of MOOCs, interaction and collaboration, assessment and certificates, and accessibility.
Interestingly, some of the positive answers included references to debunked approaches (such as learning styles) and the potential for improvements. The negatives (and there were many of them) referred to stone-age learning and the lack of relations.
On affordances of MOOCs, there were mostly positive comments: helping students with professional skills, refreshing existing skills and learning new ones, try-before-you-buy, and the ability to transcend the tyranny of geography. The negatives included the economic issue of only popular courses being available, the fact that not all disciplines can go on-line, and that there is no scaffolding for identity development in the professional sense, nor support for the development of critical thinking or teamwork. (Not sure if I agree with the last two as that seems to be based on the way that you put the MOOC together.)
I’m afraid I missed the slide on interaction and collaboration so you’ll (or I’ll) have to read the paper at some stage.
There was nothing positive about assessment and certificates: course completion rates are low, it's unclear what can reasonably be assessed, plagiarism looms, and how do we certify any of this? How does a student from a MOOC compete with a student from a face-to-face University?
1/3 of the respondents answered about accessibility, with many positive comments on "anytime, anywhere, at one's own pace". We can (somehow) reach non-traditional student groups. (Note: there is a large amount of contradictory evidence on this one; MOOCs are even worse than traditional courses here. Check out Mark Guzdial's CACM blog on this.) Other answers were "access to world class teachers" and the "opportunity to learn from experts in the field". Interesting, given that the mechanism (from other answers) is so flawed that world-class teachers would barely survive MOOCification!
On Academics’ engagement with MOOCs, the largest group (49%) believed that MOOCs had had no effect at all, about 15% said it had inspired changes, roughly 10% had incorporated some MOOCs. Very few had seen MOOCs as a threat requiring change: either personally or institutionally. Only one respondent said that their course was a now a MOOC, although 6% had developed them and 12% wanted to.
For the open-ended question on academics' engagement, most believed that no change was required because their teaching was superior. (Hmm.) A few reported changes to teaching that were similar to MOOCs (on-line materials or automated assessment) but weren't influenced by them.
There’s still no clear vision of the role of MOOCs in the future: concerned is as prominent as positive. There is a lot of potential but many concerns.
The authors had several recommendations: focus on active learning, do a lot more research into automatic assessment and feedback methods, and develop good University policy regarding certification and the role of on-site and MOOC curricula. Uppsala has started the process of thinking about policy.
The first question was “how much of what is seen here would apply to any new technology being introduced” with an example of the similar reactions seen earlier to “Second Life”. Anna, in response, wondered why MOOC has such a global identity as a game-changer, given its similarity to previous technologies. The global discussion leads to the MOOC topic having a greater influence, which is why answering these questions is more important in this context. Another issue raised in questions included the perceived value of MOOCs, which means that many people who have taken MOOCs may not be advertising it because of the inherent ranking of knowledge.
@patitsel raised the very important issue that under-represented groups are even more under-represented in MOOCs – you can read through Mark’s blog to find many good examples of this, from cultural issues to digital ghettoisation.
The session concluded with "Augmenting PBL with Large Public Presentations: A Case Study in Interactive Graphics Pedagogy". The presenter was a freshly graduated student who had completed the course three weeks ago, so he was here to learn and get constructive criticism. (Ed's note: he's in the right place. We're very inquisitive.)
Ooh, brave move. He’s starting with anecdotal evidence. This is not really the crowd for that – we’re happy with phenomenographic studies and case studies to look at the existence of phenomena as part of a study, but anecdotes, even with pictures, are not the best use of your short term in front of a group of people. And already a couple of people have left because that’s not a great way to start a talk in terms of framing.
I must be honest, I slightly lost track of the talk here. EBL was defined as project-based learning augmented with constructively aligned public expos, with gamers as the target audience. The speaker noted that “gamers don’t wait” as a reason to have strict deadlines. Hmm. Half Life 3 anyone? The goal was to study the pedagogical impact of this approach. The students in the study had to build something large, original and stable, to communicate the theory, work as a group, demonstrate in large venues and then collaborate with a school of communication. So, it’s a large-scale graphics-based project in teams with a public display.
…
Grading was composed of proposals, demos, presentations and open houses. Two projects (50% and 40%) and weekly assignments (10%) made up the whole grading scheme. The second project came out after the first big Game Expo demonstration. Project 1 had to be interactive and was done in groups of 3-4. The KTH visualisation studio was an important part of this and it is apparently full of technology, which is nice, and we got to hear about a lot of it. Collaboration is a strong part of the visualisation studio, which was noted in response to the keynote. The speaker mentioned some of the projects and it's obvious that they are producing some really good graphics projects.
I’ll look at the FaceUp application in detail as it was inspired by the idea to make people look up in the Metro rather than down at their devices. I’ll note that people look down for a personal experience in shared space. Projecting, even up, without capturing the personalisation aspect, is missing the point. I’ll have to go and look at this to work out if some of these issues were covered in the FaceUp application as getting people to look up, rather than down, needs to have a strong motivating factor if you’re trying to end digitally-inspired isolation.
The experiment was to measure the impact of EXPOs on ILOs, using participation, reflection, surveys and interviews. The speaker noted that doing coding on a domain of knowledge you feel strongly about (potentially to the point of ownership) can be very hard, as biases creep in; I find this one of the real challenges in trying to do grounded theory work, personally. I'm not all that surprised that students felt that the EXPO had a greater impact than something smaller, especially where the experiment was effectively created with a larger-weight first project and a high-impact first deliverable. In a biological human sense, project 2 is always going to be at risk of being in the refractory period, the period after stimulation during which a nerve or muscle is less able to be stimulated. You can only get so excited about the development, because development is always going to be very similar, but it's not surprising that a small-scale pop is not as exciting as a giant boom, especially when the boom comes first.
How do we grade things like this? It's a very good question – of course, the first question is why are we grading this? Do we need to be able to grade this sort of thing, or just note that it's met a professional standard? How can we scale this sort of thing up, especially when the main function of the coordinator is as a cheerleader and relationships are essential? Scaling up relationships is very, very hard. Talking to everyone in a group means that the number of conversations you have is going to grow at an incredibly fast rate (quadratically, in fact – see the quick sketch below). Plus, we know that we have an upper bound on the number of relationships we can actually have – remember Dunbar's number of 120-150 or so? An interesting problem to finish on.
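As a quick back-of-the-envelope on that growth rate: pairwise conversations scale quadratically with group size, which is the arithmetic behind the scaling problem (the class sizes here are just examples):

```python
# Pairwise conversations grow quadratically: n * (n - 1) / 2.
# This is the arithmetic behind "scaling up relationships is very, very hard".
def pairwise_conversations(n):
    return n * (n - 1) // 2

for n in (12, 30, 100, 150):  # 150 is roughly the top of Dunbar's range
    print(n, pairwise_conversations(n))
# 12 -> 66, 30 -> 435, 100 -> 4950, 150 -> 11175
```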