When Does Collaborative Work Fall Into This Trap?
Posted: September 11, 2014 Filed under: Education | Tags: advocacy, blogging, collaboration, community, crowdsourcing, curriculum, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, in the student's head, interested parties, learning, principles of design, student perspective, students, teaching, teaching approaches, thinking, universal principles of design, University of Southampton, Victor Naroditskiy

A recent study has shown that crowdsourcing activities are prone to bringing out competitors' worst instincts.
“[T]he openness makes crowdsourcing solutions vulnerable to malicious behaviour of other interested parties,” said one of the study’s authors, Victor Naroditskiy from the University of Southampton, in a release on the study. “Malicious behaviour can take many forms, ranging from sabotaging problem progress to submitting misinformation. This comes to the front in crowdsourcing contests where a single winner takes the prize.” (emphasis mine)
You can read more about it here but it’s not a pretty story. Looks like a pretty good reason to be very careful about how we construct competitive challenges in the classroom!
Knowing the Tricks Helps You To Deal With Assumptions
Posted: September 10, 2014 Filed under: Education | Tags: authenticity, blogging, card shouting, collaboration, community, curriculum, data visualisation, design, education, educational problem, educational research, Heads Heads, higher education, Law of Small Numbers, random numbers, random sequence, random sequences, randomness, reflection, resources, students, teaching, teaching approaches, tools

I teach a variety of courses, including one called Puzzle-Based Learning, where we try to teach thinking and problem-solving techniques through the use of simple puzzles that don't depend on too much external information. These domain-free problems have most of the characteristics of more complicated problems, but you don't have to be an expert in the specific area of knowledge to attempt them. The other thing that we've noticed over time is that a good puzzle is fun to solve, fun to teach and gets passed on to other people – a form of infectious knowledge.
Some of the most challenging areas to teach are those that deal with probability and statistics, as I've touched on before in this post. As always, when an area is harder to understand, it requires us to teach better – although I do draw the line at trying to coerce students into believing me through the power of my mind alone. But there are some very handy ways to challenge students' assumptions about the nature of probability (and randomness) so that they become receptive to the idea that their models might need improvement (allowing us to work on that uncertainty) and can also start to understand probability correctly.
We are ferociously good pattern matchers and this means that we have some quite interesting biases in the way that we think about the world, especially when we try to think about random numbers, or random selections of things.
So, please humour me for a moment. I have flipped a coin five times and recorded the outcome here. But I have also made up three other sequences. Look at the four sequences for a moment and pick which one is most likely to be the one I generated at random – don’t think too much, use your gut:
- Tails Tails Tails Heads Tails
- Tails Heads Tails Heads Heads
- Heads Heads Tails Heads Tails
- Heads Heads Heads Heads Heads
Have you done it?
I’m just going to put a bit more working in here to make sure that you’ve written down your number…
I've run this with students: I ask them to produce a sequence by flipping coins, and then to produce a false sequence by making subtle changes to the generated one (turning heads into tails, changing a couple along the way). They then write the two together on a board and people have to vote on which one is which. As it turns out, the chances of someone picking the right sequence are about 50/50, but I engineered that by starting from a generated sequence.
This is a fascinating article that looks at the overall behaviour of people. If you ask people to write down a five-coin sequence that is random, 78% of them will start with heads. So, chances are, you've picked 3 or 4 as your starting sequence. When it comes to random sequences, most of us equate random with well-shuffled and, on the large scale, 30 times as many people would prefer option 3 to option 4. (This is where someone leaps into the comments to say "A-ha" but, it's ok, we're talking about overall behavioural trends. Your individual experience and approach may not be the dominant behaviour.)
From a teaching point of view, this is a great way to break apart the concept of random sequences from the inherent notion that such sequences must be disordered. There are 32 different ways of flipping 5 coins in a strict sequence like this and all of them are equally likely. It's only when we start talking about the likelihood of getting all heads versus not getting all heads that the aggregated event of "at least one tail" starts to be more likely.
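If you want to demonstrate this rather than just assert it, a few lines of Python will enumerate the possibilities. This is a quick sketch of my own, not something from the article above:

from itertools import product

# Enumerate every strict sequence of five fair coin flips.
sequences = list(product("HT", repeat=5))
print(len(sequences))  # 32 sequences, each with probability 1/32

# "All heads" is exactly one of the 32 sequences...
print(sum(s == ("H",) * 5 for s in sequences) / len(sequences))  # 0.03125

# ...and any other specific sequence, however "random" it looks, is exactly as likely:
print(sum(s == tuple("TTTHT") for s in sequences) / len(sequences))  # 0.03125

# But the aggregated event "at least one tail" covers 31 of the 32 sequences:
print(sum("T" in s for s in sequences) / len(sequences))  # 0.96875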
How can we use this? One way is getting students to write down their sequences, then asking them to stand up and to sit down when your 'call' (read from a script) goes the other way to their sequence. If almost everyone is still standing at the first call of heads then you've illustrated that you know something about how their "randomisers" work. A lot of people (if your class is big enough) should still be standing when the final coin is revealed, and that's something we can address. Why do so many people think about it this way? Are we confusing random with chaotic?
The Law of Small Numbers (Tversky and Kahneman), also mentioned in the post, is basically the observation that people generalise too much from small samples: they expect small samples to act like big ones. In your head, if the grand pattern over time could be resorted into "heads, tails, heads, tails, …" then small sequences must match that or they just don't look right. This is an example of the logical fallacy called a "hasty generalisation", but with a mathematical flavour. We are strongly biased towards the validity of our own experiences, so when we generate a random sequence (or pick a lucky door, or win the first time at the poker machines) we generalise from this small sample and can become quite resistant to other discussions of possible outcomes.
If you have really big classes (367 or more) then you can start a discussion on random numbers by asking people what the chances are that any two people in the room share a birthday. Given that there are only 366 possible birthdays, the Pigeonhole principle guarantees that, in a class of 367, at least two people must share one: there are only 366 birthdays to go around, so at least one must be repeated! (Note for future readers: don't try this in a class of clones.) There are lots of other interesting thinking examples in the link to Wikipedia that help you to frame randomness in a way that your students might be able to understand better.
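And if your class is smaller than 367, the birthday problem proper still makes a lovely discussion starter. Here's a quick sketch of my own (not from the Wikipedia page) of the chance that at least two of n people share a birthday, assuming 365 equally likely birthdays:

# Probability that at least two of n people share a birthday.
def shared_birthday_probability(n: int) -> float:
    if n > 365:
        return 1.0  # the Pigeonhole principle takes over here
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

for n in (10, 23, 50, 100):
    print(n, round(shared_birthday_probability(n), 3))
# Prints roughly: 10 0.117, 23 0.507, 50 0.97, 100 1.0
# The probability crosses 50% at just 23 people, which surprises almost everyone.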
I’ve used a lot of techniques before, including the infamous card shouting, but the new approach from the podcast is a nice and novel angle to add some interest to a class where randomness can show up.
CodeSpells! A Kickstarter to make a difference. @sesperu @codespells #codespells
Posted: September 9, 2014 Filed under: Education, Opinion | Tags: advocacy, blockly, blogging, Code Spells, codespells, community, education, educational problem, educational research, games, Generation Why, higher education, in the student's head, learning, principles of design, resources, Sarah Esper, Stephen Foster, student perspective, students, teaching, teaching approaches, thinking, tools, UCSD, universal principles of design

I first met Sarah Esper a few years ago when she was demonstrating the earlier work in her PhD project with Stephen Foster on CodeSpells, a game-based project to start kids coding. In a pretty enjoyable fantasy game environment, you'd code up spells to make things happen and, along the way, learn a lot about coding. Their team has grown and things have come a long way since then for CodeSpells, and they're trying to take it from its research roots into something that can be used to teach coding on a much larger scale. They now have a Kickstarter out, which I'm backing (full disclosure), to get the funds they need to take things to that next level.
Teaching kids to code is hard. Teaching adults to code can be harder. There's a big divide these days between the role of user and creator in the computing world and, while we have growing literacy in use, we still have a long way to go to get more and more people creating. The future will be programmed and programming is, honestly, a new form of literacy that our children will benefit from.
If you’re one of my readers who likes the idea of new approaches to education, check this out. If you’re an old-timey Multi-User Dungeon/Shared Hallucination person like me, this is the creative stuff we used to be able to do on-line, but for everyone and with cool graphics in a multi-player setting. If you have kids, and you like the idea of them participating fully in the digital future, please check this out.
To borrow heavily from their page, 60% of jobs in science, technology, engineering and maths are computing jobs, but AP Computer Science is taught at only 5% of schools. We have a giant shortfall of software people coming and it will be an ugly crash when it arrives, because all of the nice things we have become used to on the computing side will slow down and, in some cases, pretty much stop. Invest in the future!
I have no connection to the project apart from being a huge supporter of Sarah’s drive and vision and someone who would really like to see this project succeed. Please go and check it out!
I have a new book out: A Guide to Teaching Puzzle-based learning. #puzzlebasedlearning #education
Posted: September 5, 2014 Filed under: Education, Opinion | Tags: blogging, colleagues, curriculum, design, Ed Meyer, education, educational problem, Generation Why, higher education, in the student's head, raja sooriamurthi, reflection, resources, shameless self-promotion, student perspective, students, teaching, teaching approaches, thinking, universal principles of design, work/life balance, workload, Zbyszek Michalewicz

Time for some pretty shameless self-promotion. Feel free to stop reading if that will bother you.
My colleagues, Ed Meyer from BWU, Raja Sooriamurthi from CMU and Zbyszek Michalewicz (emeritus from my own institution), and I have just released a new book, called "A Guide to Teaching Puzzle-based learning." What a labour of love this has been and, better yet, we are still talking to each other. In fact, we're planning some follow-up events next year to do some workshops around the book, so it'll be nice to work with the team again.
(How to get it? This is the link to Springer, paperback and e-Book. This is the link to Amazon, paperback only I believe.)
Here’s a slightly sleep-deprived and jet-lagged picture of me holding the book as part of my “wow, it got published” euphoria!
The book is a resource for the teacher; it's written for teachers from primary to tertiary, and it should be quite approachable for the home-school environment as well. We spent a lot of time making it approachable, sharing tips for students and teachers alike, and trying to get all of our knowledge about how to teach well with puzzles down into the one volume. I think we pretty much succeeded. I've field-tested the material here at universities, schools and businesses, with very good results across the board. We build on a good basis and we love sound practical advice. This is, very much, a book for the teaching coalface.
It’s great to finally have it all done and printed. The Springer team were really helpful and we’ve had a lot of patience from our commissioning editors as we discussed, argued and discussed again some of the best ways to put things into the written form. I can’t quite believe that we managed to get 350 pages down and done, even with all of the time that we had.
If you or your institution has a connection to SpringerLink then you can read it online as part of your subscription. Otherwise, if you’re keen, feel free to check out the preview on the home page and then you may find that there are a variety of prices available on the Web. I know how tight budgets are at the moment so, if you do feel like buying, please buy it at the best price for you. I’ve already had friends and colleagues ask what benefits me the most and the simple answer is “if people read it and find it useful”.
To end this disgraceful sales pitch, we’re actually quite happy to run workshops and the like, although we are currently split over two countries (sometimes three or even four), so some notice is always welcome.
That’s it, no more self-promotion to this extent until the next book!
Talking Ethics with the Terminator: Using Existing Student Experience to Drive Discussion
Posted: September 5, 2014 Filed under: Education | Tags: authenticity, community, curriculum, education, educational problem, ethical issues, ethics, feedback, Generation Why, higher education, in the student's head, learning, principles of design, reflection, resources, student perspective, students, teaching, teaching approaches, thinking

One of the big focuses at our University is the Small-Group Discovery Experience, an initiative from our overall strategy document, the Beacon of Enlightenment. You can read all of the details here, but the essence is that a small group of students and an experienced research academic meet regularly to start the students down the path of research, picking up skills in an active learning environment. In our school, I've run it twice as part of the professional ethics program. This second time around, I think it's worth sharing what we did, as it seems to be working well.
Why ethics? Well, this is first year and it's not all that easy to do research into Computing if you don't have much foundation, but professional skills are part of our degree program, so we looked at an exploration of ethics to build that foundation. We cover ethics in more detail in second and third year, but first year gets only a quick "and this is ethics" lecture that doesn't give our students much room to explore the detail. Like many of the more intellectual topics we deal with, ethical understanding comes from contemplation and discussion – unless we just want to try to jam a badly fitting moral compass on to everyone and be done.
Ethical issues, rather than formal theory, are the best introduction to the area, as much of the formal terminology can be quite intimidating for students who regard themselves as CS majors or Engineers first and may never have contemplated their role as moral philosophers. But the real-world situations where ethical practice is most illuminating are often quite depressing and, from experience, sessions in medical ethics and similar rapidly close down discussion because they can be very upsetting. We took a different approach.
The essence of any good narrative is the tension that is generated from the conflict it contains and, in stories that revolve around artificial intelligence, robots and computers, this tension often comes from what are fundamentally ethical issues: the machine kills, the computer goes mad, the AI takes over the world. We decided to ask the students to find two works of fiction, from movies, TV shows, books and games, to look into the ethical situations contained in anything involving computers, AI and robots. Then we provided them with a short suggested list of 20 books and 20 movies to start from and let them go. Further guidance asked them to look into the active ethical agents in the story – who was doing what and what were the ethical issues?
I saw the students after they had submitted their two short paragraphs on this and I was absolutely blown out of the water by their informed, passionate and, above all, thoughtful answers to the questions. Debate kept breaking out on subtle points. The potted summary of ethics that I had given them (follow the rules, aim for good outcomes or be a good person – sorry, ethicists) provided enough detail for the students to identify issues in rule-based approaches, utilitarianism and virtue ethics, but I could then introduce terms to label what they had already done, as they were thinking about them.
I had 13 sessions with a total of 250 students and it was the most enjoyable teaching experience I’ve had all year. As follow-up, I asked the students to enter all of their thoughts on their entities of choice by rating their autonomy (freedom to act), responsibility (how much we could hold them to account) and perceived humanity, using a couple of examples to motivate a ranking system of 0-5. A toddler is completely free to act (5) and completely human (5) but can’t really be held responsible for much (0-1 depending on the toddler). An aircraft autopilot has no humanity or responsibility but it is completely autonomous when actually flying the plane – although it will disengage when things get too hard. A soldier obeying orders has an autonomy around 5. Keanu Reeves in the Matrix has a humanity of 4. At best.
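To give a flavour of what the students were recording, here's a hypothetical sketch of some entries in Python. The structure is my illustration for this post, not the schema of the real system, and the unrated values are deliberately left open for argument:

# Each entity is rated 0-5 on autonomy, responsibility and perceived humanity.
ratings = {
    # Completely free to act and completely human, but barely accountable.
    "toddler": {"autonomy": 5, "responsibility": 0, "humanity": 5},
    # Completely autonomous while flying, no humanity or responsibility.
    "aircraft autopilot": {"autonomy": 5, "responsibility": 0, "humanity": 0},
    # Autonomy around 5; the other two axes are for the class to argue about.
    "soldier obeying orders": {"autonomy": 5, "responsibility": None, "humanity": None},
    # Keanu Reeves in The Matrix: humanity of 4. At best.
    "Neo": {"autonomy": None, "responsibility": None, "humanity": 4},
}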
They've now filled the database up with their thoughts and next week we're going to discuss all of their 0-5 ratings as small groups, then place them on a giant timeline of achievements in literature, technology and AI, also listing major events such as wars, to see if we can explain why authors presented the work that they did. When did we start to regard machines as potentially human, and what did the world seem like then to the people who were there?
This was a lot of fun and, while it's taken a little bit of setting up, the framework works well because students have seen quite a lot already; the trick is just getting them to think about it with our ethical lens. Highly recommended.
ITiCSE 2014, Day 3, Final Session, “CS Ed Research”, #ITiCSE2014 #ITiCSE
Posted: June 26, 2014 Filed under: Education | Tags: curriculum, education, educational problem, educational research, higher education, ITiCSE, ITiCSE2014, Paul Denny, programming, reflection, Scratch, student perspective, syntax errors, teaching, teaching approaches, universal principles of design, vygotsky, Zone of proximal development

The first paper in the final session was "Effect of a 2-week Scratch Intervention in CS1 on Learners with Varying Prior Knowledge", presented by Shitanshu Mishra, from IIT Bombay. The CS1 course context is a single programming course for all freshman engineering students, so it has to work for novice and advanced learners alike. It's the usual problem: novices get daunted and advanced learners get bored. (We had this problem in the past.) The proposed solution is to use Scratch, because it's low-floor (easy to get started), high-ceiling (you can build complex projects) and wide-walls (it applies to a wide variety of topics and themes). Thus it should work for both novice and advanced learners.
The theoretical underpinning is that novice learners reach cognitive overload while trying to learn techniques for programming and a language at the same time. One way to reduce cognitive load is to use visual programming environments such as Scratch. For advanced learners, Scratch can provide a sufficiently challenging set of learning material. From the perspective of Flow theory, students need to reach equilibrium between challenge level and perceived skill.
The research goal was to investigate the impact of a two-week intervention in a college course that will transition to C++. What would novices learn in terms of concepts and C++ transition? What would advanced students learn? What was the overall impact on students?
The cohort was 450 students, no CS majors, with a variety of advanced and novice learners, with a course objective of teaching programming in C++ across 14 weeks. The Scratch intervention took place over the first four weeks in terms of teaching and assessment. Novice scaffolding was achieved by ramping up over the teaching time. Engagement for advanced learners was achieved by starting the project early (second week). Students were assessed by quizzes, midterms and project production, with very high quality projects being demonstrated as Hall of Fame projects.
Students were also asked to generate questions on what they learned and these could be used for other students to practice with. A survey was given to determine student perception of usefulness of the Scratch approach.
The results for Novices were presented. While the Novices were able to catch up in basic Scratch comprehension (predicting output and debugging code), this didn't translate into writing code in Scratch or debugging programs in C++. For question generation, Novices were comparable to advanced learners in the number of questions generated on sequences, conditionals and data. For threads, events and operators, Novices generated more questions – although I'm not sure I see the link that demonstrates that they definitely understood the material. Unsurprisingly, given the code-writing results, Novices were weaker in loops and similar programming constructs. More than 53% of Novices thought the Scratch framing was useful.
In terms of Advanced learner engagement, there were more Advanced projects generated. Unsurprisingly, Advanced projects were far more complicated. (I missed something about Most-Loved projects here. Clarification in the comments please!) I don’t really see how this measures engagement – it may just be measuring the greater experience.
Summarising, Scratch seemed to help Novices but not with actual coding or working with C++, but it was useful for basic concepts. The author claims that the larger complexity of Advanced user projects shows increased engagement but I don’t believe that they’ve presented enough here to show that. The sting in the tail is that the Scratch intervention did not help the Novices catch up to the Advanced users for the type of programming questions that they would see in the exam – hence, you really have to question its utility.
The next paper is “Enhancing Syntax Error Messages Appears Ineffectual” presented by Paul Denny, from The University of Auckland. Apparently we could only have one of Paul or Andrew Luxton-Reilly, so it would be churlish to say anything other than hooray for Paul! (Those in the room will understand this. Sorry we missed you, Andrew! Catch up soon.) Paul described this as the least impressive title in the conference but that’s just what science is sometimes.
Java is the teaching language at Auckland, about to switch to Python, which means no fancy IDEs like Scratch or Greenfoot. Paul started by discussing a Java statement with a syntax error in it, which gave two different (but equally unhelpful) error messages for the same error.
if (a < 0) || (a > 100) error=true;

// The error is in the top line: the condition as a whole needs surrounding
// parentheses, i.e. if ((a < 0) || (a > 100)).
// One compiler will report that a ';' is required at the ||, which doesn't
// solve the right problem.
// The other compiler says that another if statement is required at the ||.
// Both of these are unhelpful – as well as being wrong. Neither was what we intended.
The conclusion (given early) is simple: enhancing the error messages, in a controlled empirical study, had no significant effect. This work came from thinking about an early programming exercise that was quite straightforward but seemed to cause students a lot of grief. For those who don't know, programs won't run until we fix the structural problems in how we put the program elements together: syntax errors have to be fixed before the program will run. Until the program runs, we get no useful feedback, just (often cryptic) error messages from the compiler. Students will give up if they don't make progress in a reasonable interval, and a lack of feedback is very disheartening.
The hypothesis was that providing more useful error messages for syntax errors would “help” users, help being hard to quantify. These messages should be:
- useful: simple language, informal language and targeting errors that are common in practice. Also providing example code to guide students.
- helpful: reduce the number of non-compiling submissions in total, reduce number of consecutive non-compiling submissions AND reduce the number of attempts to resolve a specific error.
In related work, Kummerfeld and Kay (ACE 2003), “The neglected battle fields of Syntax Errors”, provided a web-based reference guide to search for the error text and then get some examples. (These days, we’d probably call this Stack Overflow. 🙂 ) Flowers, Carver and Jackson, 2004, developed Gauntlet to provide more informal error messages with user-friendly feedback and humour. The paper was published in Frontiers in Education, 2004, “Empowering Students and Building Confidence in Novice Programmers Through Gauntlet.” The next aspect of related work was from Tom Schorsch, SIGCSE 1995, with CAP, making specific corrections in an environment. Warren Toomey modified BlueJ to change the error subsystem but there’s no apparent published work on this. The final two were Dy and Rodrigo, Koli Calling 2010, with a detector for non-literal Java errors and Debugging Tutor: Preliminary evaluation, by Carter and Blank, KCSC, January 2014.
The work done by the authors was in CodeWrite (written up in SIGCSE 2011 and ITiCSE 2011, both under Denny et al). All students submit non-compiling code frequently, so maybe better feedback will help, and could influence existing systems such as Nifty reflections (cloud bat) and CloudCoder. In the study, students had 10 problems they could choose from, each with a method, description and return result. The students were split in an A/B test, where half saw the raw feedback and half saw the enhanced message. The team built an error recogniser that analysed over 12,000 submissions with syntax errors from a 2012 course; the raw compiler message identified the error 78% of the time ("All Syntax Errors are Not Equal", ITiCSE 2012). In other cases, static analysis was used to work out what the error was. Eventually, 92% of the errors were classifiable from the 2012 dataset. Anything not in that group was shown to the student as the raw error message.
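To make the idea concrete, here's a minimal sketch in Python of mapping raw compiler output to a friendlier message. This is my illustration only: the authors' recogniser was far more sophisticated, classifying real javac messages and falling back to static analysis.

import re

# Hypothetical mapping from raw javac-style error text to enhanced feedback.
ENHANCERS = [
    (re.compile(r"';' expected"),
     "It looks like a condition isn't fully wrapped in parentheses. "
     "Every if-condition needs one outer pair: if ((a < 0) || (a > 100))"),
    (re.compile(r"cannot find symbol"),
     "You're using a name that hasn't been declared. Check the spelling "
     "and that the variable is declared before this line."),
]

def enhance(raw_message: str) -> str:
    for pattern, friendly in ENHANCERS:
        if pattern.search(raw_message):
            return friendly
    return raw_message  # unrecognised errors fall back to the raw message

print(enhance("error: ';' expected"))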
In the randomised controlled experiment, 83 students had to complete the 10 exercises (worth 1% each), using the measures of:
- number of consecutive non-compiling submissions for each exercise
- Total number of non-compiling submissions
- … and others.
Do students even read the error messages? This would explain the lack of impact. However, examining student code changes, there does appear to be a response to the error messages received, although it can be slow and piecemeal. There was a difference between the groups – a 17% reduction in non-compiling submissions – but it wasn't statistically significant.
I find this very interesting because the lack of significance is slightly unexpected, given that increased expressiveness and ease of reading should make it easier for people to find errors, especially with the provision of examples. I’m not sure that this is the last word on this (and I’m certainly not saying the authors are wrong because this work is very rigorous) but I wonder what we could be measuring to nail this one down into the coffin.
The final talk was “A Qualitative Think-Aloud Study of Novice Programmers’ Code Writing Strategies”, which was presented by Tony Clear, on behalf of the authors. The aim of the work was to move beyond the notion of levels of development and attempt to explore the process of learning, building on the notion of schemas and plans. Assimilation (using existing schemas to understand new information) and accommodation (new information won’t fit so we change our schema) are common themes in psychology of learning.
We're really not sure how novice programmers construct new knowledge and we don't fully understand the cognitive process. We do know that learning to program is often perceived as hard. (Shh, don't tell anyone.) At the early stages, novice programmers have very few schemas to draw on, their knowledge is fragile and the cognitive load is very high.
Woohoo, a Vygotsky reference to the Zone of Proximal Development – there are things students know, things they can learn with help, and then the stuff beyond that. Perkins talked about attitudinal factors – movers, tinkerers and stoppers. Stoppers stop and give up in the face of difficulty, tinkerers fiddle until it works and movers actually make good progress and know what's going on. The final aspect of methodology was inductive theory construction, which I'll let you look up.
Think-aloud protocol requires the student to clearly vocalise what they are thinking about as they complete computational tasks on a computer, using retrospective interviews to address those points in the videos where silence, incomprehensibility or confused articulation made interpreting the result impossible. The scaffolding involved tutoring, task performance and follow-up. The programming tasks were in a virtual world-based programming environment, solving tasks of increasing difficulty.
How did they progress? Jacquie uses the term redirection to mean that the student has been directed to re-examine their work, but is not given any additional information. They’re just asked to reconsider what they’ve done. Some students may need a spur and then they’re fine. We saw some examples of students showing their different progression through the course.
Jacquie has added a new category, PLANNERS, which indicates that we can go beyond the Movers to explain the kind of behaviour we see in advanced students in the top quartile. Movers who stretch themselves can become planners if they can make it into the Zone of Proximal Development and, with assistance, develop their knowledge beyond what they’d be capable of by themselves. The More Competent Other plays a significant role in helping people to move up to the next level.
Full marks to Tony. Presenting someone else’s work is very challenging and you’d have to be a seasoned traveller to even reasonably consider it! (It was very nice to see the lead author recognising that in the final slide!)
ITiCSE 2014, Day 3, Session 7B, Peer Instruction, #ITiCSE2014 #ITiCSE
Posted: June 25, 2014 Filed under: Education | Tags: community, computer science education, curriculum, design, education, educational problem, educational research, flipped classroom, higher education, in the student's head, inverted classroom, ITiCSE, ITiCSE 2014, learning, peer instruction, teaching, teaching approaches, thinking

The first talk was "Peer Instruction: a Link to the Exam" presented by Daniel Zingaro from University of Toronto. Peer Instruction (PI) is an active learning pedagogy developed for physics and now heavily used in computing. Students complete a reading quiz prior to class and teachers use multiple-choice quizzes to assess knowledge. (You can look this one up in a number of places but I've discussed it here before a bit.) There's a lot of research that shows gains between individual and group vote, with enduring improvements in student learning. (We can use isomorphic questions to reduce the likelihood of copying.) Both students and instructors value the learning.
PI appears to demonstrate improved learning outcomes on the final exam grades, as well as perceived depth of learning. (Couple of studies here from Beth Simon et al, and Daniel himself, checking Beth’s results.) But what leads to this improved outcome? The peer discussion. The class wide discussion? Both? If one part isn’t useful then we can adapt it to make it more useful to Computer Scientists. Daniel is going to use isomorphic questions to investigate relationships between PI components and final exam grades.
The isomorphic questions test the same concept with different questions, where if they get the first one right, we hope that they get the second one right – and if people learn how to do one, then that knowledge flows on to the other. (The example given was of loop complexity in nested loops depending on different variables.)
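To make "isomorphic" concrete, here's a hypothetical pair in the spirit of that example (my sketch, not one of Daniel's actual questions) – the surface details differ but the reasoning is identical:

# Q1: in terms of n and m, how many times does count get incremented?
def q1(n, m):
    count = 0
    for i in range(n):
        for j in range(m):
            count += 1
    return count  # answer: n * m

# Q2 (isomorphic): same concept, superficially different code.
def q2(a, b):
    total = 0
    for x in range(b):
        for y in range(a):
            total += 1
    return total  # answer: b * a – if you can reason about Q1, Q2 follows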
Daniel used two slightly different question modes in this experiment. Both modes include the PI components, but the location of the isomorphic question varies. In the Peer (P) mode, the isomorphic question comes directly after the group vote; in the Combined (C) mode, the Q2 isomorphic question comes directly after the instructor has had a chance to influence the class.
Are the questions really isomorphic and of the same difficulty? An external ranker was used to verify this, and then the question pairs and modes were randomised. The difficulty of the questions was found to be statistically equivalent, based on the percentage of Q1 answers that were correct.
Daniel had two hypotheses. Firstly, that peer scores will correlate to final exam scores. Secondly, that combined scores will also correlate with final exam scores, but the correlation should be stronger than for Peer, with the Combined questions representing learning from the full PI cycle. In terms of the final exam, there were three measures of the final exam grades: total exam score, score on the tracing question (similar to PI questions) and score on a code-writing question (very different to PI questions).
The implementation was a CS1 course with 3 lectures/week, with reading quizzes worth 4% submitted prior to each lecture and clicker responses worth 5%. The lectures on average contained three PI cycles – one cycle per lecture contained the follow-up isomorphic question. Multiple regression was used to test relationships between PI and final exam scores.
All of the results were statistically significant. For code-tracing, what students knew before the exam explains 13% of their final exam scores. Adding the peer questions takes that to 16%; adding combined as well takes it to 19%. Is this practically significant? Daniel raised this question because it doesn't rise very much.
In terms of code writing, Baseline is 16%, + Peers is 22% and +Combined is 25%, so we’re starting to see more contribution from peers than instructor in this case. Are we measuring the different difficulty of a problem that peers couldn’t correct, which is why the instructor does less?
Overall? Baseline 21%, Peer 30% and then Combined is 34%. (Any questions about the stats, please read the paper. 🙂 )
Maybe adding combined questions to peer questions increases our predictive accuracy just because we're adding more data and are thus able to produce a better model?
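For readers who want to see what this kind of nested-model comparison looks like mechanically, here's a toy sketch on made-up data (my illustration, not the paper's analysis). Each model adds a predictor and we watch the variance explained (R²) rise:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(size=n)  # what students knew beforehand (e.g. Q1 scores)
peer = rng.normal(size=n)      # performance after the group vote
combined = rng.normal(size=n)  # performance after instructor-led discussion
exam = 0.4 * baseline + 0.2 * peer + 0.2 * combined + rng.normal(size=n)

for name, cols in [("baseline", [baseline]),
                   ("+ peer", [baseline, peer]),
                   ("+ combined", [baseline, peer, combined])]:
    X = sm.add_constant(np.column_stack(cols))
    print(name, round(sm.OLS(exam, X).fit().rsquared, 2))  # R² rises with each added predictor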
In discussion, PI performance related to final exam scores (as expected). Peer learning alone is important, and the instructor-led discussion is important over and above peer learning. This validates the role of the instructor in a "student-centred" classroom. Given that PI uses MCQs, we might expect it to correlate only with code-tracing, but it appears to correlate with code-writing problems as well – there may be deep conceptual similarities between PI questions and programming skills. But would the students who learned from PI also be the students who would have learned from any other form of instruction? That's still an open question and there's a lot of ongoing work to do.
The next paper was “Comparing Outcomes in Inverted and Traditional CS1” presented by Diane Horton from U Toronto. I’ve been discussing early intervention and student attendance issues in inverted/hybrid courses with Jennifer Campbell and Michelle Craig, also from U Toronto and also on this paper, as part of an attempt to get some good answers so I’d just come straight out of a lunch, discussing inverted classrooms and their outcomes. (Again, this is why we come to conferences – much of the value is in the meetings and discussion that are just so hard to have when you’re doing your day job or fitting a Skype meeting into the wee small hours to bridge the continental time gap.)
As a reminder, in inverted teaching, some or all of the material is delivered outside the classroom. Work typically done as homework is done in the lecture with the help of instructor or TAs. There were three research questions. Would the inverted offerings have better outcomes? Would inverted teaching affect students’ behaviour or experience? Would particular subgroups respond differently, especially English-language learners and beginner programmers?
The CS1 course at Toronto is a 12-week course in Python, objects-early, classes-late, with most students in first year and fewer than half looking to major in CS. There are five lecture sections of roughly 200 students each.
Before the lecture, students prepared by watching videos from the instructors, mostly screencasts of live programming with voiceover; credit for attempting the quizzes embedded in the videos is 0.5% per week. (It's scary how small that fraction has to be and really rather sad, from a behavioural perspective.)
During the lecture, the instructors used a worked example and students worked on worksheet-based exercises for most of the lectures, with assistance, solo or in pairs. This was a responsive teaching approach because the instructor could draw the class together as required. There was no mark reward or penalty that depended on attendance. If you were solid on the material, it was okay to miss the lecture. The mark scheme reflected some marks for lecture preparation with an increased number of online exercises and decreased weighting on labs.
The inverted CS1 course had gone well in the pilot in January 2013, which was published in a paper at SIGCSE '14, but it was hard to compare this with the previous class as the make-up of the cohort varies between September and January courses. The study was therefore run again on a more similar cohort in September 2013. The data presented here compares that cohort, with a high overlap of instructors, to the traditional offering.
For the present study, there were pre- and post-course surveys about attitude and behaviour, completed on paper in the lecture. Weekly lecture attendance counts were made and standard university course evaluations were collected. In terms of attendance, the inverted pilot was the lowest, and the inverted class had lower attendance most of the time – an effect that we have also seen under some circumstances and are still thinking about. Interestingly, students didn't see the inverted lectures as being as useful as face-to-face lectures, but the online support materials were seen to be very helpful. As a package, this seems to be an overall positive experience.
The hypothesis was that students in the inverted offering would self-report a higher quality of learning experience and greater enjoyment but this wasn’t supported in the data, nor was it for beginners in particular. However, when asked if they wanted more inverted courses, there was a very strong positive response to this question.
The authors expected that beginners would benefit more, because they need more help, and that the gap between beginners and experienced students would reduce – this wasn't supported by the data. There was no reduction of the gap for English-language learners, either.
Would the inverted course help people stay on and pass the course? Well, however success was defined, the pass rate was remarkably consistent, even when the beginners were isolated. However, it does appear that the overall level of knowledge, as measured by the final exam grades, actually improved in the inverted offerings, across two exams of similar difficulty, with a jump in the average grade from 66% to 74% between the terms. Is this just due to the inverted teaching?
Maybe students learned more in the inverted offering because they spent more time on task? Based on self-reported student time, this doesn’t appear to be true. Maybe the beginners got killed off early to reduce their numbers and raise the mark? No, the drop rates among beginners were the same. It appears that the 8 percentage point increase may be related to the inverted mode, although, obviously, more work is required.
Is it worth it? They used no additional TA resources, but the development time was enormous and you may not be ready for that investment. There are other options: you don't need to use videos, and you can use pre-existing materials to reduce costs.
Future work involves looking at dropping patterns – who drops when – and at students who stumble and recover. They're also looking at a full online CS1 course for course credit.
The final talk was on "Making Group Processes Explicit to Students: A Case for Justice", presented by Ville Isomöttönen. Project courses have a very long history in Computer Science, as capstones using authentic customer projects, and the intention is to provide a realistic experience. (Editor's note: it's worth noting that some of this may be coming from the "we got punished like this so you can be too" school of thought.) What do students actually learn from this? Are they learning what we want them to learn, or are they learning something very different and, potentially, much darker?
(This sounds like the kind of philosophical paper I’d give, let’s see where it goes! 🙂 )
If we have tertiary students, why can't we just place them into a workplace for work experience? They're adults – maybe we can separate this aspect from the pedagogy. The author's study looks at how to promote conceptual learning in the presence of realistic course work. Parker (1999) proposes that students spend their effort on building working products rather than actually learning about, and reflecting upon, the professional issues we consider important. The conjecture is that just because the situation is realistic doesn't mean that the conceptual learning is happening as we intended.
The study is based around a fairly straightforward project-based learning structure, but with a Pass/Fail grade and no distinction grading, as far as I could tell. The teaching was based on weekly group discussions, with self/peer evaluations, also housed in a group situation, and technical supervision offered by a teaching assistant. Throughout the course, students are prompted to think about their operation at a conceptual level. Hmm. I'm not sure what the speaker means by this as, without a very detailed description of what is going on, this could have many different implementations.
We then cut to a diagram of justice conceptualised – I may have missed something as I'm not quite sure how this sits with the group work. I can't find the diagram online, but it involves participation, involving and negotiating with others – fused together as the skill of justice. This sits above statuses, norms and roles. Some of the related work deals with fairness (Richards 2009) as a key attribute of successful group work, Clear (2002) uses it in a diagnostic technique, and Pieterse and Thompson (2010) mention 'social loafers' and 'diligent isolates'.
I’m dreadfully sorry, dear reader, but I’m not following this properly so this may be a bit sketchy. Go and read the paper and I’ll try to get this together. Everyone else in the room appears to be getting this so I may just be tired or struggling with my (not very good) hearing and someone who is speaking rather quietly.
The underlying pedagogy comes from the social realist mindset (Moore, 2000; Maton and Moore, 2010) and "avoids the dilemma between constructivist relativism and positivist absolutism". We should also look at Integrative Pedagogy (Tynjälä), of which the speaker feels that what they are describing is a realist version.
The course was surveyed with a preliminary small study (N=21/26, which is curious. Which one is it? Ah, 21 out of 26 enrolled, there we go.). The survey questions were… rather loose and very open to influence, unfortunately, from my quick glance at them but I will have to read the original paper.
Justice is a difficult topic to address, especially where it’s reified as a professional skill that can be developed, and discussing the notion of justice in terms of the ways that a group can work together fairly is very important. I suppose I’m not 100% convinced how much is added in this context through the use of a new term that is an apparent parent to communication and negotiation, with the desired outcome of fairness, because the amalgamation seems to obscure the individual components that would be improved upon to return to a fair state. The very small study, and a small survey, is a valid approach for a case study or phenomenographic approach, but I get the feeling that I was seeing a grounded theory argument. We do have to expose our desired processes to students if we’re going to achieve cognitive apprenticeship and there is a great deal of tension between industrial practice and key concepts, so this is a very interesting area to work in. I completely agree with the speaker that our heavy technical focus often precludes discussions of the empathic, philosophical and intangible, but I’m yet to see how this approach contributes.
The discussions mentioned are indeed important, but group reports and discussion are a built-in part of many SE process models, so I wonder how the justice theme amplifies this aspect. Again, getting students to engage in a dialogue that they do not expect to have in CS can be very challenging, but we could be discussing issues such as critical thinking and ethics, which are often equally alien and orthogonal to the technical, without forming a compound concept that potentially obscures the underlying component mechanisms.
Simon asked a very good question: you didn’t present anything that showed a problem where the students would have needed the concept of justice. Apparently, this is in the writings that are yet to be analysed. The answer to the question ended up as an unlabelled graph on the blackboard which was focused on a skill difference with more experienced peers. I still can’t see how justice ties into this. I have to go and get my hearing checked.
ITiCSE 2014, Day 3, Session 6A, “Digital Fluency”, #ITiCSE2014 #ITiCSE
Posted: June 25, 2014 Filed under: Education | Tags: ALICE, arm the princess, Bologna model, competency, competency-based assessment, computational thinking, computer science education, Duke, education, educational problem, educational research, empowering minorities, empowering women, higher education, ITiCSE, ITiCSE 2014, key competencies, learning, middle school, non-normative approaches, pattern analysis, principles of design, reflection, teaching, thinking, tools, women in computing

The first paper was "A Methodological Approach to Key Competences in Informatics", presented by Christina Dörge. The motivation for this study is moving educational standards from input-oriented approaches to output-oriented approaches – how students will use what you teach them in later life. Key competencies are important, but what are they? What are the definitions, terms and real meanings of the words "key competencies"? A certificate of a certain grade or qualification doesn't actually reflect true competency in many regards. (Bologna focuses on competencies, but what do we really mean by them?) Competencies also vary across disciplines, as skills are used differently in different areas – can we develop a non-normative approach to this?
The author discussed Qualitative Content Analysis (QCA) to look at different educational methods in the German educational system: hardware-oriented approaches, algorithm-oriented, application-oriented, user-oriented, information-oriented and, finally, system-oriented. The paradigm of teaching has shifted a lot over time (including the idea-oriented approach which is subsumed in system-oriented approaches). Looking across the development of the paradigms and trying to work out which categories developed requires a coding system over a review of textbooks in the field. If new competencies were added, then they were included in the category system and the coding started again. The resulting material could be referred to as “Possible candidates of Competencies in Informatics”, but those that are found in all of the previous approaches should be included as Competencies in Informatics. What about the key ones? Which of these are found in every part of informatics: theoretical, technical, practical and applied (under the German partitioning)? A key competency should be fundamental and ubiquitous.
The most important key competencies, by ranking, were algorithmic thinking, followed by design thinking, then analytic thinking (I must look up the subtle difference here). (The paper contains all of the details.) How can we gain competencies, especially these key ones, outside of a normative model that we have to apply to all contexts? We would like to be able to build on competencies, taking into account prior learning, so that we can build to a professional end point regardless of starting point. What do we want to teach in the universities and to what degree?
The author finished on this point and it's a good question: if we view our progression in terms of competencies, then how can we use these as building blocks to higher-level competencies? This will help us in designing prerequisites and entry and exit points for all of our educational design.
The next talk was "Weaving Computing into all Middle School Disciplines", presented by Susan Rodger from Duke. There were a lot of co-authors who were undergraduates (always good to see). The motivation for this project is that there are problems with CS in the K-12 grades. It's not taught in many schools and is definitely missing in many high schools – and not all Unis teach CS (?!?). Students don't actually know what it is (the classic CS identity problem). There are also under-represented groups (women and minorities). Why should we teach it? 21st-century skills, rewarding careers and many other useful skills – from NCWIT.org.
Schools are already content-heavy, so how do we convince people to add new courses? We can't really, so how about trying to weave it into the existing project framework? Instead of doing a poster or a PowerPoint presentation, why not provide an animation that's interactive in some way and that will involve computing? One way to achieve this is to use Alice, creating interactive stories or games and learning programming and computational concepts in a drag-and-drop code approach. Why Alice? There are many other good tools (Greenfoot, Lego, Scratch, etc) – well, it's drag-and-drop, story-based and works well for women. The introductory Alice course in 2005 started to attract more women and now the class is more than 50% women. However, many people couldn't come in because they didn't have the prerequisites, so the initiative moved out to 4th-6th grade to develop these skills earlier. Alice Virtual Worlds excited kids about computing, even at the younger ages.
The course “Adventures in Alice Programming” is aimed at grades 5-12 as Outreach, without having to use computing teachers (which would be a major restriction). There are 2-week teacher workshops where, initially, the teachers are taught Alice for a week, then the following week they develop lesson plans. There’s a one-week follow-up workshop the following summer. This initiative is funded until Summer, 2015, and has been run since 2008. There are sites: Durham, Charleston and Southern California. The teachers coming in are from a variety of disciplines.
How is this used in middle and high schools by teachers? Demonstrations, examples, interactive quizzes and making worlds for students to view. The students may be able to undertake projects, take and build quizzes, and view and answer questions about a world – and the older the student, the more they can do.
Recruitment of teachers has been interesting. Starting from mailing lists and asking the teachers who come, the advertising has spread out across other conferences. It really helps to give them education credits and hours – but if we’re going to pay people to do this, how much do we need to pay? In the first workshop, paying $500 got a lot of teachers (some of whom were interested in Alice). The next workshop, they got gas money ($50/week) and this reduced the number down to the more interested teachers.
There are a lot of curriculum materials available for free (over 90 tutorials) with getting-started material as a one-hour tutorial showing basic set-up, placing objects, camera views and so on. There are also longer tutorials over several different stories. (Editor’s note: could we get away from the Princess/Dragon motif? The Princess says “Help!” and waits there to be rescued and then says “My Sweet Prince. I am saved.” Can we please arm the Princess or save the Knight?) There are also tutorial topics on inheritance, lists and parameter usage. The presenter demonstrated a lot of different things you can do with Alice, including book reports and tying Alice animations into the real world – such as boat trips which didn’t occur.
It was weird looking at the examples, and I’m not sure if it was just because of the gender of the authors, but the kitchen example in cooking with Spanish language instruction used female characters, the Princess/Dragon had a woman in a very passive role and the adventure game example had a male character standing in the boat. It was a small sample of the materials so I’m assuming that this was just a coincidence for the time being or it reflects the gender of the creator. Hmm. Another example and this time the Punnett Squares example has a grey-haired male scientist standing there. Oh dear.
Moving on: lots of helper objects are available for teachers to use, which saves development time and is really handy if you want to get things going quickly.
Finally, on impact: some 200 teachers have attended the workshops since 2008, and they have gone on to teach 2,900 students (over 2012-2013). From Google Analytics, over 20,000 users have accessed the materials. A number of small outreach activities, Alice for an hour, have also been run across a range of schools.
The final talk in this session was "Early validation of Computational Thinking Pattern Analysis", presented by Hilarie Nickerson, from the University of Colorado at Boulder. Computational thinking is important and, in the US, there have been both scope and pedagogy discussions, as well as instructional standards. We don't have as much teacher education as we'd like. Assuming that we want the students to understand computational thinking, how can we help the teachers? Scalable Game Design integrates game and simulation design into public school curricula. The intention is to broaden participation for all kinds of schools, as after-school classes had identified a lot of differences between the groups.
What's the expectation of computational thinking? Administrators and industry want students to be able to take game knowledge and potentially use it for scientific simulation. A good game of a piece of ocean is also a predator-prey model, after all. Does it work? Well, it's spread across a wide range of areas and communities, with more than 10,000 students (and a lot of different Frogger games). Do they like it? There's a perception that programming is cognitively hard and boring (on the cognitive/affective graph ranging from easy-hard/exciting-boring). We want it to be easy and exciting. We can make it easier with syntactic and semantic support, but making it exciting requires the students to feel ownership and to be able to express their creativity. And now they're looking at the zone of proximal flow, which I've written about here. It's good to see this working in a project-first, principles-first model for these authors. (Here's that picture again)
The results? The study spanned 10,000 students, 45% girls and 55% boys (pretty good numbers!), 48% underrepresented, with some middle schools exposing 350 students per year. The motivation starts by making things achievable but challenging – starting from 2D basics and moving up to more sophisticated 3D games. For those who wish to continue: 74% boys, 64% girls and 69% of minority students want to continue. There are other aspects that can raise motivation.
What about the issue of Computing Computational Thinking? The authors have created a Computational Thinking Pattern Analysis (CTPA) instrument that can track student learning trajectories and outcomes. Guided discovery, as a pedagogy, is very effective in raising motivation for both genders, where direct instruction is far less effective for girls (and is also less effective for boys).
How do we validate this? There are several computational thinking patterns, grouped using latent semantic analysis. One of the simpler patterns for a game is the generation and absorption pair, where we add things to the game world (trucks in Frogger or fish in predator/prey) and then remove them (the truck goes off the screen, the fish gets eaten). We also need collision detection. Measuring development across these skills lets you compare a student to the tutorial and to other students. What does CTPA actually measure? The presence of code patterns that correspond to computational thinking constructs suggests student skill with computational thinking (but doesn’t prove it), and this is different from measuring learning. The graphs produced from this can be represented as a single number, which is used for validation. (See the paper for the calculation!)
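To make the generation/absorption pair concrete, here's a minimal sketch of a Frogger-style world loop in Python. This is my own illustration, not the authors' code (their students build these patterns in a visual environment like AgentSheets), so all the names and probabilities here are invented:

```python
import random

WIDTH = 10                 # the lane is WIDTH cells across
frog = {"x": 5}            # the frog sits in a fixed cell of the lane
trucks = []                # each truck is just an x-position in the lane

def step(trucks, frog):
    """One tick of the world, exercising the three patterns CTPA looks for:
    generation, absorption and collision detection."""
    # Generation: a new truck appears at the left edge with some probability.
    if random.random() < 0.3:
        trucks.append(0)
    # Movement: every truck advances one cell to the right.
    trucks = [x + 1 for x in trucks]
    # Absorption: trucks that drive off the screen are removed from the world.
    trucks = [x for x in trucks if x < WIDTH]
    # Collision detection: does any truck share the frog's cell?
    hit = any(x == frog["x"] for x in trucks)
    return trucks, hit

for tick in range(20):
    trucks, hit = step(trucks, frog)
    if hit:
        print(f"Squashed at tick {tick}!")
        break
```

The point of the instrument, as I understand it, is that the presence of code shaped like the generation, absorption and collision fragments above is what gets detected and scored.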
This has been running for two years now, with 39 students’ grades for 136 games, and the two human graders were shown to have good inter-rater consistency. Frogger was not very heavily correlated (Spearman rank) but Sokoban, Centipede and The Sims weren’t bad, and removing the design aspects of the rubrics may improve this.
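(For anyone wanting to run this kind of check on their own rubric data, the Spearman rank correlation is a single call in SciPy. The numbers below are made up for illustration; they are not the study's data.)

```python
from scipy.stats import spearmanr

# Hypothetical scores: CTPA's single-number measure for each game versus
# the average of the two human graders' rubric scores for the same games.
ctpa_scores  = [0.42, 0.57, 0.31, 0.88, 0.65, 0.49, 0.73]
human_grades = [62,   71,   55,   90,   78,   60,   80]

rho, p = spearmanr(ctpa_scores, human_grades)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```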
Was there predictive validity in the project? Did the CTPA correlate with the skill score of the final game produced? Yes, it appears to be significant, although this is early work. CTPA does appear to be capable of measuring CT patterns in code that correlate with human-assessed skill development. Future work includes refining CTPA to deal with non-orthogonal constructs (collisions that include both generative and absorptive aspects), using more information about the rules, and examining alternative calculations. The group is also working on tools for teachers, including REACT (real-time visualisations for progress assessment) and recommendations of possible skill trajectories based on students’ skill progression.
ITiCSE 2014, Day 3, Keynote, “Meeting the Future Challenges of Education and Digitization”, #ITiCSE2014 #ITiCSE @jangulliksen
Posted: June 25, 2014 Filed under: Education | Tags: advocacy, community, computer science education, digital learning, digitisation, education, educational problem, educational research, higher education, ITiCSE, ITiCSE 2014, Jan Gulliksen, learning, measurement, Professor Gulliksen, teaching, teaching approaches, thinking Leave a commentThis keynote was presented by the distinguished Professor Jan Gulliksen (@jangulliksen) of KTH. He started with two strange things. First, he asked for a volunteer and, of course, Simon put his hand up. Jan then asked Simon to act as a support department while Jan sought help with putting on a jacket. Simon was facing the other way, so he had to explain to Jan the detailed process of orientating and identifying the various parts of the jacket, in order. (Simon is an exceedingly thoughtful and methodical person, so he had a far greater degree of success than many of us would.) We were to return to this later. The second ‘strange thing’ was a video of President Obama speaking on Computer Science. Professor Gulliksen asked us how often a world leader speaks to a discipline community about the importance of their discipline. He noted that, in his own country, there was very little discussion of Computer Science and IT within the political parties, and that Chancellor Merkel, in contrast to the video, had expressed a very surprising position, describing the Internet as ‘uncharted territory‘.
Professor Gulliksen then introduced himself as the Dean of the School of Computer Science and Communication at KTH, Stockholm, with 25 years of previous experience at Uppsala. Within this area, he has more than 20 years of experience working on the introduction of user-centred systems in public organisations. He showed two pictures, over 20 years apart, which showed how little the modern workspace has changed in that time, except that the number of post-it colours has increased! He has a great deal of interest in how we can improve design for all users. Currently, he is looking at IT for mental and psychological disabilities, funded by Vinnova and PTS, which is not a widely explored area and can be of great help to homeless people. His team has been running workshops with homeless people to determine the possible impact of increased IT access, which included giving them money to come to the workshop. But they didn’t come. So the team sent railway tickets. But they still didn’t come. When the team used a mentor to talk them through getting up, getting dressed and going to the station, then they came. (Interesting reflection point for all teachers here.) It is difficult to work within the Swedish social security system because the homeless can be quite paranoid about revealing their data, and it can be hard to work with people who have no address, just a mobile number. This is, however, a place where our efforts can have great societal impact.
Professor Gulliksen asks his PhD students: what is really your objective with this research? He then gives them three options: change the world, contribute new knowledge, or get your PhD. The first time he asked this in Sweden, the student started sweating and asked if they could have a fourth option. (Yes, but your fourth is probably one of the three.) The student then said that they wanted to change the world but, on being asked what they had actually done, changed to contributing new knowledge; after some more thought and further questioning, it devolved to “I think I want my PhD”. All of these answers can be fine, but you have to actually achieve your purpose.
Our biggest impact is through the people that we produce, in terms of our contribution to the generation and dissemination of knowledge. Jan wants to know how we can be more aware of this role in society. How can we improve society through IT? This led to the Committee for Digitisation, 2012-2015: Sweden shall be the best country in the world when it comes to using the opportunities of digitisation. Sweden produced “ICT for Everyone”, a Digital Agenda for Sweden, which preceded the European initiative. There are 170 different goals to be achieved in IT policy, and fewer than a handful of these have not been met since October 2011. As a researcher, Professor Gulliksen had to come to an agreement with the minister to ensure that his academic freedom, to speak truth to power, would not be overly infringed – even though he was Norwegian. (A bit of Nordic humour here, which some of you may not get.)
The goal was that Sweden would be the best country in the world when it came to seizing these opportunities. That’s a modest goal (the speaker is a very funny man) but how do we actually quantify it? The main tasks for the commission were to develop the action plan, analyse progress in relation to the goals, show the opportunities available, administer the organisations that signed the digital agenda (Nokia, Apple and so on) and collaborate with the players to increase digitisation. The committee itself is 7 people, with an ‘expert’ appointed because you have to do this, apparently. To extend the expertise, the government has appointed the small commission, a group of children aged 8-18, to support the main commission with input and proposals showing opportunities for all ages.
The committee started with three different areas: digital inclusion and equal opportunities; school, education and digital competence; and entrepreneurship and company development. The digital agenda itself has four strategic areas in terms of user participation:
- Easy and safe to use
- Services that create some utility
- Need for infrastructure
- IT’s role for societal development.
And there are 22 mission areas under this that map onto the relevant ministries (you’ll have to look that up for yourself, I can’t type that quickly). Over the year and a half that the committee has been running, it has achieved a lot.
The government needs measurements and rankings to show relative progress, so things like the World Economic Forum’s Networked Readiness Index (which Sweden topped) are often trotted out. But in 2013, Sweden had dropped to third, with Finland and Singapore going ahead – basically, and unsurprisingly, the Straits Tiger is advancing quickly. Other measures include the ICT Development Index (IDI), where Sweden is also doing well. You can look for this on the Digital Commission’s website (which is in Swedish but translates). The first report tried to map out digital Sweden – the actions and measures carried out, the key players and the important indicators. Sweden is working a lot in the space but appears to be more passive in re-use than active in creativity, although I need to read the report on this (which is also in Swedish). (I need to learn another language, obviously.) There was an interesting quadrant graph of organisations, ranked by how active they were and how powerful their mandate was, which started a lot of interesting discussion. (This applies to academics in universities as well, I realise.) (Jag behöver lära mig ett annat språk, uppenbarligen: I need to learn another language, obviously.)
The second report was released in March this year, focusing on the school system. How can Sweden produce recommendations on how the school system will improve? If the school system isn’t working well, you are going to fall behind in the rankings. (Please pay attention, Australian Government!) There is a range of access to schools across Sweden, but access is only one thing; actual use of the resources is another. Why should we do this? (These are arguments to convince politicians.) To reduce the digital divide, because the economy needs IT-skilled labour, because digital skills are needed to be an active citizen, for increased efficiency and speed of learning, and many other points! Sweden’s students are deteriorating in the PISA survey rankings, particularly the boys: 30% of Swedish boys are not reaching basic literacy in the first 9 years of school, which is well below the OECD average. Interestingly, Swedish teachers are among the lowest in the EU when it comes to work time spent on skills development: 18% of teachers spend more than 6 days, but 9% spend none at all, which is the second worst among European countries (Malta takes out the wooden spoon).
The concrete proposals in the SOU were:
- Revised regulatory documents with a digital perspective
- Digitally based national tests in primary/secondary school
- Web-based learning in elementary and secondary schools
- Digital skilling of teachers
- Digital skilling for principals
- Clarifying the digital component of teacher education programs
- Research, method development and impact measurement
- Innovation projects for the future of learning
Universities are also falling behind so this is an area of concern.
Professor Gulliksen also spoke about the digital champions of the EU (all European countries had one except, until recently, Germany, possibly reflecting the Chancellor’s perspective), where digital champion is not an award but a job: a high-profile, dynamic and energetic individual responsible for getting everyone on-line and improving digital skills. You need to generate new ideas to go forward, for your own country, rather than just copying things that might not fit. (Hello, Hofstede!)
The European Digital Champions work for digital inclusion and help everyone, although we all have responsibility. This provides strategic direction for government but reinforces that the ICT competence required for tomorrow’s working life has to be put in place today. He asked the audience who their European digital champions were and, apart from the Swedes, no-one knew. The Danish champion (Lars Frelle-Petersen) has worked with the tax office to move everyone on-line by making it the only way to do your tax: “The only way to conduct public services should be on the Internet”. The digital champion of Finland (Linda Liukas, from Rails Girls) wants everyone to have three mandatory languages: English, Chinese and JavaScript. (Groans and chuckles from the audience at the language choice.) The digital champion of Bulgaria (Gergana Passy) wants Sofia to be the first free-WiFi capital of Europe. Romania’s champion (Paul André Baran) is leading in the library sector and wants libraries to rethink their role in the age of ICT. Ireland’s champion (Sir David Puttnam) believes that we have to move beyond a triage mentality in education to increase inclusion.
In Sweden, 89% of the population is on-line, and it has plateaued there. Why? Of those who are not on the Internet, most are more than 76 years old, so this is most likely a self-correcting problem. (50% of two-year-olds are on the Internet in Sweden!) Of the 1.1 million Swedes not online, 77% are not interested and 18% think it’s too complicated.
Jan wanted to leave us with two messages. The first is that we need to increase the number of ICT practitioners. Demand is growing at 3% a year and the supply of trained ICT graduates is not keeping pace. If the EU wants to stay competitive, it either has to grow them (education) or import them. (Side note: The Grand Coalition for Digital Jobs)
The second thought is the development of digital competence and the improvement of digital skills among ICT users. 19% of the workforce is ICT-intensive and 90% of jobs require some IT skills, but 53% of the workforce are not confident enough in their IT skills to seek another job in that sphere. We have to build knowledge and self-confidence. Higher education institutions have to look beyond the basic degree to share the resources and guidelines that grow digital competence across the whole community, and to push away from the focus on exams and graduation to concentrate on learning – which is anathema to the usual academic machine. We need to work on new educational and business models to produce mature, competent and self-confident people with knowledge, and to make industry realise that this is actually what it wants.
Professor Gulliksen believes that we need to recruit more ICT experience by bringing experts into the universities to broaden academia and pedagogy with industry experience. We also really, really need to balance the gender differences, which show the same weird cultural trends in terms of self-deception rather than task description.
Overall, a lot of very interesting ideas – thank you, Professor Gulliksen!
Arnold Pears, Uppsala, challenged one of the points on engaging with, and training for, industry in that we prepare our students for society first, and industrial needs are secondary. Jan agreed with this distinction. (This followed on from a discussion that Arnold and I were having regarding the uncomfortable shoulder rubbing of education and vocational training in modern education. The reason I come to conferences is to have fascinating discussions with smart people in the breaks between interesting talks.)
The jacket came back up again at the end. When discussing Computer Science, Jan feels the need to use metaphors – as do we all. Basically, it’s easy to fall into the trap of thinking you can explain something as if it were simple when you’re actually drawing on a very rich learned context that frames the knowledge. CS people can struggle with explaining things, especially to very new students, because we build a lot of things up to reach the “operational” level of CS knowledge, and everything, from the error messages presented when a program doesn’t work to the efficiency of long-running programs, depends upon understanding this rich context. Whether the threshold here is a threshold concept (Meyer and Land), a neo-Piagetian stage, Learning Edge Momentum or a Bloom-related problem doesn’t actually matter – there’s a minimum amount of well-accepted context required for certain metaphors to work, or you’re explaining to someone how to put a jacket on with your eyes closed. 🙂
One of the final questions raised the issue of computing as a chore, rather than a joy. Professor Gulliksen noted that there are only two groups of people who are labelled as users, drug users and computer users, and that the systematic application of computing as a scholastic subject often requires students to lock up their more powerful computers (their mobile phones) in order to use locked-down, less powerful serried banks of machines (a consequence of group purchasing and standard environments). (Here’s an interesting blog post on a paper about why we should let students use their phones in class.)
ITiCSE 2014, Day 2, Session4A, Software Engineering, #ITiCSE2014 #ITiCSE
Posted: June 24, 2014 Filed under: Education | Tags: collaboration, computer science education, education, educational problem, educational research, higher education, industry, ITiCSE, ITiCSE 2014, learning, mentoring, pair programming, pedagogy, principles of design, project based learning, small group learning, student perspective, studio based learning, teaching, teaching approaches, thinking, time, time factors, time management, universal principles of design, work-life balance Leave a comment
- People: learning community (teachers and learners)
- Process: creative, reflective
  - interactions
  - physical space
  - collaboration
- Product: designed object – a single focus for the process
- Intra-Group Relations: Group 1 had lots of strong characters and appeared to be competent and performing well, with students in the group learning about Scrum from each other. Group 2 was more introverted, with no dominant or strong characters, but learned as a group together. Both groups ended up being successful despite the different paths; collaborative learning occurred well inside each group, although differently.
- Inter-Group Relations: There was good collaborative learning across and between the groups after the middle of the semester, whereas initially the groups had been isolated (and one group was strongly focused on winning a prize for the best project). The groups learned good practices from observing each other.