ITiCSE 2014, Day 3, Session 6A, “Digital Fluency”, #ITiCSE2014 #ITiCSE
Posted: June 25, 2014 Filed under: Education | Tags: ALICE, arm the princess, Bologna model, competency, competency-based assessment, computational thinking, computer science education, Duke, education, educational problem, educational research, empowering minorities, empowering women, higher education, ITiCSE, ITiCSE 2014, key competencies, learning, middle school, non-normative approaches, pattern analysis, principles of design, reflection, teaching, thinking, tools, women in computing

The first paper was “A Methodological Approach to Key Competences in Informatics”, presented by Christina Dörge. The motivation for this study is moving educational standards from input-oriented approaches to output-oriented approaches – how students will use what you teach them in later life. Key competencies are important, but what are they? What are the definitions, terms and real meaning of the words “key competencies”? A certificate of a certain grade or qualification doesn’t actually reflect true competency in many regards. (Bologna focuses on competencies, but what do they really mean?) Competencies also vary across different disciplines as skills are used differently in different areas – can we develop a non-normative approach to this?
The author discussed Qualitative Content Analysis (QCA) to look at different educational methods in the German educational system: hardware-oriented approaches, algorithm-oriented, application-oriented, user-oriented, information-oriented and, finally, system-oriented. The paradigm of teaching has shifted a lot over time (including the idea-oriented approach which is subsumed in system-oriented approaches). Looking across the development of the paradigms and trying to work out which categories developed requires a coding system over a review of textbooks in the field. If new competencies were added, then they were included in the category system and the coding started again. The resulting material could be referred to as “Possible candidates of Competencies in Informatics”, but those that are found in all of the previous approaches should be included as Competencies in Informatics. What about the key ones? Which of these are found in every part of informatics: theoretical, technical, practical and applied (under the German partitioning)? A key competency should be fundamental and ubiquitous.
The most important key competency, by ranking, was algorithmic thinking, followed by design thinking, then analytic thinking (I must look up the subtle difference here). (The paper contains all of the details.) How can we gain competencies, especially these key ones, outside of a normative model that we have to apply to all contexts? We would like to be able to build on competencies, taking prior learning into account, so that we can build to a professional end point regardless of starting point. What do we want to teach in the universities and to what degree?
The author finished on this point and it’s a good question: if we view our progression in terms of competency, then how can we use these as building blocks to higher-level competencies? This will help us in designing prerequisites and entry and exit points for all of our educational design.
The next talk was “Weaving Computing into all Middle School Disciplines”, presented by Susan Rodger from Duke. There were a lot of co-authors who were undergraduates (always good to see). The motivation for this project was that there are problems with CS in the K-12 grades. It’s not taught in many schools and definitely missing in many high schools – not all Unis teach CS (?!?). Students don’t actually know what it is (the classic CS identity problem). There are also under-represented groups (women and minorities). Why should we teach it? 21st-century skills, rewarding careers and many useful skills – from NCWIT.org.
Schools are already content-heavy, so how do we convince people to add new courses? We can’t really, so how about trying to weave it into the existing project framework? Instead of doing a poster or a PowerPoint presentation, why not provide an animation that’s interactive in some way and that will involve computing? One way to achieve this is to use Alice, creating interactive stories or games, learning programming and computation concepts in a drag-and-drop code approach. Why Alice? There are many other good tools (Greenfoot, Lego, Scratch, etc) – well, it’s drag-and-drop, story-based and works well for women. The introductory Alice course in 2005 started to attract more women and now the class is more than 50% women. However, many people couldn’t come in because they didn’t have the prerequisites, so the initiative moved out to 4th-6th grade to develop these skills earlier. Alice Virtual Worlds excited kids about computing, even at the younger ages.
The course “Adventures in Alice Programming” is aimed at grades 5-12 as Outreach, without having to use computing teachers (which would be a major restriction). There are 2-week teacher workshops where, initially, the teachers are taught Alice for a week, then the following week they develop lesson plans. There’s a one-week follow-up workshop the following summer. This initiative is funded until Summer, 2015, and has been run since 2008. There are sites: Durham, Charleston and Southern California. The teachers coming in are from a variety of disciplines.
How is this used in middle and high schools by teachers? Demonstrations, examples, interactive quizzes and making worlds for students to view. The students may be able to undertake projects, take and build quizzes, and view and answer questions about a world – and the older the student, the more they can do.
Recruitment of teachers has been interesting. Starting from mailing lists and asking the teachers who come, the advertising has spread out across other conferences. It really helps to give them education credits and hours – but if we’re going to pay people to do this, how much do we need to pay? In the first workshop, paying $500 got a lot of teachers (some of whom were interested in Alice). The next workshop, they got gas money ($50/week) and this reduced the number down to the more interested teachers.
There are a lot of curriculum materials available for free (over 90 tutorials) with getting-started material as a one-hour tutorial showing basic set-up, placing objects, camera views and so on. There are also longer tutorials over several different stories. (Editor’s note: could we get away from the Princess/Dragon motif? The Princess says “Help!” and waits there to be rescued and then says “My Sweet Prince. I am saved.” Can we please arm the Princess or save the Knight?) There are also tutorial topics on inheritance, lists and parameter usage. The presenter demonstrated a lot of different things you can do with Alice, including book reports and tying Alice animations into the real world – such as boat trips which didn’t occur.
It was weird looking at the examples, and I’m not sure if it was just because of the gender of the authors, but the kitchen example in cooking with Spanish language instruction used female characters, the Princess/Dragon had a woman in a very passive role and the adventure game example had a male character standing in the boat. It was a small sample of the materials so I’m assuming that this was just a coincidence for the time being or it reflects the gender of the creator. Hmm. Another example and this time the Punnett Squares example has a grey-haired male scientist standing there. Oh dear.
Moving on: lots of helper objects are available for teachers to use, saving development time – really handy if you want to get things going quickly.
Finally, on discussing the impact: some 200 teachers have attended the workshops since 2008, who have then gone on to teach 2,900 students (over 2012-2013). From Google Analytics, over 20,000 users have accessed the materials. Also, a number of small outreach activities, Alice for an hour, have been run across a range of schools.
The final talk in this session was “Early validation of Computational Thinking Pattern Analysis”, presented by Hilarie Nickerson, from University of Colorado at Boulder. Computational thinking is important and, in the US, there have been both scope and pedagogy discussions, as well as instructional standards. We don’t have as much teacher education as we’d like. Assuming that we want the students to understand it, how can we help the teachers? Scalable Game Design integrates game and simulation design into public school curricula. The intention is to broaden participation for all kinds of schools, as after-school classes had identified a lot of differences in the groups.
What’s the expectation of computational thinking? Administrators and industry want us to be able to take game knowledge and potentially use it for scientific simulation. A good game of a piece of ocean is also a predator-prey model, after all. Does it work? Well, it’s spread across a wide range of areas and communities, with more than 10,000 students (and a lot of different Frogger games). Do they like it? There’s a perception that programming is cognitively hard and boring (on the cognitive/affective graph ranging from easy-hard/exciting-boring). We want it to be easy and exciting. We can make it easier with syntactic support and semantic support, but making it exciting requires the students to feel ownership and to be able to express their creativity. And now they’re looking at the zone of proximal flow, which I’ve written about here. It’s good to see this working in a project-first, rather than principles-first, model for these authors. (Here’s that picture again.)
The results? The study spanned 10,000 students – 45% girls and 55% boys (pretty good numbers!), 48% under-represented – with some middle schools exposing 350 students per year. The motivation starts by making things achievable but challenging – starting from 2D basics and moving up to more sophisticated 3D games. As for those who wish to continue: 74% of boys, 64% of girls and 69% of minority students want to continue. There are other aspects that can raise motivation.
What about the issue of computing Computational Thinking? The authors have created a Computational Thinking Pattern Analysis (CTPA) instrument that can track student learning trajectories and outcomes. Guided discovery, as a pedagogy, is very effective in raising motivation for both genders, whereas direct instruction is far less effective for girls (and is also less effective for boys).
How do we validate this? There are several computational thinking patterns grouped using latent semantic analysis. One of the simpler patterns for a game is the pair generation and absorption, where we add things to the game world (trucks in Frogger or fish in predator/prey) and then remove them (the truck gets off the screen/the fish gets eaten). We also need collision detection. Measuring development across these skills allows you to measure a student in comparison to the tutorial and to other students. What does CTPA actually measure? The presence of code patterns that correspond to computational thinking constructs suggests student skill with computational thinking (but doesn’t prove it) and is different from measuring learning. The graphs produced from this can be represented as a single number, which is used for validation. (See the paper for the calculation!)
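The paper has the exact calculation, but as a purely illustrative sketch: the pattern names below follow the talk, while the scores and the use of a Euclidean norm to collapse the pattern vector to one number are my assumptions, not the authors’ actual instrument.

```python
import math

# Hypothetical per-pattern similarity scores for one student's game,
# e.g. produced by latent semantic analysis against canonical pattern
# code. The pattern names follow the talk (generation, absorption,
# collision detection); the values are invented for illustration.
pattern_scores = {
    "generation": 0.8,   # trucks/fish being added to the game world
    "absorption": 0.6,   # truck leaves the screen / fish gets eaten
    "collision":  0.7,   # collision detection between agents
}

def ctpa_magnitude(scores):
    """Collapse a computational-thinking pattern vector into a single
    number (here, the Euclidean norm) so trajectories can be compared
    across students. The real CTPA calculation is in the paper."""
    return math.sqrt(sum(v * v for v in scores.values()))

print(round(ctpa_magnitude(pattern_scores), 3))  # 1.221
```

The point of the single number is comparability: two students' pattern vectors can be tracked over time on one axis, which is what makes the validation against human grades below possible.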
This has been running for two years now, with 39 student grades for 136 games, and the two human graders were shown to have good inter-rater consistency. Frogger was not very heavily correlated (Spearman rank) but Sokoban, Centipede and The Sims weren’t bad, and removing the design aspects of the rubrics may improve this.
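Spearman rank correlation, used above to compare CTPA output with human grades, is easy to compute by hand. A minimal pure-Python sketch, using made-up scores rather than the study’s data, and assuming no tied values:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Uses the simple formula rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
    which assumes there are no tied values."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical CTPA scores vs. human grades for five games.
ctpa_scores = [2.1, 3.4, 1.2, 4.0, 2.8]
human_grades = [60, 90, 55, 75, 72]
print(spearman_rho(ctpa_scores, human_grades))
```

Because Spearman only looks at rank order, it suits this comparison: CTPA scores and rubric grades are on different scales, and all we need is whether they order the students the same way.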
Was there predictive validity in the project? Did the CTPA correlate with the skill score of the final game produced? Yes, it appears to be significant, although this is early work. CTPA does appear to be capable of measuring CT patterns in code that correlate with human skill development. Future work on this includes the refinement of CTPA by dealing with the issue of non-orthogonal constructs (collisions that include generative and absorptive aspects), using more information about the rules and examining alternative calculations. The group are also working on tools for teachers, including REACT (real-time visualisations for progress assessment) and recommending possible skill trajectories based on students’ skill progression.
ITiCSE 2014, Day 3, Keynote, “Meeting the Future Challenges of Education and Digitization”, #ITiCSE2014 #ITiCSE @jangulliksen
Posted: June 25, 2014 Filed under: Education | Tags: advocacy, community, computer science education, digital learning, digitisation, education, educational problem, educational research, higher education, ITiCSE, ITiCSE 2014, Jan Gulliksen, learning, measurement, Professor Gulliksen, teaching, teaching approaches, thinking

This keynote was presented by the distinguished Professor Jan Gulliksen (@jangulliksen) of KTH. He started with two strange things. He asked for a volunteer and, of course, Simon put his hand up. Jan then asked Simon to act as a support department to seek help with putting on a jacket. Simon was facing the other way so had to try and explain to Jan the detailed process of orientating and identifying the various aspects of the jacket in order. (Simon is an exceedingly thoughtful and methodical person so he had a far greater degree of success than many of us would.) We were going to return to this. The second ‘strange thing’ was a video of President Obama speaking on Computer Science. Professor Gulliksen asked us how often a world leader would speak to a discipline community about the importance of their discipline. He noted that, in his own country, there was very little discussion in the political parties on Computer Science and IT. He noted that Chancellor Merkel had expressed a very surprising position, in response to the video, describing the Internet as ‘uncharted territory’.
Professor Gulliksen then introduced himself as the Dean of the School of Computer Science and Communication at KTH, Stockholm, with 25 years of previous experience at Uppsala. Within this area, he has more than 20 years of experience working with the introduction of user-centred systems in public organisations. He showed two pictures, over 20 years apart, which showed how little the modern workspace has changed in that time, except that the number of post-it colours has increased! He has a great deal of interest in how we can improve design for all users. Currently, he is looking at IT for mental and psychological disabilities, funded by Vinnova and PTS, which is not a widely explored area and can be of great help to homeless people. His team have been running workshops with these people to determine the possible impact of increased IT access – which included giving them money to come to the workshop. But they didn’t come. So they sent railway tickets. But they still didn’t come. But when they used a mentor to talk them through getting up, getting dressed, going to the station – then they came. (Interesting reflection point for all teachers here.) It’s difficult to work within the Swedish social security system because the homeless can be quite paranoid about revealing their data, and it can be hard to work with people who have no address, just a mobile number. This is, however, a place where our efforts can have great societal impact.
Professor Gulliksen asks his PhD students: what is really your objective with this research? And he then gives them three options: change the world, contribute new knowledge, or get your PhD. The first time he asked this in Sweden, the student started sweating and asked if they could have a fourth option. (Yes, but your fourth is probably one of the three.) The student then said that they wanted to change the world but, on thinking about it (what have you done?), wanted to change to contributing new knowledge, then thought about it some more (OK, but what have you done?), until after further questioning it devolved to “I think I want my PhD”. All of these answers can be fine but you have to actually achieve your purpose.
Our biggest impact is on the people that we produce, in terms of our contribution to the generation and dissemination of knowledge. Jan wants to know how we can be more aware of this role in society. How can we improve society through IT? This led to the committee for Digitisation, 2012-2015: Sweden shall be the best country in the world when it comes to using the opportunities for digitisation. Sweden produced “ICT for Everyone”, a Digital Agenda for Sweden, which preceded the European initiative. There are 170 different things to be achieved with IT politics but less than a handful of these have not been met since October, 2011. As a researcher, Professor Gulliksen had to come to an agreement with the minister to ensure that his academic freedom, to speak truth to power, would not be overly infringed – even though he was Norwegian. (Bit of Nordic humour here, which some of you may not get.)
The goal was that Sweden would be the best country in the world when it came to seizing these opportunities. That’s a modest goal (the speaker is a very funny man) but how do we actually quantify this? The main tasks for the commission were to develop the action plan, analyse progress in relation to goals, show the opportunities available, administer the organisations that signed the digital agenda (Nokia, Apple and so on) and collaborate with the players to increase digitisation. The committee itself is 7 people, with an ‘expert’ appointed because you have to do this, apparently. To extend the expertise, the government has appointed the small commission, a group of children aged 8-18, to support the main commission with input and proposals showing opportunities for all ages.
The committee started with three different areas: digital inclusion and equal opportunities; school, education and digital competence; and entrepreneurship and company development. The digital agenda itself has four strategic areas in terms of user participation:
- Easy and safe to use
- Services that create some utility
- Need for infrastructure
- IT’s role for societal development.
And there are 22 areas of mission under this that map onto the relevant ministries (you’ll have to look that up for yourself, I can’t type that quickly.) Over the year and a half that the committee has been running, they have achieved a lot.
The government needs measurements and rankings to show relative progress, so things like the World Economic Forum’s Networked Readiness Index (which Sweden topped) are often trotted out. But in 2013, Sweden had dropped to third, with Finland and Singapore going ahead – basically, the Straits Tiger is advancing quickly, unsurprisingly. Other measures include the ICT Development Index (IDI), where Sweden is also doing well. You can look for this on the Digital Commission’s website (which is in Swedish but translates). The first report has tried to map out the digital Sweden – actions and measures carried out, key players and important indicators. Sweden is working a lot in the space but appears to be more passive in re-use than active in creativity, though I need to read the report on this (which is also in Swedish). (I need to learn another language, obviously.) There was an interesting quadrant graph of organisations ranked by how active they were and how powerful their mandate was, which started a lot of interesting discussion. (This applies to academics in Unis as well, I realise.)
The second report was released in March this year, focusing on the school system. How can Sweden produce recommendations on how the school system will improve? If the school system isn’t working well, you are going to fall behind in the rankings. (Please pay attention, Australian Government!) There’s a range of access to schools across Sweden, but access is only one thing; actual use of the resources is another. Why should we do this? (Arguments to convince politicians.) Reducing the digital divide, the economy needs IT-skilled labour, digital skills are needed to be an active citizen, increased efficiency and speed of learning, and many other points! Sweden’s students are deteriorating in the PISA-survey rankings, particularly the boys: 30% of Swedish boys are not reaching basic literacy in the first 9 years of school, which is well below the OECD average. Interestingly, Swedish teachers are among the lowest in the EU when it comes to work time spent on skills development: 18% of teachers spend more than 6 days, but 9% spend none at all – the second worst among European countries (Malta takes out the wooden spoon).
The concrete proposals in the SOU were:
- Revised regulatory documents with a digital perspective
- Digitally based national tests in primary/secondary
- web-based learning in elementary and secondary schools
- digital skilling of teachers
- digital skilling for principals
- clarifying the digital component of teacher education programs
- research, method development and impact measurement
- innovation projects for the future of learning
Universities are also falling behind so this is an area of concern.
Professor Gulliksen also spoke about the digital champions of the EU (all European countries had one except Germany, until recently, possibly reflecting the Chancellor’s perspective) where digital champion is not an award, it’s a job: a high profile, dynamic and energetic individual responsible for getting everyone on-line and improving digital skills. You need to generate new ideas to go forward, for your country, rather than just copying things that might not fit. (Hello, Hofstede!)
The European Digital Champions work for digital inclusion and help everyone, although we all have responsibility. This provides strategic direction for government but reinforces that the ICT competence required for tomorrow’s work life has to be put in place today. He asked the audience who their European digital champions were and, apart from Sweden, no-one knew. The Danish champion (Lars Frelle-Petersen) has worked with the tax office to force everyone on-line because it’s the only way to do your tax! “The only way to conduct public services should be on the Internet” The digital champion of Finland (Linda Liukas, from Rails girls) wants everyone to have three mandatory languages: English, Chinese and JavaScript. (Groans and chuckles from the audience for the language choice.) The digital champion of Bulgaria (Gergeana Passy) wants Sofia to be the first free WiFi capital of Europe. Romania’s champion (Paul André Baran) is leading the library and wants libraries to rethink their role in the age of ICT. Ireland’s champion (Sir David Puttnam) believes that we have to move beyond triage mentality in education to increase inclusion.
In Sweden, 89% of the population is on-line and it’s plateaued at that. Why? Of those that are not on the Internet, most are more than 76 years old. This is a self-correcting problem, most likely. (50% of two-year-olds are on the Internet in Sweden!) Of the 1.1 million Swedes not online, 77% are not interested and 18% think it’s too complicated.
Jan wanted to leave us with two messages. The first is that we need to increase the number of ICT practitioners. Demand is growing at 3% a year and supply of trained ICT graduates is not keeping pace. If the EU wants to stay competitive, it either has to grow them (education) or import them. (Side note: The Grand Coalition for Digital Jobs)
The second thought is the development of digital competence and improvement of digital skills among ICT users. 19% of the work force is ICT intensive, 90% of jobs require some IT skills, but 53% of the workforce are not confident enough in their IT skills to seek another job in that sphere. We have to build knowledge and self-confidence. Higher Ed institutions have to look beyond the basic degree to share the resources and guidelines to grow digital competence across the whole community. Push away from the focus on exams and graduation to concentrate on learning – which is anathema to the usual academic machine. We need to work on new educational and business models to produce mature, competent and self-confident people with knowledge, and make industry realise that this is actually what they want.
Professor Gulliksen believes that we need to recruit more ICT experience by bringing experts in to the Universities to broaden academia and pedagogy with industry experience. We also really, really need to balance the gender differences which show the same weird cultural trends in terms of self-deception rather than task description.
Overall, a lot of very interesting ideas – thank you, Professor Gulliksen!
Arnold Pears, Uppsala, challenged one of the points on engaging with, and training for, industry in that we prepare our students for society first, and industrial needs are secondary. Jan agreed with this distinction. (This followed on from a discussion that Arnold and I were having regarding the uncomfortable shoulder rubbing of education and vocational training in modern education. The reason I come to conferences is to have fascinating discussions with smart people in the breaks between interesting talks.)
The jacket came back up again at the end. When discussing Computer Science, Jan feels the need to use metaphors – as do we all. Basically, it’s easy to fall into the trap of thinking you can explain something as being simple when you’re drawing down on a very rich learned context for framing the knowledge. CS people can struggle with explaining things, especially to very new students, because we build a lot of things up to reach the “operational” level of CS knowledge, and everything, from the error messages presented when a program doesn’t work to the efficiency of long-running programs, depends upon understanding this rich context. Whether the threshold here is a threshold concept (Meyer and Land), a neo-Piagetian, Learning Edge Momentum or Bloom-related problem doesn’t actually matter – there’s a minimum amount of well-accepted context required for certain metaphors to work, or you’re explaining to someone how to put a jacket on with your eyes closed. 🙂
One of the final questions raised the issue of computing as a chore, rather than a joy. Professor Gulliksen noted that there are only two groups of people who are labelled as users, drug users and computer users, and the systematic application of computing as a scholastic subject often requires students to lock up their more powerful computers (their mobile phones) in order to use locked-down, less powerful serried banks of computers (based on group purchasing and standard environments). (Here’s an interesting blog on a paper on why we should let students use their phones in classes.)
ITiCSE 2014, Day 2, Session4A, Software Engineering, #ITiCSE2014 #ITiCSE
Posted: June 24, 2014 Filed under: Education | Tags: collaboration, computer science education, education, educational problem, educational research, higher education, industry, ITiCSE, ITiCSE 2014, learning, mentoring, pair programming, pedagogy, principles of design, project based learning, small group learning, student perspective, studio based learning, teaching, teaching approaches, thinking, time, time factors, time management, universal principles of design, work-life balance

- People: a learning community of teachers and learners
- Process: creative, reflective – interactions, physical space, collaboration
- Product: a designed object – a single focus for the process
- Intra-Group Relations: Group 1 has lots of strong characters and appeared to be competent and performing well, with students in group learning about Scrum from each other. Group 2 was more introverted, with no dominant or strong characters, but learned as a group together. Both groups ended up being successful despite the different paths. Collaborative learning inside the group occurred well, although differently.
- Inter-Group Relations: There was good collaborative learning across and between groups after the middle of the semester, where initially the groups were isolated (and one group was strongly focused on winning a prize for best project). Groups learned good practices from observing each other.
ITiCSE 2014, Session 3C: Gender and Diversity, #ITiCSE2014 #ITiCSE @patitsel
Posted: June 24, 2014 Filed under: Education | Tags: advocacy, authenticity, community, computer science, design, education, educational problem, educational research, Elizabeth Patitsas, equality, ethics, gender roles, higher education, ITiCSE, ITiCSE 2014, learning, mentoring, sexism, sexism in computer science, students, teaching approaches, thinking

This session was dedicated to the very important issues of gender and diversity. The opening talk in this session was “A Historical Examination of the Social Factors Affecting Female Participation in Computing”, presented by Elizabeth Patitsas (@patitsel). This paper was a literature review of the history of the social factors in computing, from the old professional association of the word “computer” with female arithmeticians to today’s very male computing culture. The review spanned 73 papers, 5 books, 2 PhD theses and a Computing Educators Oral History project. The mix of sources was pretty diverse. The two big caveats were that it only looked at North America (which means that the sources tend to focus on Research Intensive universities and white people) and that this is a big picture talk, looking at social forces rather than individual experiences. This means that, of course, individuals may have had different experiences.
The story begins in the 19th Century, when computer was a job and this was someone who did computations, for scientists, labs, or for government. Even after first wave feminism, female education wasn’t universally available and the women in education tended to be women of privilege. After the end of the 19th century, women started to enter traditional universities to attempt to study PhDs (although often receiving a Bachelors for this work) but had few job opportunities on graduation, except teaching or being a computer. Whatever work was undertaken was inherently short-term as women were expected to leave the work force on marriage, to focus on motherhood.
During the early 20th Century, quantitative work was seen to be feminine and qualitative work required the rigour of a man – things have changed in perceptions, haven’t they! The women’s work was grunt work: calculating, microscopy. Then there’s men’s work: designing and analysing. The Wars of the 20th Century changed this by removing men, with women stepping into the roles of men. Notably, women were stereotyped as being better coders in this role because of their computer background. Coding was clerical, performed by a woman under the direction of a male supervisor. This became male-typed over time: as programming became more developed over the 50s and 60s, the perception of it as a dark art started to form a culture of asociality. Random hiring processes started to hurt female participation because, if you are hiring anyone, then (quoting the speaker) if you could hire a man, why hire a woman? (Sound of grinding teeth from across the auditorium as we’re all being reminded of stupid thinking, presented very well for our examination by Elizabeth.)
CS itself started being taught elsewhere but became its own school discipline in the 60s and 70s, with enrolment and graduation of women matching that of physics very closely. The development of the PC and its adoption in the 80s changed CS enrolments, and CS1 became a weeder course to keep the ‘under-qualified’ from going on to further studies in Computer Science. This then led to fewer non-traditional CS students, especially women, as simple changes like requiring mathematics immediately restricted people without full access to high-quality education at school level.
In the 90s, we all went mad and developed a hacker culture based around gamer culture, which we already know has had a strongly negative impact on female participation – let’s face it, you don’t want to be considered part of a club that you don’t like and that goes to some effort to say it doesn’t welcome you. This led to some serious organisation of women’s groups in CS: the Anita Borg Institute, CRA-W and the Grace Hopper Celebration.
Enrolments kept cycling. We saw an enrolment boom and bust (including a greater percentage of women) that matched the dot-com bubble. At the peak, female enrolment got as high as 30% and female faculty numbers also increased. More women in academia corresponded to more investigation of the representation of women in Computer Science. It took quite a long time to get serious discussions and evidence identifying how systematic the under-representation is.
Over these different decades, women had very different experiences. The first generation had a perception that they had to give up family, be tough cookies, and had a pretty horrible experience. The second generation of STEM, in the 80s/90s, had female classmates and wanted to be in science AND to have families. However, first-generation advisers were often very harsh on their second-generation mentees, as their experiences were so dissimilar. The second generation in CS doesn’t neatly match that of science and biology, due to the enrolment cycles, and the negative nerd perception is far, far stronger for CS than for other disciplines.
Now to the third generation, starting in the 00s, outperforming their male peers in many cases and entering a University with female role models. They also share household duties with their partners, even when both are working and family are involved, which is a pretty radical change in the right direction.
If you’re running a mentoring program for incoming women, their experience may be very, very different from that of the staff you have mentoring them. Finally, learning from history is essential. We are seeing more students coming in than, for a number of reasons, we may be able to teach. How will we handle increasing enrolments without putting on restrictions that disproportionately hurt our under-represented groups? We have to accept that most of our restrictions don’t actually apply uniformly and that this cannot be allowed to continue. It’s wrong to impose enrolment restrictions at a greater expense to one group when there’s no good reason to burden one group over another.
One of the things mentioned is that if you ask people to do something because they are from group X, and make this clear, then they are less likely to get involved. Important note: don’t ask women to do something because they’re women, even if your intention is to address under-representation.
The second paper, “Cultural Appropriation of Computational Thinking Acquisition Research: Seeding Fields of Diversity”, was presented by Martha Serra, who is from Brazil – good luck to them in the World Cup tonight! Brazil adapted scalable game design to local educational needs, with the development of a web-based system, “PoliFacets”, seeding the reflection of IT and Educational researchers.
Brazil is the B in BRICS, with nearly 200 million people, and is the 5th largest country in the world. Bigger than Australia! (But we try harder.) Brazil is very regionally diverse: rain forest, wetlands, drought, poverty, megacities, industry, agriculture – and, unsurprisingly, it’s very hard to deal with such diversity. 80% of the youth population fails to complete basic education. Only 26% of the adult population reaches full functional literacy. (My jaw just dropped.)
Scalable Game Design (SGD) is a program from the University of Colorado Boulder to motivate all students in Computer Science through game design. The approach uses AgentSheets and AgentCubes as visual programming environments. (The image shown was of a very visual programming language reminiscent of Scratch – not surprising, as it is accepted that Scratch picked up some characteristics from AgentSheets.)
The SGD program started as an after-school program in 2010 with a public middle school, using a Geography teacher as the program leader. In the following year, with the same school, a 12-week program ran with a Biology teacher in charge. Some of the students who had done it before had, unfortunately, forgotten things by the next year. The next year, a workshop for teachers was introduced and the PoliFacets site. The next year introduced more schools, with the first school now considered autonomous, and the teacher workshops were continued. Overall, a very positive development of sustainable change.
Learners need stimulation but teachers need training if we’re going to introduce technology – very similar to what we learned in our experience with digital technologies.
The PoliFacets system is a live-documentation web-based system used to assist with the process. A live demo was not available, as the Brazilian corner of the internet seems to be full of football. It’s always interesting to look at a system that was developed in a different era – it makes you aware of how much refactoring goes into the IDEs of modern systems to stop them looking like refugees from a previous decade. (Perhaps the less said about the “Mexican Frogger” game the better…)
The final talk (for both this session and the day) was “Apps for Social Justice: Motivating Computer Science Learning with Design and Real-World Problem Solving”, presented by Sarah Van Wart. Starting with motivation, tech has diversity issues, with differential access and exposure to CS across race and gender lines. Tech industry has similar problems with recruiting and retaining more diverse candidates but there are also some really large structural issues that shadow the whole issue.
Structurally, white families have 18-20 times the wealth of Latino and African-American families, while the jail population is skewed the opposite way. Schools start with the composition of the community and are supposed to solve these distribution issues, but instead they continue to reflect the composition that they inherited. US schools are highly tracked: White and Asian students tend to track into Advanced Placement, while Black and Latino students track into different (and possibly remedial) programs.
Some people are categorically under-represented and this means that certain perspectives are being categorically excluded – this is to our detriment.
The first aspect of the theoretical perspective is Conceptions of Equity, looking at Jaime Escalante and his work helping students to do better at the AP Calculus exam. His idea of equity was access: access to a high-value test that could facilitate college access and thus more highly paid careers. The next aspect was Funds of Knowledge (González et al.), where focusing on a white context devalues the knowledge of other communities and entrenches one community’s privilege. The third part, Relational Equity (Jo Boaler), reduced streaming and tracking, focusing on group work where each student was responsible for every student’s success. Finally, Rico Gutstein takes a socio-political approach with Social Justice Pedagogy, providing authentic learning frameworks and using statistics to show up the problems.
The next parts of the theoretical perspective were Computer Science Education and the Learning Sciences (a socio-cultural perspective on learning: who you are and what it means to be ‘smart’).
In terms of learning science, Nasir and Hand, 2006, discussed Practice-linked Identities, with access to the domain (students know what CS people do), integral roles (there are many ways to contribute to a CS project) and self-expression and feeling competent (students can bring themselves to their CS practice).
The authors produced a short course for a small group of students to develop a small application. The outcome was BAYP (Bay Area Youth Programme), an App Inventor application that queried a remote database to answer user queries on local after-school program services.
How do we understand this in terms of an equity intervention? Let’s go back to Nasir and Hand.
- Access to the domain: Design and data used together is part of what CS people do, bridging students’ concepts and providing an intuitive way of connecting design to the world. When we have data, we can get categories, then schemas and so on. (This matters to CS people, if you’re not one. 🙂 )
- Integral Roles: Students got to see the importance of design, sketching things out, planning, coding, and seeing a segue from non-technical approaches to technical ones. However, one other very important aspect is that the oft-derided “liberal arts” skills may actually be useful or may be a good basis to put coding upon, as long as you understand what programming is and how you can get access to it.
- Making a unique contribution: The students felt that what they were doing was valuable and let them see what they could do.
Take-aways? CS can appeal to so many people if we think about how to do it. There are many pathways to help people. We have to think about what we can be doing to help people. Designing for their own community is going to be empowering for people.
Sarah finished on some great questions. How will they handle scaling it up? Apprenticeship is really hard to scale up but we can think about it. Does this make students want to take CS? Will this lead to AP? Can it be inter-leaved with a project course? Could this be integrated into a humanities or social science context? Lots to think about but it’s obvious that there’s been a lot of good work that has gone into this.
What a great session! Really thought-provoking and, while it was a reminder for many of us how far we have left to go, there were probably people present who had heard things like this for the first time.
ITiCSE 2014: Working Groups Reports #ITiCSE2014 #ITiCSE
Posted: June 23, 2014 Filed under: Education | Tags: access, accessibility, computational thinking, computer science education, CT, education, higher education, ITiCSE, ITiCSE 2014, learning, learning technologies, methodology, peer review, teaching, thinking, Workgroups Leave a commentUnfortunately, there are too many working groups, reporting at too high a speed, for me to capture it here. All of the working groups are going to release reports and I suggest that you have a look into some of the areas covered. The topics reported on today were:
- Methodology and Technology for In-Flow Peer Review
In-flow peer review is the review of an exercise as it is going on. Providing elements to review can be difficult as it may encourage plagiarism but there are many benefits to this, which generally justifies the decision to do review. Picking who can review what for maximum benefit is also very difficult.
We’ve tried to do a lot of work here but it’s really challenging because there are so many possibly right ways.
- Computational Thinking in K-9 Education
Given that there are national, and localised, definitions of what “Computational Thinking” is, this is challenging to identify. Many K-12 teachers are actually using CT techniques but wouldn’t know to answer “yes” if asked if they were. Many issues in play here but the working group are a multi-national and thoughtful group who have lots of ideas.
As a note, K-9 refers to Kindergarten to Year 9, not dogs. Just to be clear.
- Increasing Accessibility and Adoption of Smart Technologies for Computer Science Education
How can you integrate all of the whizz-bang stuff into the existing courses and tools that we already use every day? The working group have proposed an architecture to help with the adoption. It’s a really impressive, if scary, slide but I’ll be interested to see where this goes. (Unsurprisingly, it’s a three-tier model that will look familiar to anyone with a networking or distributed systems background.) Basically, let’s not re-invent the wheel when it comes to using smarter technologies but let’s also find out the best ways to build these systems and then share that, as well as good content and content delivery. Identity management is, of course, a very difficult problem for any system so this is a core concern.
There’s a survey you can take to share your knowledge with this workgroup. (The feared and dreaded Simon noted that it would be nice if their survey was smarter.) A question from the floor was that, while the architecture was nice and standards were good, what impact would this have on the chalkface? (This is a neologism I’ve recently learned about, the equivalent of the coalface for the educational teaching edge.) This is a good question. You only have to look at how many standards there are to realise that standard construction and standard adoption are two very different beasts. Cultural change is something that has to be managed on top of technical superiority. The working group seems to be on top of this so it will be interesting to see where it goes.
- Strengthening Methodology Education in Computing
Unsurprisingly, computing is a very broad field and is methodologically diverse. There’s a lot of ‘borrowing’ from other fields, which is a nice way of saying ‘theft’. (Sorry, philosophers, but ontologies are way happier with us.) Our curricula have very few concrete references to methodology, with a couple of minor exceptions. The working group had a number of objectives, which they reduced in number, removing the term ‘methodology’. Literature reviews on methodology education are sparse, but there is more on teaching research methods. Embarrassingly, the paper that shows up for this is a 2006 report from a working group from this very conference. Oops. As Matti asked, are we really so uninterested in this topic that we forget that we were previously interested in it? The group voted to change direction to get some useful work out of the group. They voted not to produce a report, as it was too challenging to repurpose things at this late stage. All their work would go toward annotating the existing paper rather than creating a new one.
One of the questions was why the previous paper had so few citations, cited 5 times out of 3000 downloads, despite the topic being obviously important. One aspect mentioned is that CS researchers are a separate community and I reiterated some early observations that we have made on the pathway that knowledge takes to get from the CS Ed community into the CS ‘research’ community. (This summarises as “Do CS Ed research, get it into pop psychology, get it into the industrial focus and then it will sneak into CS as a curricular requirement, at which stage it will be taken seriously.” Only slightly tongue-in-cheek.)
- A Sustainable Gamification Strategy for Education
Sadly, this group didn’t show up, so this was disbanded. I imagine that they must have had a very good reason.
Interesting set of groups – watch for the reports and, if you use one, CITE IT! 🙂
ITiCSE 2014, Monday, Session 1A, Technology and Learning, #ITiCSE2014 #ITiCSE @patitsel @guzdial
Posted: June 23, 2014 Filed under: Education | Tags: badges, computer science education, data visualisation, digital education, digital technologies, education, game development, gamification, higher education, ITiCSE, ITiCSE 2014, learning, moocs, PBL, projected based learning, SPOCs, teaching, technology, thinking, visualisation Leave a comment(The speakers are going really, really quickly so apologies for any errors or omissions that slip through.)
The chair had thanked the Spanish at the opening for the idea of long coffee breaks and long lunches – a sentiment I heartily share as it encourages discussions, which are the life blood of good conferences. The session opened with “SPOC – supported introduction to Programming” presented by Marco Piccioni. SPOCs are Small Private On-line Courses and are part of the rich tapestry of hand-crafted terminology that we are developing around digital delivery. The speaker is from ETH-Zurich and says that they took a cautious approach to go step-by-step in taking an existing and successful course and move it into the on-line environment. The classic picture from University of Bologna of the readers/scribes was shown. (I was always the guy sleeping in the third row.)
We want our teaching to be interesting and effective, so there’s an obvious motivation to get away from this older approach. ETH has an interesting arrangement where the exam is 10 months after the lecture, which leads to interesting learning strategies for students who can’t solve the instrumentality problem of tying work now into success in the future. Also, ETH had to create an online platform to get around all of the “my machine doesn’t work” problems, removing the requirement to install an IDE. The final point of motivation was to improve their delivery.
The first residential version of the course ran in 2003, with lectures and exercise sessions. The lectures are in German and the exercise sessions are in English and German, because English is so dominant in CS. There are 10 extensive home assignments, including programming, and exercise-session groups are formed according to students’ perceived programming proficiency level. (Note on the last point: Hmmm, so people who can’t program are grouped together with other people who can’t program? I believe that the speaker clarified this as “self-perceived” ability, but I’m still not keen on this kind of streaming. If it worked effectively, then any master/apprentice model should automatically fail.) Groups were able to switch after a week, for language reasons or for not working well with the group.
The learning platform for the activity was Moodle and their experience with it was pretty good, although it didn’t do everything that they wanted. (They couldn’t put interactive sessions into a lecture, so they produced a lecture-quiz plug-in for Moodle. That’s very handy.) This is used in conjunction with a programming assessment environment, in the cloud, which ties together the student performance at programming with the LMS back-end.
The SPOC components are:
- lectures, with short intros and video segments up to 17 minutes. (Going to drop to 10 minutes based on student feedback),
- quizzes, during lectures, testing topic understanding immediately, and then testing topic retention after the lecture,
- programming exercises, with hands-on practice and automatic feedback
Feedback given to the students included the quizzes, with a badge for 100% score (over unlimited attempts so this isn’t as draconian as it sounds), and a variety of feedback on programming exercises, including automated feedback (compiler/test suite based on test cases and output matching) and a link to a suggested solution. The predefined test suite was gameable (you could customise your code for the test suite) and some students engineered their output to purely match the test inputs. This kind of cheating was deemed to be not a problem by ETH but it was noted that this wouldn’t scale into MOOCs. Note that if someone got everything right then they got to see the answer – so bad behaviour then got you the right answer. We’re all sadly aware that many students are convinced that having access to some official oracle is akin to having the knowledge themselves so I’m a little cautious about this as a widespread practice: cheat, get right answer, is a formula for delayed failure.
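The gaming described here is easy to see in a minimal sketch (all names below are hypothetical illustrations, not ETH’s actual grader): an autograder that only matches outputs on published test inputs cannot distinguish a genuine solution from one that hardcodes the published answers.

```python
# Hypothetical sketch of an output-matching autograder (not ETH's actual system).

TEST_CASES = [(2, 4), (3, 9), (5, 25)]  # (input, expected output) pairs for squaring


def grade(submission, test_cases):
    """Pass a submission iff its output matches the expected output on every published test."""
    return all(submission(x) == expected for x, expected in test_cases)


def honest_square(n):
    # A genuine solution: computes the answer for any input.
    return n * n


def gamed_square(n):
    # A gamed "solution": replays the published test outputs and nothing else.
    return {2: 4, 3: 9, 5: 25}[n]


print(grade(honest_square, TEST_CASES))  # True
print(grade(gamed_square, TEST_CASES))   # True -- the grader can't tell the difference
```

A common mitigation is hidden or randomised test inputs, which is presumably part of why ETH noted this approach wouldn’t scale to MOOCs.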
Reporting for each student included their best attempt and past attempts. For the TAs, they had a wider spread of metrics, mostly programmatic and mark-based.
Looking at the results, attendance at on-line lectures was 71%, while live course attendance remained stable. Neither on-line quizzes nor programming exercises counted towards the final grade. Quiz attempts were about 5x the attendance, and 48% scored 100% and got the badge – significantly more than the 5-10% that would usually do this.
Students worked on 50% of the programming exercises. 22% of students worked on 75-100% of the exercises. (There was a lot of emphasis on the badge – and I’m really not sure if there’s evidence to support this.)
The lessons learned summarised what I’ve put above: shortening video lengths, face-to-face is important, MCQs can be creative, gamification, and better feedback is required on top of the existing automatic feedback.
The group are scaling from SPOC to MOOC with a Computing: Art, Magic, Science course on EdX launching later on in 2014.
I asked a question about the badges because I was wondering if putting in the statement “100% in the quiz is so desirable that I’ll give you a badge” was what had led to the improved performance. I’m not sure I communicated that well but, as I suspected, the speaker wants to explore this more in later offerings and look at how this would scale.
The next session was “Teaching and learning with MOOCs: Computing academics’ perspectives and engagement”, presented by Anna Eckerdal. The work was put together by a group composed from Uppsala, Aalto, Maco and Monash – which illustrates why we all come to conferences as this workgroup was put together in a coffee-shop discussion in Uppsala! The discussion stemmed from the early “high hype” mode of MOOCs but they were highly polarising as colleagues either loved it or hated it. What was the evidence to support either argument? Academics’ experience and views on MOOCs were sought via a questionnaire sent out to the main e-mail lists, to CS and IT people.
The study ran over June-July 2013, with 236 responses from more than 90 universities, and closed- and open-ended questions. The research questions were: what are the community’s views on MOOCs from a teaching perspective (positive and negative), and how have people been incorporating them into their existing courses? (Editorial note: clearly defined study with a precise pair of research questions – nice.)
Interestingly, the most common report was of hearing concern expressed about MOOCs, followed by people who were positive, then confused, then negative, then excited, then uninformed, then uninterested and, finally, some 10% of people who have apparently been living in a time-travelling barrel in Ancient Greece, because in 2013 they had heard no MOOC discussion.
Several prominent themes were identified in the positive/negative aspects, all associated with the core theme of teaching and learning. (The speaker outlined the way that the classification had been carried out, which is always interesting for a coding problem.) Anna reiterated the issue of a MOOC as a personal power enhancer: a MOOC can make a teacher famous, which may also be attractive to the university. The sub-themes were pedagogy and learning environment, affordances of MOOCs, interaction and collaboration, assessment and certificates, and accessibility.
Interestingly, some of the positive answers included references to debunked approaches (such as learning styles) and the potential for improvements. The negatives (and there were many of them) referred to stone-age learning and a lack of human relations.
On affordances of MOOCs, there were mostly positive comments: helping students with professional skills, refreshing existing skills and learning new ones, trying before they buy, and the ability to transcend the tyranny of geography. The negatives included the economic issue of only popular courses being available, the fact that not all disciplines can go on-line, and that there is no scaffolding for professional identity development, nor support for developing critical thinking or teamwork. (Not sure if I agree with the last two, as that seems to depend on the way that you put the MOOC together.)
I’m afraid I missed the slide on interaction and collaboration so you’ll (or I’ll) have to read the paper at some stage.
There was nothing positive about assessment and certificates: course completion rates are low, and there are open questions about what can reasonably be assessed, plagiarism, and how we certify this. How does a student from a MOOC compete with a student from a face-to-face university?
A third of the respondents answered about accessibility, with many positive comments on “anytime, anywhere, at one’s own pace”. We can (somehow) reach non-traditional student groups. (Note: there is a large amount of contradictory evidence on this one – MOOCs are even worse than traditional courses. Check out Mark Guzdial’s CACM blog on this.) Other answers were “access to world-class teachers” and “opportunity to learn from experts in the field”. Interesting, given that the mechanism (from other answers) is so flawed that world-class teachers would barely survive MOOC-ification!
On academics’ engagement with MOOCs, the largest group (49%) believed that MOOCs had had no effect at all, about 15% said MOOCs had inspired changes, and roughly 10% had incorporated some MOOC material. Very few had seen MOOCs as a threat requiring change, either personally or institutionally. Only one respondent said that their course was now a MOOC, although 6% had developed MOOCs and 12% wanted to.
For the open-ended question on academics’ engagement, most believed that no change was required because their teaching was superior. (Hmm.) A few reported changes to teaching that were similar to MOOCs (on-line materials or automated assessment) but weren’t influenced by them.
There’s still no clear vision of the role of MOOCs in the future: concerned is as prominent as positive. There is a lot of potential but many concerns.
The authors had several recommendations arising from the concerns: focus on active learning, much more research is needed on automatic assessment and feedback methods, and universities need good policy regarding certification and the role of on-site and MOOC curricula. Uppsala has started the process of thinking about policy.
The first question was “how much of what is seen here would apply to any new technology being introduced” with an example of the similar reactions seen earlier to “Second Life”. Anna, in response, wondered why MOOC has such a global identity as a game-changer, given its similarity to previous technologies. The global discussion leads to the MOOC topic having a greater influence, which is why answering these questions is more important in this context. Another issue raised in questions included the perceived value of MOOCs, which means that many people who have taken MOOCs may not be advertising it because of the inherent ranking of knowledge.
@patitsel raised the very important issue that under-represented groups are even more under-represented in MOOCs – you can read through Mark’s blog to find many good examples of this, from cultural issues to digital ghettoisation.
The session concluded with “Augmenting PBL with Large Public Presentations: A Case Study in Interactive Graphics Pedagogy”. The presenter was a freshly graduated student who had completed the courses three weeks ago so he was here to learn and get constructive criticism. (Ed’s note: he’s in the right place. We’re very inquisitive.)
Ooh, brave move. He’s starting with anecdotal evidence. This is not really the crowd for that – we’re happy with phenomenographic studies and case studies to look at the existence of phenomena as part of a study, but anecdotes, even with pictures, are not the best use of your short term in front of a group of people. And already a couple of people have left because that’s not a great way to start a talk in terms of framing.
I must be honest, I slightly lost track of the talk here. EBL was defined as project-based learning augmented with constructively aligned public expos, with gamers as the target audience. The speaker noted that “gamers don’t wait” as a reason to have strict deadlines. Hmm. Half Life 3 anyone? The goal was to study the pedagogical impact of this approach. The students in the study had to build something large, original and stable, to communicate the theory, work as a group, demonstrate in large venues and then collaborate with a school of communication. So, it’s a large-scale graphics-based project in teams with a public display.
…
Grading was composed of proposals, demos, presentations and open houses. Two projects (50% and 40%) and weekly assignments (10%) made up the whole grading scheme. The second project came out after the first big Game Expo demonstration. Project 1 had to be interactive, done in groups of 3-4. The KTH visualisation studio was an important part of this and it is apparently full of technology, which is nice, and we got to hear about a lot of it. Collaboration is a strong part of the visualisation studio, which was noted in response to the keynote. The speaker mentioned some of the projects and it’s obvious that they are producing some really good graphics work.
I’ll look at the FaceUp application in detail as it was inspired by the idea to make people look up in the Metro rather than down at their devices. I’ll note that people look down for a personal experience in shared space. Projecting, even up, without capturing the personalisation aspect, is missing the point. I’ll have to go and look at this to work out if some of these issues were covered in the FaceUp application as getting people to look up, rather than down, needs to have a strong motivating factor if you’re trying to end digitally-inspired isolation.
The experiment was to measure the impact of EXPOs on ILOs, using participation, reflection, surveys and interviews. The speaker noted that doing coding on a domain of knowledge you feel strongly about (potentially to the point of ownership) can be very hard, as biases creep in – I find this one of the real challenges in trying to do grounded theory work, personally. I’m not all that surprised that students felt that the EXPO had a greater impact than something smaller, especially where the experiment was effectively created with a heavier-weighted first project and a high-impact first deliverable. In a biological sense, project 2 is always going to be at risk of falling in the refractory period – the period after stimulation during which a nerve or muscle is less able to be stimulated. You can get just as excited about the development, because development is always going to be very similar, but it’s not surprising that a small-scale pop is not as exciting as a giant boom, especially when the boom comes first.
How do we grade things like this? It’s a very good question – of course the first question is why are we grading this? Do we need to be able to grade this sort of thing or just note that it’s met a professional standard? How can we scale this sort of thing up, especially when the main function of the coordinator is as a cheerleader and relationships are essential. Scaling up relationships is very, very hard. Talking to everyone in a group means that the number of conversations you have is going to grow at an incredibly fast rate. Plus, we know that we have an upper bound on the number of relationships we can actually have – remember Dunbar’s number of 120-150 or so? An interesting problem to finish on.
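The scaling problem can be made concrete: the number of distinct one-to-one relationships among n people grows quadratically, so a coordinator-as-cheerleader model hits Dunbar-scale limits very quickly. A back-of-the-envelope sketch:

```python
def pairwise_conversations(n):
    """Distinct one-to-one conversations possible among n people: n choose 2."""
    return n * (n - 1) // 2


DUNBAR = 150  # rough upper bound on stable relationships, per Dunbar

for n in (10, 30, 150, 400):
    print(n, pairwise_conversations(n))
# 10 -> 45, 30 -> 435, 150 -> 11175, 400 -> 79800
```

Even ignoring pairwise links and counting only coordinator-to-student relationships, a single coordinator is already past Dunbar’s number at a couple of hundred enrolments.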
ASWEC Day 3 (SE Education Track), Keynote, “Teaching Gap: Where’s the Product Gene?” (#aswec2014 #AdelED @jmwind)
Posted: April 9, 2014 Filed under: Education | Tags: ASWEC, aswec 2014, atlassian, community, education, higher education, learning, product engineering, software engineering, teaching Leave a commentToday’s speaker is Jean-Michel Lemieux, the VP of Engineering for Atlassian, opening the Education track of the ASWEC Conference. (I’m the track chair, so I can’t promise an unbiased report of the day’s activities.) Atlassian has a post-induction reprogramming idea where they take in graduates and then get people to value products over software – it’s not about what’s in the software, it’s about who is going to be using it. The next thing is to value experiences over functionality.
What is the product “gene” and can we teach it? Atlassian has struggled with this, despite having hired good graduates, because those graduates were a bit narrow and focused on individual features rather than the whole product. Jean-Michel spoke about the “Ship-it” event, where you have to write a product in 24 hours and then a customer comes and picks what they would buy.
Jean-Michel is proposing the addition of a new degree – a product engineering course or degree. Whether it's a 1-year or a 4-year program is pretty much up to the implementers – i.e. us. EE is about curvy waves, Computer Engineering is about square waves, CS is about programs, SE is about processes and systems, and PE is about products. PE still requires programming and overlaps with SE. Atlassian's Vietnam experience indicates that teaching the basics earlier will be very helpful: algorithms, data structures, systems administration, programming languages, compilers, storage and so on. Atlassian wants the basics in earlier here as well (regular readers will be aware of the new digital technologies curriculum, but Jean-Michel may not be aware of this).
What is Product Engineering about? Customers, and desirable software built by a team as part of an ecosystem that functions for years. This gets away from the individual, mark-oriented, short-term focus that so many of our existing courses have (and of which I am not a great fan). From a systems thinking perspective, we can look at the customer journey: if people are using your product then they're going through a lifecycle with your product.
Atlassian have a strong culture of exposure and presentation: engineers are regularly explaining problems, existing solutions and demonstrating understanding before they can throw new things on top. Demoing is a very important part of Atlassian culture: you have to be able to sell it with passion. Define the problem. Tell a story. Make it work. Sell with passion.
There's a hypothesis-driven development approach, starting from hypothesis generation and experimental design, leading to cohort selection, experiment development, measurement and analysis, and then the publishing of results. Ideally, a short experiment is going to give you a prediction of behaviour over a longer-term timeframe with a larger number of people. The results themselves have to be clearly communicated and, from what was demonstrated, associated with the experiment itself.
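The cohort-selection and measurement steps above can be sketched as a tiny A/B-style experiment harness. This is a minimal illustration under my own assumptions (the function names and the seeded random split are mine, not anything Atlassian showed), and a real analysis would of course add a significance test:

```python
import random
import statistics

def assign_cohorts(user_ids, seed=42):
    """Randomly split users into control and variant cohorts (seeded for repeatability)."""
    rng = random.Random(seed)
    control, variant = [], []
    for uid in user_ids:
        (control if rng.random() < 0.5 else variant).append(uid)
    return control, variant

def compare(control_metric, variant_metric):
    """Summarise the measured metric per cohort for the published results."""
    return {
        "control_mean": statistics.mean(control_metric),
        "variant_mean": statistics.mean(variant_metric),
    }

control, variant = assign_cohorts(range(1000))
# ...run the experiment, collect one metric value per user, then compare()...
```

The point of the short experiment is exactly this loop: a cheap, clearly-scoped measurement whose result you can attach to the original hypothesis when you publish it.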
Atlassian have a UI review process using peer review. This has two parts: "Learn to See" and "Learn to Seek". For "Learning to See", the important principles are consistency, alignment, contrast and simplicity. How much can you remove, reuse and set up properly so the UI does exactly what it needs to do and no more? For "Learning to Seek", the key aspect is "bring it forward": bring your data forward to make things easier, so that you can see the date even when your calendar app is closed. (This is based on work in Microinteractions, a book that I haven't read.) The use of language in text and error messages is also very important and part of product thinking.
No-one works alone at Atlassian and teamwork is the default. There's a lot of team archaeology: looking at what a team has been doing for the past few years and learning from it. The Team Fingerprint shows you how a team works, by looking at their commit history and bug tracking. If they reject commits, when do they do it and why? Where's the supporting documentation and discussion? Which files are being committed or changed together? If two files are always worked on together, can we simplify this?
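The "which files change together?" question is a co-change analysis over commit history. Here's a minimal sketch of that one signal, with each commit modelled simply as the set of files it touched (the function name and sample file names are mine, purely for illustration):

```python
from collections import Counter
from itertools import combinations

def co_change_counts(commits):
    """Count how often each pair of files appears in the same commit."""
    pair_counts = Counter()
    for files in commits:
        for a, b in combinations(sorted(files), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

commits = [
    {"parser.c", "parser.h"},
    {"parser.c", "parser.h", "main.c"},
    {"main.c"},
]
# parser.c and parser.h change together in every commit that touches either:
# co_change_counts(commits)[("parser.c", "parser.h")] == 2
```

A pair that always moves together is exactly the "can we simplify this?" candidate the fingerprint is meant to surface.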
In terms of the ecosystem, Atlassian also have an API focus (as Google did yesterday) and they design for extensibility. They also believe in making tools available with a focus on determining whether the product will be open source or licensed and how the IP is going to be handled. Extensibility can be very hard because it’s a commitment over time and your changes today have to support tomorrow’s changes. It’s important to remember that extending something requires you to build a community who will use the extensions – again, communication is very important. An Atlassian platform team is done when their product has been adopted by another team, preferably without any meetings. If you’re open source then you live and die by the number of people who are actually using your product. Atlassian have a no-meeting clause: you can’t have a meeting to explain to someone why they should adopt your product.
When things last for years you have to prepare for it. You need to learn from your running code, rather than just trusting your test data. You need to validate assumptions in production and think like an “ops” person. This includes things like building in consistency checks across the board.
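Thinking like an "ops" person about consistency checks might look something like this minimal sketch: assert an invariant over live data and log the violation rather than crash. (The order/total example and names are my own hypothetical illustration, not Atlassian's code.)

```python
import logging

log = logging.getLogger("invariants")

def check_order_consistency(order):
    """Verify that the stored total matches the sum of the line items.

    Logs and returns False on inconsistency instead of raising, so the
    check can run continuously against production data.
    """
    expected = sum(item["price"] * item["qty"] for item in order["items"])
    if abs(expected - order["total"]) > 1e-9:
        log.error("Inconsistent order %s: stored %s, computed %s",
                  order["id"], order["total"], expected)
        return False
    return True
```

The design choice is that production checks observe and report; they validate the assumptions your tests baked in, against the data your tests never saw.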
Where’s the innovation in this? The Atlassian approach is a little more prescriptive in some ways but it’s not mandating tools so there’s still room for the innovative approaches that Alan mentioned yesterday.
Question time was interesting, with as many (if not more) comments than questions, but there was a question as to whether the idea for such a course should be at a higher level than an individual University: such as CORE, ACDICT, EA, or ACS. It will be interesting to see what comes out of this.
CSEDU, Day 3, Final Keynote, “Digital Age Learning – The Changing Face of Online Education”, (#csedu14 #AdelED @timbuckteeth)
Posted: April 4, 2014 Filed under: Education | Tags: BBC Model B, computer supported education, correspondence course, Csíkszentmihályi, digital age, digital learning, distance learning, education, flow, higher education, learning, learning environment, learning management systems, on-line learning, Personal Learning Environment, Plymouth University, steve wheeler, student, the Plymouth Institute of Education, thinking, vygotsky, Zone of proximal development, ZPD 3 CommentsNow, I should warn you all that I’ve been spending time with Steve Wheeler (@timbuckteeth) and we agree on many things, so I’m either going to be in furious agreement with him or I will be in shock because he suddenly reveals himself to be a stern traditionalist who thinks blended learning is putting a textbook in the Magimix. Only time will tell, dear reader, so let’s crack on, shall we? Steve is from the Plymouth Institute of Education, conveniently located in Plymouth University, and is a ferocious blogger and tweeter (see his handle above).
Erik introduced Steve by saying that Steve didn’t need much introduction and noted that Steve was probably one of the reasons that we had so many people here on the last day! (This is probably true, the afternoon on the last day of a European conference is normally notable due to the almost negative number of participants.)
“When you’re a distance educator, the back of the classroom can be thousands of miles away” (Steve Wheeler)
Steve started with the idea that on-line learning is changing and that his presentation was going to be based on the idea that the future will be richly social and intensely personal. Paradoxical? Possibly but let’s find out. Oh, look, an Einstein quote – we should have had Einstein bingo cards. It’s a good one and it came with an anecdote (which was a little Upstairs Downstairs) so I shall reproduce it here.
“I never teach my students. I only provide the conditions in which they can learn.” Albert Einstein
There are two types of learning: shallow (rote) learning, which we see when cramming, where understanding is negligible or shallow if present at all; and fluid intelligence, the deeper kind of learning that draws on your previous learning and your knowledge structures. But what about strategic learning, where we switch quickly between the two? Poor pedagogy can suppress these transitions and lock people into one spot.
There are three approaches here: knowledge (knowing that, which is declarative), wisdom (knowing how, which is procedural) and transformation (knowing why, which is critical). I've written whole papers about the missing critical layer, so I'm very happy to see Steve saying that the critical layer is the one that we often do the worst with. This ties back into Bloom's taxonomy, where knowledge is cognitive, wisdom is application, and transformation is analysis and evaluation. Learning can be messy but it's transformative and it can be intrinsically hard to define. Learning is many things – sorry, Steve, not going to summarise that whole sentence.
We want to move through to the transformational stage of learning.
What was the first attempt at distance learning? St Paul's name was tossed out, as was Moses, with St Paul's epistles noted as the first correspondence course offered. (What was the assessment model, I wonder, for Epistola?) More seriously, that was highly didactic and one-way; it was Pitman who established a two-way correspondence course that was both laborious and asynchronous, but it worked. Then we had television, and in 1968 the Stanford Instructional Television Network popped up. In 1970, Steve saw an example of video conferencing that had previously been confined to Star Trek. I was around in the early 70s and we were all agog about the potential of the future – where is my moon base, by the way? But the tools were big and bulky – old video cameras were incredibly large and ridiculously short-lived in their battery life… but it worked! Then people saw uses for the relationship between this new technology and pedagogy. Reel-to-reel, copiers, projectors, videos: all of these technologies were effective for their teaching uses at the time.
Of course, we moved on to computer technology, including the BBC Model B (hooray!) and the reliable but hellishly noisy dot matrix printer. The learning from these systems was very instructional: text-based and built around a very simplistic multiple-choice question approach. Highly behaviouristic, but this is how things were done and the teaching approach matched the technology. Now, of course, we've moved to tablet-based, on-line gaming environments with non-touch technologies such as Kinect, but the principle remains the same: over the years we've adapted technology to pedagogy.
But it's only now, after Sir Tim Berners-Lee gave us the World Wide Web, that on-line learning is available to everybody, where before it was sort-of available but nowhere near as easy to multiply out. Now, for our sins, we have Learning Management Systems, the most mixed of blessings, and we still have to ask what we are using them for and how we are using them. Is our pedagogy changing? Is our connection with our students changing? Illich (1972) criticised educational funnels that had a one-directional approach and instead promoted educational webs that allow the transformation of each moment of living into one of learning, sharing and caring.
What about the Personal Learning Environment (PLE)? This is the interaction of tools such as blogs, Twitter and e-Portfolios; then add in the people we interact with, and then the other tools that we use – and this would be strongly personal to an individual. If you've ever tried to use your partner's iPad, you know how quickly personalisation changes your perception of a tool! Wheeler and Malik (2010) discuss the PLE as comprising the personal learning network and personal web tools, with an eye on more than the classroom, as a part of life-long learning. Steve notes (as Stephen Heppell did) that you may as well get students to use their PLEs in the open because they'll be using them covertly otherwise: the dreaded phone under the table becomes a learning tool when it's on top of the table. Steve discussed the embedded MOOC that Hugh discussed yesterday to see how the interaction between on-line and f2f students can benefit both groups.
In the late ’80s, the future was “multi-media” and everything had every other medium jammed into it (and they don’t like it up ’em) and then the future was going to converge on the web. Internet take up is increasing: social, political and economic systems change incrementally, but technology changes exponentially. Steve thinks the future is smart mobile and pervasive, due to miniaturisation and capability of new devices. If you have WiFi then you have the world.
“Change is not linear, it’s exponential.” Kurzweil
Looking at the data, there are now more people in the world with mobile phones than people without, although some people have more than one. (Someone in the audience had four; perhaps he was a Telco?) Of course, one reason for this is that mobile phones replace infrastructure: there are entire African banks that run over mobile networks, as an example. Given that we always have a computer in our pocket, how can we promote learning everywhere? We are using these devices all the time, everywhere, and this changes what we can do because we can mix leisure and learning without having to move to fixed spaces.
Steve then displayed the Intel infographic "What Happens In an Internet Minute", and it's scary to see how much paper is lagging these days. What will the future look like? What will future learning look like? If we think exponentially then things are changing fast. There is so much content being generated, there must be something that we can use (DOGE photos and Justin Bieber videos excepted) for our teaching and learning. Given that 70% of what we learn is informal and outside of the institution, this is great! But we need to be able to capture this, and that means that we should build a personal learning network, because trying to drink down all that content by yourself exceeds anyone's ability! By building a network, we build a collection of filters and aggregators that are going to help us to bring sense out of the chaos. Given that nobody can learn everything, we can store our knowledge in other people and know where to go when we need that knowledge. This is a plank of connectivist theory, leading into paragogy, where we learn from each other. It also leads us to distributed cognition, where we think across the group (a hive mind, if you will) but, more simply, you learn from one person, then another, and it becomes highly social.
Steve showed us a video on "How have you used your own technology to enhance your learning", which you can watch on YouTube. Lucky old 21st Century you! This is a recording of some of Steve's students answering the question and sharing their personal learning networks with us. There's an interesting range of ideas and technologies in use, so it's well worth a look. Steve runs a Twitter wall in his classroom and advertises the hashtag for a given session, so questions, challenges and comments go out on to that board, which allows Steve to see them but also retweet them to his followers, to allow the exponential explosion that we would want in a personal learning network. Students succeed when they harness the tools they need to solve their problems.
Steve showed us a picture of about 10,000 Germans taking pictures of the then-President-Elect Barack Obama, because he was speaking in Berlin and it was a historical moment that people wanted to share with other people. This is an example of the ubiquitous connection that we now enjoy and, in many ways, take for granted. It is a new way of thinking and it causes a lot of concern for people who want to stick to previous methods. (There will come a time when a paper exam for memorised definitions will make no sense because people have computers connected to their eyes – so let's look at asking questions in ways that always require people to actually use their brains, shall we?) Steve then showed us a picture of students "taking notes" by taking pictures of the whiteboard: something that we are all very accustomed to now. Yes, some teachers are bothered by this, but why? What is wrong with instantaneous capture versus turning a student into a slow organic photocopying machine? Let's go to a Papert quote!
"I am convinced that the best learning takes place when the learner takes charge." Seymour Papert
“We learn by doing“, Piaget, 1960
“We learn by making“, Papert, 1960.
Steve alluded to constructionist theory and pointed out how much we have to learn about learning by making. He, like many of us, doesn’t subscribe to generational or digital native/immigrant theory. It’s an easy way of thinking but it really gets in the way, especially when it makes teachers fearful of weighing in because they feel that their students know more than they do. Yes, they might, but there is no grand generational guarantee. It’s not about your age, it’s about your context. It’s about how we use the technology, it’s not about who we are and some immutable characteristics that define us as in or out. (WTF does not, for the record, mean “Welcome to Facebook”. Sorry, people.) There will be cultural differences but we are, very much, all in this together.
Steve showed us a second video, on the Future of Publishing, which you can watch again! Some of you will find it confronting that Gaga beats Gandhi but cultures change and evolve – and you need to watch to the end of the video because it’s really rather clever. Don’t stop halfway through! As Steve notes, it’s about perception and, as I’ve noted before, I’m pretty sure that people put people into the categories that they were already thinking about – it’s one of the reasons I have such a strong interest in grounded theory. If you have a “Young bad” idea in your head then everything you see will tend to confirm this. Perception and preconception can heavily interfere with each other but using perception, and being open to change, is almost always a better idea.
Steve talked about Csíkszentmihályi's Flow, the zone you're in when the level of challenge roughly matches your level of skill and you balance anxiety and boredom. Then, for maximum Nick points, he got onto Vygotsky's Zone of Proximal Development, where we build knowledge better and make leaps when we do it with other people, using the more knowledgeable other to scaffold the learning. Steve also talked about mashing them up, and I draw the reader back to something I wrote on this a while ago on Repenning's work.
We can do a lot of things with computers but we don’t have to do all the things that we used to do and slavishly translate them across to the new platform. Waters (2011) talks about new learners: learners who are more self-directed and able to make more and hence learn more.
There are many digital literacies: social networking, privacy management, identity management, creating content, organising content, reusing and repurposing, filtering and selection, self presentation, transliteracy (using any platform to get your ideas across). We build skills, that become competencies, that become literacies and, finally, potentially become masteries.
Steve finished by discussing the transportability of skills, using driving in the UK and the US as an example. The skill is pretty much the same, but safe driving requires a new literacy when you make a large contextual change. Digital environments can be alien environments, so you need to be able to take the skills that you have now and put them into new contexts. How do you know that THIS IS SHOUTING? It's a digital literacy.
Steve presented a quote from Socrates – no, Plato:
“Knowledge that is acquired under compulsion obtains no hold on the mind.“
and used the rather delightful neologism "Darwikianism" to illustrate the evolving improvement of on-line materials over time. (And illustrated it with humour and pictures.) Great talk with a lot of content! Now I have to go and work on my personal learning network!