It’s fine to write all sorts of wonderful statements about theory and design and we can achieve a lot in thinking about such things. But, let’s be honest, we face massive challenges in the 21st Century and improved thinking and practice in education is one of the most important contributions we can make to future generations. Thus, if we want to change the world based upon our thinking, then all of our discussion is of no use unless we can develop something that’s going to achieve our goals. Dewey’s work provides an experimental, even instrumental, approach to the American philosophical school of pragmatism. To briefly explain what I mean by this term, I turn to William James, American psychologist and philosopher.
Pragmatism asks its usual question. “Grant an idea or belief to be true,” it says, “what concrete difference will its being true make in anyone’s actual life? How will the truth be realized? What experiences will be different from those which would obtain if the belief were false? What, in short, is the truth’s cash-value in experiential terms?”
William James, Pragmatism (1907)
(James is far too complex to summarise in one paragraph and I am using only one of his ideas to illustrate my point. Even scholars of James disagree on how to interpret many of his writings. It’s worth reading him and Hegel at the same time as they square off across the ring quite well.)
What will be different? How will we recognise or measure it? What do we gain by knowing if we are right or wrong? This is why all good education researchers depend so heavily on testing their hypotheses in the space where they will make an impact, and there is usually an obligation to look at how things are working before and after any intervention. This places a further obligation upon us to evaluate what has occurred and then, if our goals haven’t been achieved, to change our approach. It’s a simple breakdown of roles but I often think of educational work in three heavily overlapping areas: practice, scholarship and research. Practice should apply techniques that achieve our goals, scholarship involves the investigation, dissemination and comparison of these techniques, and research builds on scholarship to evaluate practice in ways that will validate and develop new techniques – or invalidate formerly accepted ones as knowledge improves. This leads me to my point: evaluating your own efforts to work out how to do better next time.
There are designers, architects, makers and engineers who are committed to the practice of impact design, where (and this is one definition):
“Impact design is rooted in the core belief that design can be used to create positive social, environmental and economic change, and focuses on actively measuring impact to inform and direct the design process.” Impact Design Hub, About.
Thus, evaluation of what works is essential for these practitioners. The same website recently shared some designers talking about things that went wrong and what they learned from the process.
If you read that link, you’ll see all sorts of lessons: don’t hand control of innovation to someone who’s scared of risk, don’t ignore your community, don’t apply your cultural values to others unless you really know what you’re doing, and don’t forget the importance of communication.
Writing some pretty words every day is not going to achieve my goal and I need to be reminded of the risks that I face in trying to achieve something large – one of which is not actually working towards my own goals in a useful manner! One of the biggest risks is confusing writing a blog with actual work, unless I use this medium to do something. Over the coming weeks, I hope to show you what I am doing as I move towards my very ambitious goal of “beautiful education”. I hope you find the linked article as useful as I did.
There is a Peanuts comic from April 16, 1972 where Peppermint Patty asks Charlie Brown what he thinks the secret of living is. Charlie Brown’s answer is “A convertible and a lake.” His reasoning is simple. When it’s sunny you can drive around in the convertible and be happy. When it’s raining you can think “oh well, the rain will fill up my lake.” Peppermint Patty asks Snoopy the same question and, committed sensualist that he is, he kisses her on the nose.
Charlie Brown, a character written to be constantly ground down by the world around him, is not seeking to maximise his happiness, he is seeking to minimise his unhappiness. Given his life, this is an understandable philosophy.
But what of beauty and, in this context, beauty in education? I’ve already introduced the term ‘ugly’ as the opposite of beauty but it’s hard for me to wrap my head around the notion of ‘minimising ugliness’; ugly is such a strong term. It’s also hard to argue that any education, when it covers any aspects to which we would apply that label, is ever totally ugly. Perhaps, in the educational framing, the absence of beauty is plainness. We end up with things that are ordinary, rather than extraordinary. I think that there is more than enough range between beauty and plainness for us to have a discussion on the movement between those states.
Is it enough for us to accept educational thinking that is acceptably plain? Is that a successful strategy? Many valid concerns about learning at scale focus on the innate homogeneity and lack of personalisation inherent in such an approach: plainness is the enemy. Yet there are many traditional and face-to-face approaches where plainness stares us in the face. Banality in education is, when identified, always rejected, yet it so often slips by without identification. We know that there is a hole in our slippers, yet we only seem to notice when that hole directly affects us or someone else points it out.
My thesis here is that a framing of beauty should lead us to a strategy of maximising beauty, rather than minimising plainness, as it is only in that pursuit that we model that key stage of falling in love with knowledge that we wish our students to emulate. If we say “meh is ok”, then that is what we will receive in return. We model, they follow as part of their learning. That’s what we’re trying to make happen, isn’t it?
What would Charlie Brown’s self-protective philosophy look like in a positive framing, maximising his joy rather than managing his grief? I’m not sure but I think it would look a lot like a dancing beagle who kisses people on the nose. We may need more than this for a sound foundation to reframe education!
Ever since education became something we discussed, teachers and learners alike have had strong opinions regarding the quality of education and how it can be improved. What is surprising, as you look at these discussions over time, is how often we seem to come back to the same ideas. We read Dewey and we hear echoes of Rousseau. So many echoes and so much careful thought, found as we built new modern frames with Vygotsky, Piaget, Montessori, Papert and so many more. But little of this should really be a surprise because we can go back to the writings of Marcus Fabius Quintilianus (Quintilian) and his twelve books of The Orator’s Education and we find discussion of small class sizes, constructive student-focused discussions, and that more people were capable of thought and far-reaching intellectual pursuits than was popularly believed.
“… as birds are born for flying, horses for speed, beasts of prey for ferocity, so are [humans] for mental activity and resourcefulness.” Quintilian, Book I, page 65.
I used to say that it was stunning how contemporary education seems to be slow in moving in directions first suggested by Dewey a hundred years ago, then I discovered that Rousseau had said it 150 years before that. Now I find that Quintilian wrote things such as this nearly 2,000 years ago. And Marcus Aurelius, among other stoics, made much of approaches to thinking that, somehow, were put to one side as we industrialised education much as we had industrialised everything else.
This year I have accepted that we have had 2,000 years of thinking (and as much evidence when we are bold enough to experiment) and yet we just have not seen enough change. Dewey’s critique of the University is still valid. Rousseau’s lament on attaining true mastery of knowledge stands. Quintilian’s distrust of mere imitation would not be quieted when looking at much of repetitive modern examination practice.
What stops us from changing? We have more than enough evidence of discussion and thought, from some of the greatest philosophers we have seen. When we start looking at education, in varying forms, we wander across Plato, Hypatia, Hegel, Kant, Nietzsche, in addition to all of those I have already mentioned. But evidence, as it stands, does not appear to be enough, especially in the face of personal perception of achievement, contribution and outcomes, whether supported by facts or not.
Evidence of uncertainty is not enough. Evidence of the lack of efficacy of techniques, now that we can and do measure them, is not enough. Evidence that students fail who then, under other tutors or approaches, mysteriously flourish elsewhere, is not enough.
Authority, by itself, is not enough. We can be told to do more or to do things differently but the research we have suggests that an externally applied control mechanism just doesn’t work very well for areas where thinking is required. And thinking is, most definitely, required for education.
I have already commented elsewhere on Mark Guzdial’s post that attracted so much attention and, yet, all he was saying was what we have seen repeated throughout history and what is now supported in this ‘gilt age’ of measurement of efficacy. It still took local authority to stop people piling onto him (even under the rather shabby cloak of ‘scientific enquiry’ that masks so much negative activity). Mark is repeating the words of educators throughout the ages who have stepped back and asked “Is what we are doing the best thing we could be doing?” It is human to ask “But, if I know that this is the evidence, why am I acting as if it were not true?” Yet it is quite clear that this is still challenging and, amazingly, heretical to an extent, despite these (apparently controversial) ideas pre-dating most of what we know as the trappings and establishments of education. Here is our evidence that evidence is not enough. It is also our demonstration that, while authority can halt a debate, it cannot force people to alter such a deeply complex and cognitive practice in a useful manner. Nobody is necessarily agreeing with Mark; they’re just no longer arguing. That’s not helpful.
So, where to from here?
We should not throw out everything old simply because it is old; without evidence, that is as meaningless, and as wrong, as autocratically rejecting everything new because it is new.
The challenge is to find a way of explaining how things could change without forcing conflict between evidence and personal experience and without having to resort to an argument by authority, whether moral or experiential. And this is a massive challenge.
This year, I looked back to find other ways forward. I looked back to the three values of Ancient Greece, brought together as a trinity through Socrates and Plato.
These three values are: beauty, goodness and truth. Here, truth means seeing things as they are (non-concealment). Goodness denotes the excellence of something and often refers to a purpose or meaning for existence, in the sense of a good life. Beauty? Beauty is an aesthetic delight; pleasing to those senses that value certain criteria. It does not merely mean pretty, as there are many ways that something can be aesthetically pleasing. For Dewey, equality of access was an essential criterion of education; education could only be beautiful to Dewey if it was free and easily available. For Plato, the revelation of knowledge was good, and beauty could arouse a love for this knowledge that would lead to such a good. By revealing the good, and reality, to ourselves and our world, we are ultimately seeking truth: seeing the world as it really is.
In the Platonic ideal, a beautiful education leads us to fall in love with learning and gives us momentum to strive for good, which will lead us to truth. Is there any better expression of what we all would really want to see in our classrooms?
I can speak of efficiencies of education, of retention rates and average grades. Or I can ask you if something is beautiful. We may not all agree on details of constructivist theory but if we can discuss those characteristics that we can maximise to lead towards a beautiful outcome, aesthetics, perhaps we can understand where we differ and, even more optimistically, move towards agreement. Towards beautiful educational practice. Towards a system and methodology that makes our students as excited about learning as we are about teaching. Let me illustrate.
A teacher stands in front of a class, delivering the same lecture that has been delivered for the last ten years. From the same book. The classroom is half-empty. There’s an assignment due tomorrow morning. Same assignment as the last three years. The teacher knows roughly how many people will ask for an extension an hour beforehand, how many will hand it up and how many will cheat.
I can talk about evidence, about pedagogy, about political and class theory, about all forms of authority, or I can ask you, in the privacy of your head, to think about these questions.
- Is this beautiful? Which of the aesthetics of education are really being satisfied here?
- Is it good? Is this going to lead to the outcomes that you want for all of the students in the class?
- Is it true? Is this really the way that your students will be applying this knowledge, developing it, exploring it and taking it further, to hand on to other people?
- And now, having thought about yourself, what do you think your students would say? Would they think this was beautiful, once you explained what you meant?
Over the coming year, I will be writing a lot more on this. I know that this idea is not unique (Dewey wrote on this, to an extent, and, more recently, several books in the dramatic arts have taken up the case of beauty and education) but it is one that we do not often address in science and engineering.
My challenge, for 2016, is to try to provide a year of beautiful education. Succeed or fail, I will document it here.
Given how much research there is to tell us that intrinsic bias is having a terrifying effect on hiring decisions and opportunity, it’s great to have some good news to report when a company decides to address the issue.
Deloitte UK is going to start using a school-blind approach in its hiring process (WaPo link), which will potentially benefit thousands of school- and college-leavers who would have been perceived to come from “less suitable” educational pedigrees. They are also planning to work out how students performed relative to their peers in a given school, rather than against broad district or national levels. Deloitte are now part of a growing group of companies who have moved beyond “Am I getting the best people?” to understanding that their biased perception of best may be preventing them from seeing some candidates at all. They could be picking their “best” from a much broader range of better, if only they’d address those biases.
This is certainly not going to end bias overnight (there are so many other issues to address) but it’s a great start.
An artist’s educator’s statement (or artist educator statement) is an educator’s written description of their work, adapted from the concept of the artist’s statement. It is a brief representation for, and in support of, their own work, to give the viewer – the student, a peer, an observer, questioning parents, unconvinced politicians, citizens, history – understanding. As such it aims to inform, connect with artistic/scientific/educational/societal/intellectual/political contexts, and present the basis for the work; it is therefore didactic, descriptive, or reflective in nature. (Wikipedia + Nick Falkner)
Fear thrives in conditions of ignorance and deprivation. Ignorance is defeated by knowledge. Deprivation is defeated by fairness, equality and equity.
Education shares knowledge and provides the basis for more knowledge. Education attacks ignorance, fights fear, champions equality and saves the world.
If I am always learning then I can model learning for my students and adapt my practice to reflect changes in education as my knowledge increases. Who are my students? What do they need to know? How can I teach them? When will I know if they have the knowledge that they need? What do I need to do today, tomorrow and the day after that?
I have made mistakes but I will try not to make the same mistakes again. The essence of education is that we pass on what we have learned and keep developing knowledge so that we do not have to make the same mistakes again.
That is why I am an educator.
The Internet is full of posts and comments that reflect an overwhelming desire to judge people and tell them that they are wrong, twinned with intellectual cowardice. We see people saying “Oh, that’s wrong” without ever having the courage to put up anything that says “This is what I believe. This is my statement. This is my truth.”
You cannot achieve your potential when all you do is react. The echo is not the statement.
This has inspired me to clearly state what I believe about what I do. I regularly make comment on what I believe we should be doing in education so, to be consistent and to remove all uncertainty about my intentions, the next post on this blog is my educator’s statement. A statement of what I believe you need to know to understand my work, derived from the concept of the artist’s statement.
Put up your own. Share your truth. Be brave.
Walidah Imarisha very generously continued the discussion of my last piece with me on Twitter and I have updated that piece to include her thoughts and to provide vital additional discussion. As always, don’t read me talking about things when you can read the words of the people who are out there fixing, changing the narrative, fighting and winning.
Thank you, Walidah!
The Only Way Forward is With No Names @iamajanibrown @WalidahImarisha #afrofuturism #worldcon #sasquan
Posted: August 23, 2015
Edit: Walidah Imarisha and I had a discussion in Twitter after I released this piece and I wanted to add her thoughts and part of our discussion. I’ve added it to the end so that you’ll have context but I mention it here because her thoughts are the ones that you must read before you leave this piece. Never listen to me when you can be listening to the people who are living this and fighting it.
I’m currently at the World Science Fiction Convention in Spokane, Washington state. As always, my focus is education and (no surprise to long term readers) equity. I’ve had the opportunity to attend some amazing panels. One was on the experience of women in art, publishing and game production, focused on female characters for video gaming. Others discussed issues such as non-white presence in fiction (#AfroFuturism with Professor Ajani Brown) and the changes between the Marvel Universe in film and comic form, as well as how we can use Science Fiction & Fantasy in the classroom to address social issues without having to directly engage the (often depressing) news sources. Both of the latter panels were excellent and, in the Marvel one, with Tom Smith, Annalee Flower Horne, Cassandra Rose Clarke and Professor Brown, there was a lot of discussion both of the new Afro-American characters in movies and TV (Deathlok, Storm and Falcon) and of how much they had changed from the comics.
I’m going to discuss what I saw and lead towards my point: that all assessment of work for its publishing potential should, where it is possible and sensible, be carried out blind, without knowledge of who wrote it.
I’ve written on this before, both here (where I argue that current publishing may not be doing what we want for the long term benefit of the community and the publishers themselves) and here, where we identify that systematic biases against people who are not western men are rampant and apparently almost inescapable as long as we can see a female name. Very recently, this Jezebel article identified that changing the author’s name on a manuscript, from female to male, not only improved the response rate and reduced the time spent waiting, it changed the type of feedback given. The woman’s characters were “feisty”, the man’s weren’t. Same characters. It doesn’t matter if you think you’re being sexist or not, it doesn’t even matter (from the PNAS study in the second link) if you’re a man or a woman: the presence of a female name changes the level of respect attached to a work and also the level of reward/appreciation offered in an assessment process. There are similar works that clearly identify that this problem is even worse for People of Colour. (Look up Intersectionality if you don’t know what I’m talking about.) I’m not saying that all of these people are trying to discriminate but the evidence we have says that the social conditioning that leads to sexism is powerful and dominating.
Now let’s get back to the panels. The first panel was “Female Characters in Video Games”, with Andrea Stewart, Maurine Starkey, Annalee Flower Horne, Lauren Roy and Tanglwyst de Holloway. While discussing the growing market for female characters, the panel identified the ongoing problems and discrimination against women in the industry. 22% of professionals in the field are women, which sounds awful until you realise that this figure was 11% in 2009. However, Maurine had had her artwork recognised as “great” when someone thought it was a man’s and as “oh, drawn like a woman” when the true creator was revealed. And this is someone being explicit. The message of the panel was very positive: things were getting better. However, it was obvious that knowing someone was a woman changed how people valued their work or even how their activities were described. “Casual gaming” is often a term that describes what women do; if women take up a gaming platform (and they are a huge portion of the market) then it often gets labelled “casual gaming”.
So, point 1, assessing work at a professional level is apparently hard to do objectively when we know the gender of people. Moving on.
The first panel on Friday dealt with AfroFuturism, which looks at the long-standing philosophical and artistic expression of alternative realities relating to people of African descent. This can be traced from the Egyptian origins of mystic and astrological architecture and religions, through the tribal dances and mask ceremonies of other parts of Africa, to the P-Funk mothership and science-fiction works published in the middle of vinyl albums. There are strong notions of carving out or refining identity in order to break oppressive narratives and re-establish agency. AfroFuturism looks into creating new futures and narratives, also allowing for reinvention to escape the past, which is a powerful tool for liberation. People can be put into boxes and they want to break out to liberate themselves and, too often, if we know that someone can be put into a box then we have a nasty tendency (implicit cognitive bias) to jam them back in. No wonder AfroFuturism is seen as a powerful force: it is an assault on the whole mean, racist narrative that does things like call groups of white people “protesters” or “concerned citizens”, and groups of black people “rioters”.
(If you follow me on Twitter, you’ve seen a fair bit of this. If you’re not following me on Twitter, @nickfalkner is the way to go.)
So point 2, if we know someone’s race, then we are more likely to enforce a narrative that is stereotypical and oppressive when we are outside of their culture. Writers inside the culture can write to liberate and to redefine identity and this probably means we need to see more of this.
I want to focus on the final panel, “Saving the World through Science Fiction: SF in the Classroom”, with Ben Cartwright, Ajani Brown (again!), Walidah Imarisha and Charlotte Lewis Brown. There are many issues facing our students on a day-to-day basis and it can be very hard to engage with some of them because it is confronting to have to address your own biases when you talk about the real world. But you can talk about racism with aliens, xenophobia with a planetary invasion, the horrors of war with apocalyptic fiction… and it’s not the nightly news. People can confront their biases without confronting them. That’s a very powerful technique for changing the world. It’s awesome.
Point 3, then, is that narratives are important and, with careful framing, we can discuss very complicated things and get away from the sheer weight of biases and reframe a discussion to talk about difficult things, without having to resort to violence or conflict. This reinforces Point 2, that we need more stories from other viewpoints to allow us to think about important issues.
We are a narrative and a mythic species: storytelling allows us to explain our universe. Storytelling defines our universe, whether it’s religion, notions of family or sense of state.
What I take from all of these panels is that many of the stories that we want to be reading, that are necessary for the healing and strengthening of our society, should be coming from groups who are traditionally not proportionally represented: women, People of Colour, Women of Colour, basically anyone who isn’t recognised as a white man in the Western Tradition. This isn’t to say that everything has to be one form but, instead, that we should be putting systems in place to get the best stories from as wide a range as possible, in order to let SF&F educate, change and grow the world. This doesn’t even touch on the Matthew Effect, where we are more likely to positively value a work if we have an existing positive relationship with the author, even if said work is not actually very good.
And this is why, with all of the evidence we have of cognitive biases changing the way people think about work based on the name, the most likely approach to improving the range of stories that we end up publishing is to judge as many works as we can without knowing who wrote them. If we wanted to take it further, we could even ask people to briefly explain why they did or didn’t like a piece. The comments on the Jezebel author’s manuscript show that feedback of that kind lets us clearly identify a bias in play. “It’s not for us” and things like that are not sufficiently transparent for us to see if the system is working. (Apologies to the hard-working editors out there, I know this is a big demand. Anonymity is a great start. 🙂 )
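To make the suggestion concrete, here is a minimal sketch of how a submissions pipeline could keep identity separate from the work until after review. This is my illustration only: the field names and the uuid-based anonymous ID are assumptions, not drawn from any real editorial system.

```python
import uuid

def split_for_blind_review(submission):
    """Split a submission into identifying metadata and the work itself,
    joined by an anonymous ID so the editor can re-unite them later."""
    anon_id = uuid.uuid4().hex
    identity = {
        "anon_id": anon_id,
        "author": submission["author"],
        "email": submission["email"],
    }
    # Reviewers only ever see this part.
    for_reviewers = {
        "anon_id": anon_id,
        "title": submission["title"],
        "text": submission["text"],
    }
    return identity, for_reviewers

identity, blind = split_for_blind_review({
    "author": "A. Writer",
    "email": "a.writer@example.com",
    "title": "A Story",
    "text": "Once upon a time...",
})
```

Reviewers’ brief written feedback could then be filed against the anonymous ID, giving exactly the kind of transparent, bias-checkable record argued for above.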
Now, for some books and works you have to know who wrote them; my textbook, for example, depends upon my academic credentials and my published work, hence my identity is a part of the validity of academic work. But for short fiction, for books? Perhaps it’s time to look at all of the evidence, and at all of the efforts to widen the range of voices we hear, and consider a commitment to anonymous review so that SF&F will be a powerful force for thought and change in the decades to come.
Thank you to all of the amazing panellists. You made everyone think and sent out powerful and positive messages. Thank you, so much!
Edit: As mentioned above, Walidah and I had a discussion that extended from this on Twitter. Walidah’s point was about changing the system so that we no longer have to hide identity to eliminate bias and I totally agree with this. Our goal has to be to create a space where bias no longer exists, where the assumption that the hierarchical dominance is white, cis, straight and male is no longer the default. Also, while SF&F is a great tool, it does not replace having the necessary and actual conversations about oppression. Our goal should never be to erase people of colour and replace them with aliens and dwarves just because white people don’t want to talk about race. While narrative engineering can work, many people do not transfer the knowledge from analogy to reality and this is why these authentic discussions of real situations must also exist. When we sit purely in analogy, we risk reinforcing inequality if we don’t tie it back down to Earth.
I am still trying to attack a biased system to widen the narrative to allow more space for other voices but, as Walidah notes, this is catering to the privileged, rather than empowering the oppressed to speak their stories. And, of course, talking about oppression leads those on top of the hierarchy to assume you are oppressed. Walidah mentioned Katherine Burdekin & Swastika Nights as part of this. Our goal must be to remove bias. What I spoke about above is one way but it is very much born of the privileged and we cannot lose sight of the necessity of empowerment and a constant commitment to ensuring the visibility of other voices and hearing the stories of the oppressed from them, not passed through white academics like me.
Seriously, if you can read me OR someone else who has a more authentic connection? Please read that someone else.
Walidah’s recent work includes, with adrienne maree brown, editing the book of 20 short stories I have winging its way to me as we speak, “Octavia’s Brood: Science Fiction Stories from Social Justice Movements” and I am so grateful that she took the time to respond to this post and help me (I hope) to make it stronger.
Let me start by putting up a picture of some people celebrating!
My first confession is that the ‘acceptance’ I’m talking about is for academic and traditional fiction publishing. The second confession is that I have attempted to manipulate you into clicking through by using a carefully chosen title and presented image. This is to lead off with the point I wish to make today: we are a mess of implicit and explicit cognitive biases and to assume that we have anything approximating a fair evaluation mechanism to get work published is to, sadly, be making a far reaching assumption.
If you’ve read this far, my simple takeaway is “If people don’t even start reading your work with a positive frame of mind and a full stomach, your chances of being accepted are dire.”
If you want to hang around, my argument is simple. I’m going to demonstrate that, for much simpler assessments than research papers or stories, simple cognitive biases have a strong effect. I’m going to follow this by indicating how something as simple as how hungry you are can affect your decision making. I’m then going to identify a difference between scientific and non-scientific publishing in terms of feedback, and why expecting that we will continue to get good results from both approaches is probably too optimistic. I’m going to make some proposals as to how we might start thinking about a fix, but only to start discussion, because my expertise in non-academic publishing is not all that deep and is limited by my not being an editor or publisher!
[Full disclosure: I am happily published in academia but I am yet to be accepted for publication in non-academic approaches. I am perfectly comfortable with this so please don’t read sour grapes into this argument. As you’ll see, with the approaches I propose, I would in fact strip myself of some potential bias privileges!]
I’ve posted before on an experiment where the only change to the qualifications of a prospective lab manager was to take the name from male to female. The ‘female’ version of this CV got offered less money, less development support and was ‘obviously’ less qualified. And this effect occurred whether the assessor was a man or a woman. This is pretty much the gold standard for experiments of this type because it removed any possibility of someone acting out of character because they knew what the experiment was trying to prove. There’s a lot of discussion in fiction at the moment about gendered bias, as well as in academia. You’re probably aware of the Bechdel Test, which simply asks if there are two named women in a film who talk to each other about something other than men, and how often the mainstream media fails that test. But let’s look at something else. Antony LaPaglia tells a story that he used to get pulled up on his American accent whenever anyone knew that he was Australian. So he started passing as American. Overnight, complaints about his accent went away.
Compared to assessing a manuscript, reading a CV, bothering to write two named women into a story, and spotting an accent are trivial tasks, and yet we can’t get these right without bias.
There’s another phenomenon called the Matthew Effect, which basically says that the more you have, the more you’re going to get (terrible paraphrasing). Thus, the first paper in a field will be one of its most cited, people are comfortable giving opportunities to those who have used them well before, and so on. It even shows up in graph theory, where the first things to be connected together tend to become the most connected!
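The graph-theory version of this is usually modelled as preferential attachment. Here’s a minimal sketch (my own illustration, not from any cited study): each new node links to an existing node chosen with probability proportional to that node’s current degree, and the earliest arrivals tend to end up the best connected.

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a graph where each new node attaches to an existing node
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # start with two connected nodes
    targets = [0, 1]               # each node appears once per unit of degree
    for new in range(2, n_nodes):
        old = rng.choice(targets)  # a uniform pick here IS degree-proportional
        degree[old] += 1
        degree[new] = 1
        targets.extend([old, new])
    return degree

degrees = preferential_attachment(10_000)
# Sort nodes by final degree: the top of the list is typically
# dominated by the earliest node IDs -- the Matthew Effect in miniature.
best_connected = sorted(degrees, key=degrees.get, reverse=True)[:10]
```

The trick of keeping one list entry per unit of degree means a uniform random choice from `targets` is exactly a degree-weighted choice, with no explicit probability calculation needed.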
So, we have lots of examples of bias creeping in whenever we know enough about someone for the bias to engage. And, for most people, who aren’t trying to be discriminatory, it’s completely unconscious. Really? You don’t think you’d notice?
Let’s look at the hunger argument. An incredible study (Economist link for summary) shows that Israeli judges are less likely to grant parole the longer it has been since they last ate, even when taking other factors into account. Here’s a graph. Those big dips are meal breaks.
When confronted with that terrifying graph, the judges were totally unaware of the pattern. The people in the court every day hadn’t noticed it. The authors of the study looked at a large number of factors and found some things that you’d expect in terms of sentencing, but the meal-break plunges surprised everyone because nobody had ever thought to look for them. The good news is that, most days, the most deserving will still get paroled. But, and it’s a big but, you still have to wonder about the people who should have been given parole but were denied because of timing, and also about the people who were paroled who perhaps should not have been.
So what distinguishes academia and non-academic publishing? Shall we start by saying that, notionally, many parts of academic publishing subscribe to the Popperian model of development where we expose ideas to our colleagues and they tear at them like deranged wolves until we fashion truth? As part of that, we expect to get reviews from almost all submissions, whether accepted or not, because that is how we build up academic consensus and find out new things. Actual publication allows you to put your work out to everyone else where they can read it, work with it or use it to fashion a counter-claim.
In non-academic publishing, the publisher wants something that is saleable in the target market and the author wants to provide this. The author probably also wants to make some very important statements about truth, beauty, the lizard people or anything else (much as in academic publishing, the spread of ideas is crucial). However, from a publisher’s perspective, they are not after peer-verified work of sufficient truth, they are after something that matches their needs in order to publish it, most likely for profit.
Both are directly or indirectly prestige markers and often carry some form of financial reward, as well as some truth/knowledge-construction function. Non-academic authors publish to eat; academic authors publish to keep their jobs or get tenure (often enough to allow you to eat). But the key difference is the way that feedback is given: an academic journal that gave no feedback would have trouble staying in business (unless it already had incredible acceptance, see the Matthew Effect) because we’re all notionally building knowledge. But “no feedback” is the default in other publishing.
When I get feedback academically, I can quickly work out several things:
- Is the reviewer actually qualified to review my work? If someone doesn’t have the right background, they start saying things like “surely…” when they mean “I don’t know”, which quickly tells you that this review will be uninformative.
- Has the reviewer actually read the work? I would ask all the academics reading this to send me $1 if they’ve ever been told to include something that is obviously in the paper and takes up 1-2 pages already, except I am scared of the tax and weight implications.
- How the feedback can be useful. Good feedback is great: it spots holes, it reinforces bridges, it suggests new directions.
- Whether I want to publish in that venue again. If someone can’t organise their reviewers and oversee the reviews properly, I’m not going to get what I need to do good work, and I should go and publish elsewhere.
My current exposure to non-academic publishing has been: submit story, wait, get rejection. Feedback? “Not suitable for us but thank you for your interest”, “not quite right for us”, “I’m going to pass on this”. I should note that the editors have all been very nice and timely (scarily so, in some cases), and all of my interactions have been great – my problem is mechanistic, not personal. I should clearly state that I assume point 1 from above holds for all non-academic publishing, that is, that the editors have chosen someone to review in a genre that they don’t actually hate and know something about. So 1 is fine. But 2 is tricky when you get no feedback.
But that tricky #2, “Has the reviewer actually read the work?”, in the context of my previous statements really becomes “HOW has the reviewer read my work?” Is there an informal ordering, from people you think you’ll enjoy down to newbies, even an unconscious one? How hungry is the reviewer when they’re working? Do they clear up ‘simple checks’ just before lunch? In the absence of feedback, I can’t assess the validity of the mechanism. I can’t improve the work with no feedback (step 3), and I’m now torn as to whether this story was bad for a given venue or whether my writing is just so awful that I should never darken their door again! (I accept, dear reader, that this may just be the sad truth and they’re all too scared to tell me.)
Let me remind you that implicit bias is often completely unconscious and many people are deeply surprised when they discover what they have been doing. I imagine that there are a number of reviewers reading this who are quite insulted. I certainly don’t mean to offend but I will ask if you’ve sat down and collected data on your practice. If you have, I would really love to see it because I love data! But, if what you have is your memory of trying to be fair… Many people will be in denial because we all like to think we’re rational and fair decision makers. (Looks back at those studies. Umm.)
We can deal with some aspects of implicit bias by using blind review systems, where the reviewer sees only the work and we remove any clues as to who wrote it. In academia this can get hard because some people’s stylistic signature is so easy to spot, but it is still widely used. (I imagine it’s equally hard for well-known writers.) This will, at least, remove gender bias and potentially reduce the impact of “famous people”, unless they are really distinctive. I know that a blinding process isn’t happening in all parts of non-academic publishing because my name is all over my manuscripts. (I must note that there are places, such as Andromeda Spaceways Inflight Magazine and Aurealis, that use blind submission for the initial reading, which is a great start.) Usually, when I submit, my covering letter has to clearly state my publication history. This is the very opposite of a blind process because I am being asked to rate myself for Matthew Effect scaling every time I submit!
(There are also some tips and tricks in fiction, where your rejections can be personalised, yet contain no improvement information. This is still “a better rejection” but you have to know this from elsewhere because it’s not obvious. Knowing better writers is generally the best way to get to know about this. Transparency is not high, here.)
The timing one is harder because it requires two things: multiple reviewers and a randomised reading schedule, neither of which fits the shoestring budgets and volunteer workforces associated with much of fiction publishing. Ideally, an anonymised work gets read two or three times, at different times of day and at different times relative to meals, taking into account the schedule of the reader. Otherwise, that last manuscript you reject before rushing home at 10pm to reheat a stale bagel? It would have to be Hemingway to get accepted. And good Hemingway at that.
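The mechanics of this are simple enough to sketch. Here’s a minimal, assumption-laden illustration (function and reviewer names are mine, not any publisher’s process): strip identities to opaque codes, hand each work to several randomly chosen reviewers, and shuffle each reviewer’s reading order so no submission always lands in the pre-lunch slot.

```python
import random
import uuid

def schedule_reads(manuscripts, reviewers, reads_per_ms=3, seed=None):
    """Assign each anonymised manuscript to several reviewers and
    shuffle every reviewer's reading order. Purely illustrative."""
    rng = random.Random(seed)
    # Replace identifying titles/authors with opaque codes.
    anon = {uuid.uuid4().hex[:8]: ms for ms in manuscripts}
    schedule = {r: [] for r in reviewers}
    for code in anon:
        # Each work goes to reads_per_ms distinct, randomly chosen reviewers.
        for r in rng.sample(reviewers, reads_per_ms):
            schedule[r].append(code)
    for r in schedule:
        rng.shuffle(schedule[r])  # randomise each reviewer's reading order
    return anon, schedule

anon, schedule = schedule_reads(
    [f"story {i}" for i in range(10)],
    ["reader A", "reader B", "reader C", "reader D"],
    reads_per_ms=3,
    seed=1,
)
```

Even this toy version delivers the two properties argued for above: no reviewer knows whose work they hold, and no manuscript is systematically stuck at the hungry end of anyone’s queue.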
And I’d like to see randomised reading applied across academic publishing as well, with work re-reviewed until we actually reach a consensus. I was on a review panel recently where the same paper got two ‘accepts’, two ‘mehs’ and two ‘kill it with fire’. After group discussion, we settled on ‘a weak accept/strong meh’. Why? Because the two people who had rated it right down weren’t really experts, so they didn’t recognise what was going on. Why were they reviewing? Because it’s part of the job. So don’t think I’m only going after non-academic publishing here. I’m exposing problems in both because I want to try to fix both.
But I do recognise that the primary job of non-academic publishing is getting people to read the publication, which means targeting saleable works. Can we do this in a way that is more systematic than “I know good writing when I see it”? Because (a) that doesn’t scale, and (b) the chance of such judgements aligning across more than two people is tiny.
This is where technological support can be invaluable. Word counting, spell checking and primitive grammar checking are all the dominion of the machine, as is plagiarism detection against existing published works. So step one is a brick wall that says “This work has not been checked against our submission standards: the problems are…”, and this need not involve a single human (unless you are trying to spellcheck The Shugenkraft of Berzxx, in which case have a tickbox for ‘heavy use of neologisms and accents’). Plagiarism detection is becoming more common in academic writing, and it saves a lot of time because you don’t spend it reading lifted work. (I once read something that was really familiar and realised someone had sent me some of my own work with their name on it. Just… no.)
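That brick-wall step is almost trivial to automate. Here’s a hedged sketch of what such a gate might look like; every threshold, name and the crude neologism screen are my own inventions, not any publisher’s actual standards.

```python
import re

# Illustrative house limits; every number here is an assumption.
MIN_WORDS = 500
MAX_WORDS = 5_000

def submission_gate(text, allow_neologisms=False):
    """Return a list of problems, or [] if the manuscript may proceed
    to a human reader. A sketch of the automated 'brick wall' only."""
    problems = []
    words = re.findall(r"[A-Za-z']+", text)
    if len(words) < MIN_WORDS:
        problems.append(f"Too short: {len(words)} words (minimum {MIN_WORDS})")
    if len(words) > MAX_WORDS:
        problems.append(f"Too long: {len(words)} words (maximum {MAX_WORDS})")
    if not allow_neologisms:
        # Crude screen for garbled text: long runs of hard consonants.
        odd = [w for w in words if re.search(r"[bcdfgkpqvxz]{4}", w.lower())]
        if odd:
            problems.append(f"Possible garbled words: {sorted(set(odd))[:5]}")
    return problems
```

The `allow_neologisms` flag is the tickbox from the paragraph above: authors of invented-language fiction opt out of the one check that would falsely reject them, while everything else still runs.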
What we want is to go from a flood, to a river, then to manage that river and direct it to people who can handle a stream at a time. Human beings should not be the cogs and failure points in the high volume non-academic publishing industry.
Stripping names, anonymising and randomly distributing work is fairly important if we want to remove time biases. Even the act of blinding and randomising reduces the chance that the same people always get the same good or bad slots; it makes us at least partially systematic. Almost everyone in the industry is overworked, doing vast and wonderful things, and, in the face of that, tired and biased behaviour becomes more likely.
The final thing that would be useful is something along the lines of a floating set of checkboxes that travels with the document, if it’s electronic. (On paper, have a separate sheet that you can scan in once it’s filled in and then automatically extract the information.) What do you actually expect? What is this work/story not giving you? Is it derivative? Is it all talk and no action? Is it too early and just doesn’t go anywhere? Separating documents from any form of feedback automation (or expecting people to type sentences) is going to slow things down and make feedback impractical. Every publishing house has a list of things not to do; let’s start with the ten worst of those and see how many more we can fit on the feedback screen.
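A minimal sketch of such a travelling checklist might look like the following. The box labels, class and field names are all invented examples for illustration, not any publisher’s real list.

```python
from dataclasses import dataclass, field

# Invented example labels -- a real house would use its own "worst ten".
CHECKBOXES = [
    "Derivative of well-known work",
    "All talk, no action",
    "Ends before it goes anywhere",
    "Not a fit for our current issue",
    "Opening did not hook the reader",
]

@dataclass
class ReadReport:
    """Feedback that stays attached to one anonymised manuscript."""
    manuscript_id: str
    decision: str                        # "accept" or "reject"
    ticked: list = field(default_factory=list)

    def tick(self, label):
        if label not in CHECKBOXES:
            raise ValueError(f"Unknown checkbox: {label}")
        self.ticked.append(label)

    def feedback_email(self):
        """Render the decision plus ticked boxes as the minimal
        feedback the author receives."""
        lines = [f"Decision: {self.decision}"]
        lines += [f"- {t}" for t in self.ticked] or ["- (no notes)"]
        return "\n".join(lines)
```

Because ticking a box and sending the decision are the same object, the feedback costs the reader a click or two rather than a typed paragraph, which is the whole point of the proposal.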
I am thinking of an approach that makes feedback an associated act of reading, so it can then be sent, with the accept or reject, in the same action. Perhaps it has already been created and is in use in fine publishing houses, but my work hasn’t hit a bar where I even get that feedback? I don’t know. I can see that distributed editorial boards, like Andromeda’s, are obviously taking steps down this path because they have had to get good at shunting material around at scale, and I would love to know how far they’ve got. For me, a mag that said “We will always give you even a little bit of feedback” would probably get all of my stuff first. (Not that they want it, but you get the idea.)
I understand completely that publishers are under no obligation whatsoever to do this. There is no right to feedback nor is there an expectation outside of academia. But if we want good work, then I think I’ve already shown that we are probably missing out on some of it and, by not providing feedback, some (if not many) of those stories will vanish, never worked on again, never seen again, because the authors have absolutely no guidance on how to change their work.
I have already discussed mocking up a system, building from digital humanist approaches and using our own expertise, with one of my colleagues and we hope to start working on something soon. But I’d rather build something that works for everyone and lets publishers get more good work, authors recognised when they get it right, and something that brings more and more new voices into the community. Let me know if it’s already been written or take me to school in the comments below. I can’t complain about lack of feedback and then ignore it when I get it!
Corinne A. Moss-Racusin et al., PNAS, vol. 109, no. 41, pp. 16474–16479, doi: 10.1073/pnas.1211286109
Shai Danziger et al., PNAS, vol. 108, no. 17, pp. 6889–6892, doi: 10.1073/pnas.1018033108
I’ve been doing a lot of reading recently on the classification of knowledge, the development of scientific thinking, the ways different cultures approach learning, and the relationship between myths and science. Now, some of you are probably wondering why I can’t watch “Agents of S.H.I.E.L.D.” like a normal person, but others of you have already started to shift uneasily because I’ve talked about a relationship between myths and science, as if we do not consider science to be the natural successor to preceding myths. Well, let me go further. I’m about to start drawing on thinking about myths and science, and even on how myths teach us about the importance of evidence, the foundation of science, but for their own purposes.
Because much of what we face as opposition in educational research consists of pre-existing stereotypes and misconceptions that people employ where there is a lack of evidence (and sometimes in the face of it). Yet this collection of beliefs is powerful because it prevents people from adopting verified and validated approaches to learning and teaching. What can we call these? Are they myths? What do I even mean by that term?
It’s important to realise that the use of the term myth has evolved from earlier, rather condescending, classifications of any culture’s pre-scientific thinking as dismissively primitive and unworthy of contemporary thought. This is a rich topic by itself, but let me refer to Claude Lévi-Strauss and his identification of myth as a form of thinking and classification, rather than simple story-telling, and thus proto-scientific rather than anti-scientific. This includes rejecting the knee-jerk classification of less scientifically advanced peoples as emotional and practical, rather than (or even capable of) being intellectual. By moving myth forms onto an intellectual footing, Lévi-Strauss allows a non-pejorative assessment of their potential value. I note that I have done the study of mythology a grave disservice with such an abbreviated telling. Further reading to understand precisely what Lévi-Strauss was refuting could involve Tylor, Malinowski, and Lévy-Bruhl.
In many situations, we consider myth and folklore as the same thing, from a Western post-Enlightenment viewpoint, only accepting those elements that we can validate. Thus, we choose not to believe that Olympus holds the Greek Pantheon as we cannot locate the Gods reliably, but the pre-scientific chewing of willow bark to relieve pain was validated once we constructed aspirin (and willow bark tea). It’s worth noting that the early location of willow bark as part of its scientific ‘discovery’ was inspired by an (effectively random) approach called the doctrine of signatures, which assumed that the cause and the cure of diseases would be located near each other. The folkloric doctrine of signatures led the explorers to a plant that tasted like another one but had a different use.
Myth, folklore and science, dancing uneasily together. Does this mean that what we choose to call myth now may or may not be myth in the future? We know that to use something, to recommend it, in our endorsed academic context usually requires that it become science. But what is science?
Karl Popper’s (heavily summarised) view is that we have a set of hypotheses that we test to destruction and this is the foundation of our contemporary view of science. If the evidence we have doesn’t fit the hypothesis then we must reject the hypothesis. When we have enough evidence, and enough hypotheses, we have a supported theory. However, this has a natural knock-on effect in that we cannot actually prove anything, we just have enough evidence to support the hypothesis. Kuhn (again, heavily summarised) has a model of “normal science” where there is a large amount of science as in Popper’s model, incrementing a body of existing work, but there are times when this continuity gives way to a revolutionary change. At these times, we see an accumulation of contradictory evidence that illustrates that it’s time to think very differently about the world. Ultimately, we discover the need for a new coherency, where we need new exemplars to make the world make sense. (And, yes, there’s still a lot of controversy over this.)
Let me attempt to bring this all together, finally. We, as humans, live in a world full of information, and some of it, even in our post-scientific world, we incorporate into our lives without evidence, while some we need evidence to accept. Do you want some evidence that we live our lives without, or even in spite of, evidence? The median length of a marriage in the United States is 11 years, and 40–50% of marriages will end in divorce, yet many still swear ‘until death do us part’ or ‘all of my days’. But the myth of ‘marriage forever’ is still powerful. People have children, move, buy houses and totally change their lives based on this myth. The actions that people take here have a significant impact on the world around them, and yet they seem at odds with the evidence. (Such examples are not uncommon and, in a post-scientific-revolution world, must force us to question earlier suggestions that myth-based societies move seamlessly to a science-based intellectual utopia. This is why Lévi-Strauss is interesting to read. Our evidence is that our evidence is not sufficient evidence, so we must seek to better understand ourselves.) Even those components of our shared history and knowledge that are constructed to be based on faith, such as religion, understand how important evidence is to us. Let me give an example.
In the fourth book of the New Testament of the Christian Bible, the Gospel of John, we find the story of the Resurrection of Lazarus. Lazarus is sick and Jesus Christ waits until he dies to go to where he is buried and raise him. Jesus deliberately delays because the glory to the Christian God will be far greater and more will believe, if Lazarus is raised from the dead, rather than just healed from illness. Ultimately, and I do not speak for any religious figure or God here, anyone can get better from an illness but to be raised from the dead (currently) requires a miracle. Evidence, even in a book written for the faithful and to build faith, is important to humans.
We also know that there is a very large amount of knowledge that is accepted as being supported by evidence but the evidence is really anecdotal, based on bias and stereotype, and can even be distorted through repetition. This is the sea of confusion that we all live in. The scientific method (Popper) is one way that we can try to find firm ground to stand on but, if Kuhn is to be believed, there is the risk that one day we stand on the islands and realise that the truth was the sea all along. Even with Popper, we risk standing on solid ground that turns out to be meringue. How many of these changes can one human endure and still be malleable and welcoming in the face of further change?
Our problem with myth arises when it forces us to reject something that we can demonstrate to be both valuable and scientifically valid, because, right now, the world that we live in is constructed on scientific foundations and coherence is maintained by adding to those foundations. Personally, I don’t believe that myth and science have to be at odds (many disagree with me, Richard Dawkins among them, of course); they already co-exist in ways that actively shape society, for both good and ill.
Recently I made a comment on MOOCs that contradicted something someone said and I was (quite rightly) asked to provide evidence to support my assertions. That is the post before this one and what you will notice is that I do not have a great deal of what we would usually call evidence: no double-blind tests, no large-n trials with well-formed datasets. I had some early evidence of benefit, mostly qualitative and relatively soft, but, and this is important to me, what I didn’t have was evidence of harm. There are many myths around MOOCs and education in general. Some of them fall into the realm of harmful myths, those that cause people to reject good approaches to adhere to old and destructive practices. Some of them are harmful because they cause us to reject approaches that might work because we cannot find the evidence we need.
I am unsurprised that so many people adhere to folk pedagogy, given the vast amounts of information out there and the natural resistance to rejecting something that you think works, especially when someone sails in and tells you that you’ve been wrong for years. The fact that we are still discussing the nature of myth and science gives insight into how complicated this issue is.
I think that the path I’m on could most reasonably be called that of the mythographer, but the cataloguing of the edges of myth and the intersections of science is not in order to condemn one or the other but to find out what the truth is to the best of our knowledge. I think that understanding why people believe what they believe allows us to understand what they will need in order to believe something that is actually, well, true. There are many articles written on this, on the difficulty of replacing one piece of learning with another and the dangers of repetition in reinforcing previously-held beliefs, but there is hope in that we can construct new elements to replace old information if we are careful and we understand how people think.
We need to understand the delicate relationships between myth, folklore and science, our history as separate and joined peoples, if only to understand when we have achieved new forms of knowing. But we also need to be more upfront about when we believe we have moved on, including actively identifying areas that we have labelled as “in need of much more evidence” (such as learning styles, for example) to assist people in doing valuable work if they wish to pursue research.
I’ll go further. If we have areas where we cannot easily gain evidence, yet we have competing myths in that space, what should we do? How do we choose the best approach to achieve the most effective educational outcomes? I’ll let everyone argue in the comments for a while and then write that as the next piece.