Sunday, April 11, 2010

Experiments and Education

I want to make a few critical points in response to columnist Kevin Baldeosingh’s Express article, “Experimental Sex Education”, of Friday April 9th. But first I want to point interested parties to Visible Learning by John Hattie, a synthesis of over 800 meta-analyses representing over 50,000 studies related to student achievement. Of the 138 factors that he lists as key influences on student learning, gender ranks 122nd in terms of effect size (d = 0.12), while variables relating to what teachers actually do in their classrooms make up more than half of the top twenty (d = 0.61–1.44). In this piece, though, I want to explain why it is extremely difficult to run a true experimental design in education, and even more so in an educational climate where administration, governance and policy are made by vaps, as occurs frequently in T&T.
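A quick aside for readers unfamiliar with effect sizes, since those d values carry the argument: Hattie’s d is, in essence, Cohen’s d, the difference between two group means expressed in standard-deviation units:

d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{\text{pooled}}}

On that scale, d = 0.12 says the two groups differ by roughly one eighth of a standard deviation, while d = 0.61–1.44 corresponds to differences of roughly two-thirds of a standard deviation up to well over a full one.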

The medical ‘gold standard’ of controlled, randomly assigned, double-blind experiments is inappropriate for education. Neither teachers nor students are randomly assigned to secondary schools in T&T, nor is it feasible to do so at present. Even among the entrants to ‘prestige’ schools, the legacy mechanism of selecting the 20% confounds the assumption of a randomly selected, ‘statistically similar’ population in terms of achievement.

Secondly, in a densely networked place such as T&T, or in any small community where everybody talks, any group receiving a ‘placebo’ treatment would likely figure this out quickly. Also, as is already the case with the media attention given to the potential decision to pilot a shift from co-ed to single-sex schools, participants who know they are part of a study are likely to alter actions and behaviours relevant to their learning simply because they are aware of being studied. His suggestion to convert a single-sex government school to a co-ed one is prone to this critique.

So we cannot have randomly assigned, double-blind, or perhaps even single-blind experimental designs in education. Now what about controls? There is no way to ‘control’, or perhaps even to list, all of the variables that might affect learning during a study; Hattie’s 138, culled from the quantitative literature, are a good start. In T&T the phenomenon of extra lessons, as well as (lack of) homework assistance/supervision at home, would confound any attempt at ‘proving’ that a perhaps not well understood construct like gender, and the decision to separate ‘girls’ from ‘boys’, ‘results’ in greater achievement. The warrant to support the claim would simply be too weak.
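For the statistically inclined, here is a minimal sketch in Python of what I mean by confounding; every number in it is invented purely for illustration. In the simulation, school type has no effect on scores at all, yet a naive comparison of group means ‘finds’ one, because extra lessons are unevenly distributed between the two groups:

import random
import statistics

random.seed(1)

def simulate_student(school_type):
    # Invented assumption for this sketch: students at 'single-sex' schools are
    # more likely to take extra lessons, and extra lessons (not school type)
    # raise scores.
    p_extra = 0.7 if school_type == "single-sex" else 0.3
    extra_lessons = random.random() < p_extra
    score = random.gauss(50, 10) + (8 if extra_lessons else 0)  # no school-type term
    return extra_lessons, score

students = [(t, *simulate_student(t))
            for t in ("single-sex", "co-ed") for _ in range(5000)]

def mean_score(school_type=None, extra=None):
    return statistics.mean(s for t, e, s in students
                           if (school_type is None or t == school_type)
                           and (extra is None or e == extra))

# Naive comparison: looks as if 'single-sex schooling works'.
print("single-sex:", round(mean_score("single-sex"), 1),
      " co-ed:", round(mean_score("co-ed"), 1))

# Stratify by the confound and the apparent effect largely vanishes.
for extra in (True, False):
    print("extra lessons:", extra,
          " single-sex:", round(mean_score("single-sex", extra), 1),
          " co-ed:", round(mean_score("co-ed", extra), 1))

Run as written, the naive comparison shows a gap of roughly three marks in favour of the ‘single-sex’ group, a gap that all but disappears once the comparison is made within the extra-lessons strata. No real data are involved; the point is only that an uncontrolled variable can do all the work.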

Now on to the ethical concerns of conducting ‘experiments’ on ‘other people’s children’ in a field as politically and emotionally charged as education. The most important one, in my opinion, is the way it robs children and teachers of any human agency and reduces the diversity and variability of human beings into the limited pigeon-holes of researcher-determined categories. In the first case, experimental design assumes a correspondence between changes in the manipulated (independent) variable(s) and the observed/measured (dependent) variable(s); since all other variables are kept constant (controlled), cause and effect can be established and one of several outcomes can be predicted. Experimental design also depends critically upon the assumption that the thing(s) being experimented on do not, cannot (or should not) act intentionally to alter the quality of the variables being investigated, or that such actions can be ignored, and it requires that agents’ histories have no bearing upon the experiment’s outcomes.

Experimental design, in other words, depends on ignorant, passive and essentially ahistorical agents. Learners do not meet these criteria. They are not inert bits of matter buffeted about solely by external forces, despite dominant discourses that continue to talk about increasing the numbers of some type of students in the pipeline, or the misnomer ‘brain drain’ for what is a more complex phenomenon. Nor are learners eternal captives of prior conditioning; rather, they are engaged in continuously construing and re-construing their experiences, testing new knowledge for ‘fit’ with prior experiences, expectations, future goals, desires, aversions and personal beliefs, and altering their actions and their environments. Prior experience or history plays a significant role, but it does not determine the complete landscape of future learning. What is learnt in any moment is unpredictable. To treat any learner, group of learners or learning system in this instrumental fashion raises profoundly disturbing ethical concerns.

Experiments are also especially good at generating waste in their pursuit of cause and effect. Many failed experiments precede the one that ‘works’, and being part of a failed experiment in a consequential area such as education is not what parents, teachers or students have signed up for. In other areas, medicine for example, risks, including death, are discussed with participants. Which researcher in our system would dare say that potential risks include lower achievement and failure to complete the mandated curriculum, even if the quality of what is learnt is improved? We talk only of potential benefits.

What systems, legislation and oversight are in place to protect students’, teachers’ and parents’ rights from researchers’ desire to know, whether those researchers are acting as proxies validating government policy or as academic entrepreneurs? I do know that in T&T some schools have developed their own in-house guidelines and policies for participation in research, though at times I feel these are being used to protect reputations (read prestige) from unfavourable or less than flattering findings, and that they limit the reporting of classroom-based research by teachers, another factor which robs policy makers of valuable data. Whether such policies are ‘legal’, however, remains to be tested.

While ethical research policies, like other educational policies, could be imposed from above without widespread stakeholder consultation, a more dialogical approach, coupled with the simultaneous development of the requisite institutional, infrastructural, legislative and enforcement capabilities, would likely create a better climate for the conduct and reporting of useful education research in T&T.

Finally, a brief comment on Mr. Baldeosingh’s ‘poke’ at the UWI School of Education’s Express column, that “99 per cent of the pedagogy in those articles was not based on any scientific research.” I don’t dispute this claim because, as I have outlined above, it is next to impossible to do true experimental research in education, and I think this is how he might have been defining ‘scientific’. I also don’t dispute the claim because, as a former contributor to that weekly column, I know that many of the articles weren’t about pedagogy; education isn’t only about teaching or method, after all. There was, though, an invisible pedagogy at work in the occasional reminders to write in a style appropriate to the format of a newspaper and the general “readership of the Express”, which I was told required a less academic, less theory- or research-heavy focus, and a more (albeit no less difficult to learn and master) clear, concise and convincing journalistic style.

Now, although I would prefer Kevin to spend his time on more important things, like unearthing corruption and mocking Ministerial malfeasance, I am calling his bluff on that invented, arbitrary and likely hyperbolic statistic. A simple apology to my former colleagues will suffice.
