The findings reveal some useful insights into students' engagement with the elements of disciplined enquiry. The global distinction between questions 1, 2 and 3 on the one hand and questions 4 and 5 on the other suggests that not all of the essay questions were viewed as equally attractive. That the first three questions (at the relational level in the SOLO Taxonomy) were answered more often than questions 4 and 5 (which were at the extended abstract level) possibly suggests that students were able to differentiate between questions which are more or less cognitively demanding. Such a conclusion is consistent with the raft of literature (such as Crooks, 1988; Ramsden, 1997; Gibbs, 1999) which finds that students are strategically sensitive to the intellectual demands of assessment tasks. However, while students can make global (and very possibly accurate) judgements as to the relative amount of cognitive engagement they need to invest in tasks which they construe as requiring 'deep' or 'surface' learning, the findings in this study make clear that they are not equally conversant with each of the elements of disciplined enquiry.
With the exception of question 3, there was significantly more evidence of disciplinary concepts than there was of analysis. In part this is not a surprising finding. In any learning, knowledge of concepts is of fundamental importance. If there is no evidence of knowledge, learning of the most essential kind has not taken place (Mayer, 1987), so it is desirable that students can show that they have grasped new knowledge, although only a minority of students showed their grasp of propositional knowledge to be comprehensive. In the essay questions which were posed there was an expectation that traditional, discipline-based theories and concepts in the study of motivation would be either applied or interpreted (Eraut, 1994) to demonstrate understanding of how psychological concepts in the study of motivation can illuminate professional issues and action. Regrettably, however, most of the students neither applied nor interpreted the knowledge but, instead, replicated it (Eraut, 1994). In other words, the knowledge used to answer the essay questions was frequently a reproduction of the materials in students' prescribed and recommended reading, and indeed this may account for the very different performance on question 3, which could, on reflection, be interpreted as a 'straight, theoretical piece' and for which the rehearsal of others' ideas and criticisms may well produce a competent response. This phenomenon of knowledge replication is, according to Eraut (1994), a significant feature of higher education (possibly because of its dominance in school) and is consistent with studies reviewed by Entwistle (1997), Kember (1998), Prosser & Trigwell (1999) and others which find that novice students, and even some finishing students, conceive of learning only as the accretion of knowledge. However, learning at the relational or extended abstract levels (Biggs and Collis, 1989) is not just a matter of learners passively adding to their existing bases of knowledge.
If learning is to be generative or transformative - such that it can be used to solve new problems and interpret new situations - students must analyse their knowledge: organising, synthesising, interpreting and evaluating its various pieces. It is to the indicator of analysis that the discussion now turns.