PRISM Journal, Volume 3, Issue 1 (2020)
https://doi.org/10.24377/prism.ljmu.0301206
© 2020 PRISM, ISSN: 2514-5347

Paper-work: what documents have to say about assessment practices

Judith Enriquez
School of Education, Liverpool John Moores University, Liverpool, UK (J.G.Enriquez@ljmu.ac.uk)

Received: 02/06/2020; Accepted for publication: 24/07/2020; Published: 04/09/2020

Abstract

Documents are usually circulated as carriers of transparent information. They can serve as evidence of accountability. In fact, they embody the most desired value of managerialism, where the culture of audit and compliance is fully served and delivered in written and textual form. This article explores assessment by attending to its principal instrument – the document – through which it is organised, monitored and implemented in higher education. It is an invitation to ‘see’ what documents, such as module guides, ‘do’ for universities and the assessment practices of academics. Under close scrutiny, documents ‘do’ more than record and transfer information. Their associated paper-work expresses and reproduces norms, patterns of thought and work habits that are accepted and assumed to be shared in the prevailing outcome-based assessment systems of higher education. This article provides a critical account based on practice-oriented and material-semiotic approaches to assessment. It bears witness to past and persistent norms and standards that are shaped by documents, paper-work, control, compliance and surveillance, and less by pedagogy and student engagement.

Keywords: social practice; document analysis; outcome-based assessment; Bloom's taxonomy; intended learning outcomes

1. Introduction

Assessment in higher education (HE) fulfils functions of certification on the one hand, and accountability for raising standards on the other. It is employed as a mechanism of transparency for external quality assurance based on a techno-rationalist perspective and positivist model of academic standards (Bloxham, 2012; Bloxham and Boyd, 2012; Bloxham, Boyd and Orr, 2011). This has been a dominant approach (Filer, 2000; Orr, 2005), which, unfortunately, has limited the goal of assessment to monitoring measurable outcomes that are quite contrary to the daily realities of teaching and learning. Consequently, assessment has become a socially decontextualised practice (Bloxham, 2009; Broadfoot and Black, 2004; Orr, 2005) and a mechanism of self and peer surveillance (Ecclestone, 1999a; 1999b). More recently, it has increasingly been conditioned and driven by the feedback factor of national student surveys and the general discourse on student engagement. As a result, assessment as a social practice remains under-examined (Filer, 2000; Boud et al., 2018). This article intends to address this lack of practice-based research by paying closer attention to the vital role of documents and their material performativity. That this remains relatively underexplored is surprising given the pivotal role that documents play as part of quality reviews and assessment practices. Undeniably, assessment is largely a written practice. In fact, instruction and certification would cease to exist without documents.
Assessment is unthinkable and not feasible without documents: moderation reports, marking criteria, programme descriptors, and written feedback. Moderation activities need to be documented in order to demonstrate to external examiners that the marking process was conducted with objective and transparent scrutiny; this process is also geared towards ensuring comparability of academic standards with other UK institutions. Thus, the practice of assessment becomes formal or official to the extent that it is documented, circulated and examined. In a variety of ways, documents ‘keep practices in order’. In fact, assessment practices are characterised and structured by the accumulation of written records as a way of quantifying and verifying organisational quality assurance (Freeman and Maybin, 2011). In short, assessment is almost always done on and with paper. It is this paper-work, the material force of assessment practices, that I would like to shine a light on.

To initiate and establish the role of document materiality and its performative energy in assessment practice, I draw from an array of disciplinary strands that have influenced my own academic writing. These include the fields of Science and Technology Studies (STS), in particular actor-network theory (Latour, 2005; Law, 2009), non-representational theory (Thrift, 2008), organisation studies (Orlikowski and Yates, 1994; 2002) and information and communication studies (Allen-Robertson, 2017; Drucker, 2013), as well as the notion of intertextuality from contemporary literary criticism.

Assessment as a product of measurement and transparency on the one hand, and a mechanism of managerialism in universities on the other, undermines its everyday practice and silences the power relations inherent in its ’paper-work’. This paper-work plays a vital role in the performativity of routines, that is, of constant reporting and recording; in short, the mapping and documentation of HE practices. As such, there are implicit assumptions and decisions contained in the documents (and documentation processes) where the standards and levels of activities are concerned; not least, the expectation – indeed requirement – that academic staff and students comply.

Documents are circulated and used as carriers of transparent information. As message carriers, they have the capacity and power to dictate and determine actions and instil particular views. In a performative regime, they are fabrications that institutions produce based upon one or more versions of representations that are written into existence as performative texts (Ball, 2003). In short, documents are able to extend the scope and reach of command and standardisation, making it possible to direct action over time and at a distance (Freeman and Maybin, 2011). The paper-work associated with documents serves not only as a medium for passive-aggressive bureaucratic practices, but also as a source of scapegoating for administrative control and performative compliance. Over time and for the sake of compliance, paper-work becomes a ritual of ‘opaque transparency’ (Orr, 2005) and conventional normality. This positivist, rationalist function of documents determines the terms of engagement for teaching and learning, student engagement and student-staff relations. Such doings undermine the very standards that assessment tries to uphold and the student-centredness that outcome-based assessment claims to promote.
This article is an invitation to pause and look closely at assessment and confront its documents. It is not only text and language that ‘stand in’ for the corporate consensus of the institution; the document, as an auditable act of performativity, also enshrines and inscribes this. It becomes and produces an evidence trail of accountability. In fact, the document in this sense embodies the most desired value of managerialism, where the culture of audit and compliance is fully served and delivered in written and textual form.

2. Documenting assessment practices

To examine the material performativity of documents, two key elements that dictate the design and standards of assessment are revisited: Bloom’s taxonomy and intended learning outcomes. The highly varied and fluid realities of assessment are made durable, fixed and circulated; more importantly, they are rendered transparent through the application of Bloom’s taxonomy, in what are ultimately levelled and de-politicised documents – in this case, module guides or syllabi. I have had no success in locating research that explores and probes how the judgements made in relation to marking, moderating and external examining are affected by Bloom’s taxonomy and intended learning outcomes. It is therefore a rather pressing matter that we attend to the documentation associated with assessment. We cannot fully understand assessment without critiquing its documentary framework. In this framing, documents ‘are treated as sources of authority and compliance, they are treated as carriers or vehicles of messages, communicating or reflecting official intentions, objectives, commitments, proposals, ‘thinking’, ideology and responses to external events’ (Freeman and Maybin, 2011, p. 157). They ‘do’ things too.

To this end, this article intends to contribute to the re-framing of assessment as social practice by examining the documentary realities that frame and regulate assessment practices. Practice is theorised with three main elements – norms, conventions and routines. Documents express and reproduce norms, patterns of thought, work habits and standards. Because their ordering effects are ordinarily hidden, usually accepted rather than discussed or explicitly communicated – even, and perhaps especially, to those who express them – they must be drawn out by detailed interpretation. In addition, the performativity of assessment is further analysed through document materiality. By this I mean that documents are analytically considered not only as mediators and vehicles of discipline and bureaucracy (Hull, 2012), but also as material objects that are constitutive in performing assessment practices. This matter is elaborated in the next section. Furthermore, the article is a reminder of the popular and persistent inscriptions of assessment documents that have significantly structured and influenced institutional norms without much critical reflection and inquiry. Studying the paper-work of assessment in its material specificity draws attention to the doings of documents and challenges the inscriptions (that is, scripts of standards, conventions and routines) that it circulates. The document analysis I employ here focuses on what is made to ‘matter more’ – the ‘paper-work’ that has somehow been afforded primacy over practice-oriented sensibilities.
Analysis includes a close reading of the documents themselves, but also includes developing an understanding of the ways in which documents refer to other documents as they are authored, produced, used and consumed. Here, intertextuality is useful in alerting us to the fact that documents are usually part of a network or system of relations. Documents circulate through organisational hierarchies, programmes, teaching sessions, and assessment practices. In doing so, they actively construct those practices, networks and organisations. Taking its lead from organisation studies, document analysis, in this regard, is not just interested in content analysis or in reading descriptions and inscriptions and applying a constant comparison method to derive textual categories or themes. More importantly, it is concerned with what purpose is being served (Orlikowski and Yates, 1994; 2002). In the discussion that follows, I simply claim that if we are interested in understanding the historical roots of specific concerns, dominant conceptions and governing conditions that potentially constrain innovative and alternative assessment practices, then we must pay attention to what is being done with documents and how assessment is presented through documents.

Document analysis allows us to probe hegemonic and taken-for-granted assessment practices and uncover unintended realities by focusing on the role documents play in the much-desired transparency and accountability of quality assurance and control. Documents as qualitative sources of knowledge have predominantly been framed as ‘vessels of content’, rather than as material objects in use (Coffey, 2014; Prior, 2008). Re-framing documents as ‘vehicles of action’ in their own right would also allow us to take into account, and to act upon, the consequences of their prescribed function or intent. In this case, what are we doing with our module guides or syllabi when their documentation enacts Bloom’s taxonomy and the intended learning outcomes of an educational initiative?

3. Documents as material objects

Documents are not merely representational artefacts. More importantly, they express and reproduce norms and patterns of work set by relations of power between institutional and social actors. Yet, although they attend closely to the requirements and structure of assessment, they say little about the work of documenting itself, except that standards are applied and met. Documents play significant roles in organisations and yet their realities are usually omitted in institutional inquiry or educational practice (Atkinson and Coffey, 1997). In fact, they promote particular educational ideologies and values and establish what norms and conventions must be followed. How documents of assessment work to assemble a putative reality is considered in more detail here, through the document analysis of 53 module guides, along with one particular university’s curriculum design guide and its related programme handbooks. All of the module guides within two particular degree programmes have a basic generic outline, which follows a prescribed university template. Using the template is an institutional requirement. A module guide must contain teaching staff information and a syllabus, which includes an outline of the module content, the aims and the learning outcomes of the module. The guides must also include a timetable of sessions as well as assessment details and submission and feedback dates.
The guides conclude with a list of essential and recommended references or resources. Learning outcomes in all of the guides apply Bloom’s taxonomy. This link to Bloom is explicitly found in the curriculum design guide of the university, where it was suggested that modules must refer to adaptations of Bloom’s taxonomy (i.e., Anderson and Krathwohl’s [2001] revised Bloom’s taxonomy) when creating the learning outcomes for each module. Furthermore, the influence of John Biggs’ (1996) constructive alignment concept was evident in the formulation of learning outcomes.

At this juncture, I ask the reader to trust the work that I have done here. I would like to relocate my own ‘paper-work’ away from our default mode of thinking and framing the representation of what research should look or read like, and momentarily suspend the either/or mental construct of what its representation should include or exclude. I am fully aware that I have made a deep cut into the psyche of curriculum development practice, best practice and what seems to be foundational to teacher education programmes and HE practices. I would not make such a deliberate act without evidence to back me up. However, the intent does not go as far as to ‘name and identify’ those involved, who could easily be revealed by the documents I have examined in this work. To fully detail the descriptions and content of the data that support the key claims of this article would be to ‘point fingers’ at those behind the documents, and I will not do that. The documents involved (e.g., module guides, programmes, assessment criteria, marking grids) are entangled with specific people. Inevitably, the ethics of this work must be upheld by not making explicit the structural elements of a ‘proper’ research article, with its introduction, method, analysis, discussion (or combined analysis/discussion) and conclusion sections. I disappoint with good intentions: what matters more is the work that has been done by the documents, and not so much whom they represent or how many.

We use documents to account for ourselves and what we do – to comply, to evidence, to justify and record. And yet, there is often little or no mention of the documentary realities of social practices, even though document studies have a long historical foundation within social science through the works of Foucault and Bourdieu (Coffey, 2014). The paper-work and documentary realities of assessment are explored by paying close attention to the unintended and yet repeatedly choreographed practices with documents. To facilitate a documentary intent, Law’s (2009) argument about collateral realities is enacted to draw attention to documents-at-work in the following ways:

First, attend to practices. Look to see what is being done. In particular, attend empirically to how it is being done: how the relations are being assembled and ordered to produce objects, subjects, and appropriate locations. Second, wash away the assumption that there is a reality out there beyond practice that is independent, definite, singular, coherent, and prior to that practice. Ask, instead, how it is that such a world is done in practice, and how it manages to hold steady. Third, ask how this process works to delete the way in which this sense of a definite exterior world is being done, to wash away the practices and turn representations into windows on the world.
Four, remember that wherever you look, whether this is a meeting hall, a talk, a laboratory, or a survey, there is no escape from practice. It is practised all the way down, contested or otherwise. Five, look for the gaps, the aporias and the tensions between the practices and their realities – for if you go looking for differences you will discover them (Law, 2009, p. 12, underlined text in the original).

I further argue in support of Pinch (2008), who states that ‘[t]he social world is a world built of things, social action is through and through mediated by materiality, and social theory will remain impoverished unless it addresses this materiality’ (Pinch, 2008, p. 479). Materiality plays an important role in institutions and the infrastructures that develop around them. Material objects carry assumptions and expectations about behavioural patterns from situation to situation, from home to workplace and from students to teachers. This is quite evident with the ‘work from home’ arrangement that the Covid-19 crisis has single-handedly orchestrated during lockdown. If we extend ‘affordance’ to documents as objects from which we can derive meaning through their use, we can see how the evidentiary nature of the document arises from the confluence of material form and social interpretation (Allen-Robertson, 2017). In short, the performative materiality of documents should not be analysed solely on the basis of their content, but also by their acts.

Using insights from STS and non-representational approaches, the document analysis put to work here attends to the making of assessment as it becomes assembled materially and semiotically, as part of a particular set of relations (e.g., lesson, module, course, programme). Documents as material objects do not represent; they perform. The emphasis on understanding documents as constitutive, rather than representational, forces us to look at them, to see how they work (Drucker, 2013). Thus, the overarching question or line of inquiry for the paper-work of assessment is: what gets done for learning to occur? The analysis pursues this by ‘seeing’ the work being done by intended learning outcomes and by verbs like ‘describe’, ‘analyse’ and ‘critically discuss’, based on Bloom’s taxonomy, in module guides. It proceeds by making the ‘paper-work’ of documents visible and placing under close scrutiny the ‘common sense’ understanding that has been maintained by and in documents. There is a need to suspend and resist institutional tendencies, temptations or even individual conveniences that treat module guides as transparent, self-evident and fully sensible documents or standard text. They simply are not, as the following sections discuss.

Documents are both producers and products of practice through repetition and coordination. And for this reason, Law (2009) argues, they promote and maintain particular realities and not others. If documents or texts, including other representations or things, do realities in practice, then, as such, they could be done differently or in more than one way. Hence, the ‘paper-work’ of assessment engages in various processes, including selection, juxtaposition, deletion, ranking and framing, all of which create patterns of assessment through repeated practice. For instance, since the rise of outcome-based assessment, Bloom’s taxonomy and verb-driven learning outcomes have become conventions selected and juxtaposed in national degree standards and systems of external examiners.
How pre-determined outcomes in module guides come to matter more than the emergent realities of what is learned and could be assessed should be more critically considered. In the following sections, the article elaborates on Bloom’s taxonomy, intended learning outcomes and the use of ‘critically’ in module guides to restore analytically the ‘collateral realities’ (Law, 2009) of documents, and to look at them rather than through them (Kafka, 2012). These assessment elements are not neutral purveyors of written text. Instead, they are and must be treated as mediators that shape their inscriptions and their relations with the subjects and objects they refer to (Hull, 2012). I provide a practice-based perspective on assessment whereby documents participate in, and to some extent dictate, the learning, which is repetitively or routinely reduced to mechanistic and instrumentalist criteria and categories of higher-order thinking skills.

4. Bloom’s Taxonomy

One of the most important and influential works of more than half a century continues to do at least two things. First, it eliminates the social aspects of learning; and second, it defines learning outcomes as individual goals in behavioural terms. This is none other than the Taxonomy of Educational Objectives, The Classification of Educational Goals, Handbook 1: Cognitive Domain, edited by Benjamin Bloom and published in 1956. It is commonly known as ‘Bloom’s Taxonomy’, a six-tiered approach to the classification of intellectual expectations. It was the collective product of the collaborative effort of thirty-four educators, psychologists and school examiners. It is often overlooked or forgotten that it was part of a three-part system of cognitive, affective and psychomotor domains. The second handbook, on the affective domain, was published in 1964. The committee never did publish a handbook for the psychomotor domain, despite various attempts. Let us be reminded here that Bloom’s taxonomy was, and still is, a guide that focuses solely on the cognitive domain of learning, and assumes that learning can be compartmentalised. As such, Booker (2007) points out, its aim was to provide a generic classification system for test questions to meet broader educational goals and measurements.

The use of Bloom’s taxonomy as a way to view, develop and evaluate learning objectives is well established. For more than half a century, educators have turned to Bloom’s taxonomy to provide the language – more specifically, the appropriate verbs for educational levels, such as ‘identify’ for first-year (freshman) level – for intended learning outcomes that could, in theory, be behaviourally measured. The taxonomy has since been revised (see Anderson and Krathwohl, 2001; Krathwohl, 2002; Marzano, 2000) and alternative taxonomies have been on offer, such as Hauenstein’s (1998) holistic taxonomy, Fink’s (2013) taxonomy of significant learning, and Biggs’ (1996) Structure of Observed Learning Outcome (SOLO) taxonomy. None of these, though adapted to various disciplines and programme or course levels, has overtaken or diminished the demand for Bloom’s taxonomy. It remains the dominant framework for classifying, categorising and defining programme aims and intended learning outcomes appropriate to educational levels. This article does not necessarily suggest that learning objectives in the original work of Bloom and his colleagues are the same as learning outcomes (see Harden, 2002, for a more elaborate discussion).
Suffice it to say that Bloom’s taxonomy has survived various educational shifts, from behaviourism to constructivism and from a focus on learning content to student learning outcomes. Amidst these educational shifts and turns, curriculum developers, advisors and evaluators have continued to use the taxonomy as a method of mapping the progression of student learning within programmes of study. Undeniably, Bloom’s taxonomy has been a key document for exercising transparency in articulating the scope and level of intended learning outcomes beyond subject-matter content items in ‘measurable’ terms. However, the collateral (unintended) reality of this, by the same token, is that it limits knowledge to such items of content within a view that the mind is a ‘mental filing cabinet’ (Bereiter and Scardamalia, 1998; 2005), where knowledge could be stored and retrieved for higher-order thinking skills. Ultimately, it perpetuates and promotes the view that learning is a product. Surely, such a suggestion is objectionable given the sophistication and advancement of educational theories and approaches. Bloom’s taxonomy has insisted that the cognitive domain matters more, and institutions and academics have complied in practice. Furthermore, assessment-related documents such as curriculum guides, codes of practice for quality assurance and programme handbooks – where we find Bloom’s taxonomy at work, in full circulation within educational systems – have been a ‘perfect fit’ for the marketised view of education. Having said this, it needs to be emphasised that learning as product was the dominant mindset long before the rise of the neo-liberal agenda for education (Hager, 2004).

With the help of Bloom’s taxonomy, ‘the learning-as-product view has remained very resilient. It is as though formal education systems have never got beyond a mass production mindset reminiscent of the industrial era’ (Hager, 2004, p. 6). This framing puts the focus of assessment on products of learning. In so doing, the formulation of learning outcomes deflects attention from the process aspects or practices of learning. In encouraging the spread of the taxonomy and its associated verbs, programmes and documents for quality and compliance have uncritically deployed and disseminated an outdated conceptualisation of learning and knowledge.

The dominant learning-as-product view is steadily circulated in assessment-related documents. Learning outcomes are assumed to be stable and fixed over time. This stability enables learning outcomes to be incorporated into curricula and textbooks, to be passed on from teachers to students, their attainment to be measured in essays, presentations and examinations, and to be readily amenable to comparison through moderation, external examination and quality assurance review. Thus, HE institutions depend on documents to ensure that learning outcomes are stable, durable and familiar enough to be widely replicable across programmes and disciplines. This delivers the transparency requirement of standardisation and objective benchmarking of educational attainment. Bloom’s taxonomy has been put to work for far too long and, as such, it has become an institutional norm. Its place in module guides and its work in assessment practices must be reviewed at the very least.

5. Intended Learning Outcomes

Setting learning outcomes is now the prevailing approach to assessment in HE, replacing the identification and development of content (Orr, 2005).
Hussey and Smith (2002; 2003; 2008) have argued that the concept of learning outcomes has become tightly entangled with notions of specificity, transparency and measurability, and that their uses have more to do with administrative and regulatory necessity than with serving the purposes for which they were adopted. In fact, they have become largely irrelevant to classroom activities and practices. The account that the specification of learning outcomes in programme handbooks and module guides is important for ensuring transparency of expectations to students must be examined. There are false assumptions that must be unpicked and exposed in driving programmes through a set of learning outcomes. First, writing learning outcomes down does not make them transparent. Besides, once read, the interpretation is varied and the meaning is not easily shared. Ultimately, they are only transparent to those who create and write them. In fact, Orr (2005) argues that transparency has led to opaque or black boxed [1] practices.

[1] Here, I refer to Bruno Latour’s (2005) concept of the ‘black box’ as those processes that are deemed fixed, stable and persistent without scrutiny, though their workings are not necessarily and explicitly known and understood.

Programme developers and leaders wrestle with documents upon documents replete with demonstrable and behaviourist verbs, conveniently laid out by the same text: ‘On completion of this module a student should be able to’. The following scenario should be quite familiar, especially for colleagues and institutions that have been subjected to the preparation of a programme for review and re-validation:

Those involved in approving or validating new programmes can become embroiled in debates about the precise niceties of the semantics; the focus on such activities being in danger of diverting attention away from the principal purposes of modules or courses. Institutions back themselves into the most remarkable corners of what is and what is not acceptable at which level, such as bans on the use of the verb ‘analyse’ at first year level … and the complete expunging of the verb ‘understand’ from any level (Hussey and Smith, 2003, p. 367).

The verb must describe what students should be able to do. It has to be an observable and assessable function. Non-specific verbs and phrases such as ‘understand’, ‘be familiar with’, ‘appreciate’ and ‘comprehend’ should be avoided. Alternative active verbs must be used, such as ‘compare’, ‘describe’, ‘explain’ and ‘identify’. The sequence identified by the descriptors may well represent a seamless progression in cognitive terms, but it remains, as Hussey and Smith (2002) point out, at odds with the empirical knowledge of academic staff and suggests a uni-directional movement that distorts the real process of knowledge construction and meaning-making. Verbs cannot stand on their own, even when they are written down and assigned to an educational taxonomy. And yet we (and this includes my own practice) concede and use a prescriptive list of descriptors to comply with curriculum development guidelines. I am not arguing that learning outcomes should be abandoned or that we should not have them in module guides. They do matter. However, they have to matter and be done differently. I do agree that learners must be introduced to concepts and ideas progressively towards more complex levels.
However, my argument is that documented learning outcomes could potentially limit the possibilities of assessment practices by devaluing the emergent and dialogic relation between students and their teachers. The verbs used in the 53 module guides that became ‘data’ for the document analysis in this article were made to ‘act’ in ways that are unnatural to what really matters in assessment. The verb ‘analyse’, along with ‘evaluate’ and ‘reflect’, was most frequently used for second-year and third-year level learning outcomes. What made third-year (level 6) ‘analyse’ a distinctly higher-level outcome descriptor than second-year level ‘analyse’ was the fact that the former was prefixed with ‘critically’. In fact, ‘critically’ was used 39 times in the module guides. This deliberate act of articulating learning outcomes at the ‘right’ programme level is further explored in the next section.

6. Show me ‘critically’

Hussey and Smith (2002) have a few objections to learning outcomes. First, they argue that their clarity, explicitness and objectivity are largely spurious or contrived. They give the impression of precision only because we unconsciously interpret them against a prior understanding of what is required and a black boxed construction of what the verbs mean, pretending or wishfully establishing a shared meaning. In brief, they are parasitic upon the very knowledge and understanding that they promise to exhibit. In particular, they rely heavily on Bloom’s behaviourist taxonomy. For instance, the words ‘analyse’ or ‘discuss’ have been preceded by the word ‘critically’ in third-year level learning outcomes. To distinguish a second-year level ‘analyse’ from a third-year level ‘analyse’ by adding ‘critically’ would not achieve a precise interpretation of meaning for students, because a written text is not a meaning carrier. Instead, meaning is constructed by the students and their teachers. Interpretation is relative and must be relevant to the assessment type, lecture content, subject matter or level. In this sense, ‘critically’ only serves as an intended outcome; more than this, it assumes that we already knew (or know) what constitutes critical evaluation as a distinct style or array of contents at third-year level. The word in itself does not tell us this. Learning outcomes remain ambiguous no matter what verbs and descriptors are used. This is further complicated in practice when learning outcomes are also used as assessment criteria. Of course, we know that we have to formulate our learning outcomes based on the subject matter, an understanding of the requirements of the course and educational level, and informed by our experiences of teaching and marking at various levels. These are not easily captured in text, and even if they are, students would not necessarily have the expertise or experience to read the ‘intended meaning or message’; hence, the extent to which the words themselves are able to capture and articulate – in a universally precise way – academic standards and expectations should be recognised as problematic. The mere fact of writing and documenting learning outcomes does not make them unambiguous and transparent at all. And, where the word ‘critically’ is concerned, this (in itself) does not clarify the difference between ‘second-year level analyse’ and ‘third-year level analyse’.
In the end, criticality is formulaically bureaucratised and reduced to a hollow cipher: another collateral consequence of power-sanctioned documentation and paper-work, it is no more easily understood, nor is its implied meaning conjured and shared.

7. Documents at work

Documents are powerful means for structuring and disseminating information, but also for instructing and maintaining norms. Yet the ‘social life’ of (or paper-work associated with) the document is generally neglected in assessment literature. If the notion of paper-work remains unexamined, there is a real danger that inane procedural concerns ‘trump’ critical understanding and pedagogic rigour. Furthermore, an uncritical acceptance of increasingly prescriptive and standardised outcomes, along with elevated protocols for control and compliance as part of managerial functions, serves to create and maintain instrumental attitudes to assessment with a false assurance of quality. Seeing module guides and assessment documents as vehicles that enshrine and maintain mechanisms of control, as artefacts that produce a black box mimesis for transparency and accountability, and not just as innocuous conduits that carry and deliver information, makes it easier to understand the utility and persistence of old ideologies and learning theories disguised within new educational priorities and agendas.

Documents are not innocuous. The idea of a document as a neutral carrier of information is misleading. Undoubtedly, documents ‘carry’ and ‘transmit’ information. But simultaneously, they hide or silence other information. We need to see the way documents have served not simply to write, but also to underwrite, social aspects of learning, as clearly expressed in Bloom’s behaviourist taxonomy; not simply to comply, but also to coordinate social values and experiences. By conceptualising the module guide, with its learning outcomes and precisely worded assessment regime, as social practice – as paper-work with material force and purpose – this article has attempted to bring into view a broader framework for documents-at-work and to emphasise how what is written down enacts and produces a collateral deficit.

Documents become sources of standards and are, to some extent, circulated as standards. As such, they become performance monitors that carry the weight of invisible and yet dominant positivist values. They control and regulate the behaviour of academics and students. In practice, through the act of paper-work, assessment activities serve and ensure institutional audit, national standards and external examining benchmarks. Increasingly, documents have also served as substitutes for communities of practice, as academic staff members follow or read the same documents or use the same report templates and guidelines. Within a techno-rationalist agenda, a bureaucratic document culture is established and promoted, and alongside it, a culture of compliance persists. The drive for transparency and accountability has disintegrated communication and community, where more often than not documents speak on behalf of educators when it comes to quality and standards. Hence, the resulting community is an ‘imagined’ one, and the central way that it is imagined is through the documents its members share. Paper-work coordinates assessment activities. The mirror of accountability and quality standards is held up to academic staff in moderation reports, external examiners’ comments, etc.
Consequently, the paper-work creates a sense of commonality that is remarkably resilient, even though the documents themselves become outdated and irrelevant. In fact, academics fully cooperate with documents. Academic practices are negotiated with them. Assessment must be completed in consultation with them. Inadvertently, they maintain a sense of community – that we are all in it together. They have become our closest ‘colleagues’. This is not to suggest that communities of practice no longer exist or could not exist. Instead, this account merely recognises that the efficacy of bureaucratic text lies in its capacity to promote tick-box exercises, particularly in a climate of increasing teaching hours per academic staff member and larger student cohorts. Seen this way, Brown and Duguid (1996) argue, shared documents are in many ways the grounds for contention and opacity and the pre-text for agreement or compliance.

Without context, documents (module guides) and words like ‘critically’ are ‘standards-in-use’ that are easily shared without necessarily co-constructing meaning or interpretation. In different practices, there is no ‘right’ interpretation of a document – of a learning outcome or of ‘critically’. The meaning of ‘critically’ is not simply ‘in’ the written word or document that contains it. Rather, it is constructed by the ‘culture of audit and compliance’ around assessment and the other documentary realities under consideration. From this point of view, the fixed, immutable document plays a valuable role. It demands institutional transparency and compliance. Being able to talk about the ‘same’ set of documents is extremely useful. Over time, they become the norms and conventions that form and inform habits of assessment. From the representation of information to performativity, we are compelled to submit and accept that we somehow have access to transparency, accountability and quality through the production and use of the same documents. In fact, we submit to the use of the same ‘verbs’ for learning outcomes to establish the ‘right’ thinking skills for students. Surely, we can see how documents do things, just as words do things, as John Austin suggests in his 1962 book How to Do Things with Words; and yet we would not claim the same about how documents shape and dictate what really matters in assessment and its practices. Documents are so deeply entrenched within academic culture that they have taken on a common-sense appeal. They seem ‘natural’ and yet there is nothing natural about Bloom’s taxonomy and its levels of higher thinking skills.

At their very best, documents, just like the apparatuses Barad (2003) speaks about, are not just inscription or recording artefacts that could be set and circulated before, during and after assessment. They are not neutral probes or passive arrangements that are merely there to capture assessment practices. In fact, they are a key and core part of assessment practice; they enact boundaries carefully executed in assessment criteria and rubrics. To change assessment practices, we have to intervene and interrupt the doings of documents and rework what really matters to assessment and its practices.

So what can we do with documents? I do not propose that Bloom’s taxonomy should be replaced, but that its ideologies must not be overlooked. We must consider and make explicit its limitations and its contradictions with current educational theories and approaches.
We must work with students to participate in and co-create the meaning of learning outcomes and other documents of their programmes. Academic staff and students must engage in producing and performing assessment with other documents that are not necessarily prescribed by institutional and quality assurance protocols. It must be the case that we can create other documentary realities.

8. Conclusion

So what do we learn if we attend to documents? What happens if we see them and the work they are doing, and manage to treat them as part of – and an expression of – practice, rather than as a more or less transparent representation of a pre-given or intended reality of learning outcomes and higher-order thinking skills? The answer comes in three parts: 1) we overcome the obviousness of representations and focus on what gets done with and by documents; 2) we recognise that when standards are done, they are startlingly varied or multiple in their effects; and 3) we are able to explore alternatives and include other kinds of documents and ways of documenting that could produce and share learning outcomes and assessment criteria that value open and emergent processes.

As we have seen, assessment is in part documents-at-work and, in the other part, habits or routinised processes and procedures. Particular collateral realities are enacted in the documentary accounts – realities of transparency in written learning outcomes, academic progression through verb assignment using Bloom’s taxonomy, and objectivity through moderation reports and external examiners’ reviews. Documents have performative effects, and such realities are ‘done’ and ‘accomplished’ through assessment practices. The quality assurance of assessment practices outlines behaviours and social processes conducive to matters of certification as an institutional activity. Activities, behaviours and social rules become institutionalised through documents and through common behavioural routines that lead to shared, taken-for-granted norms, conventions and habits.

‘Written’ assessment practices must describe assessment in terms of the manner and the extent to which it makes sense to learners. Otherwise, documents become a mechanism to shut down or mute staff and student voice and remove the critical dimensions of student-centredness from assessment practices. Documents could easily become rules to be followed and nothing else. Furthermore, in a managerialist university, they could easily become ends in themselves. It is therefore highly recommended that alternative documents be produced and used. Understanding how documents are involved in enacting and producing assessment practices invites and encourages us to change documents or how we use them. This act could have wide-reaching social and material effects. For instance, discussion of past papers and exemplars and the setting of assessment criteria with students and peers could create the opportunity to develop a common ground for establishing assessment standards.

Please understand that this is not a complaint or criticism about institutional guidelines or indeed about the Quality Assurance Agency and its standards. It is just an attempt to attend to what documents actually ‘say’ and ‘do’. It is an observation about the nature of practice, specifically the practice of assessment, which is not something I and my fellow academics simply do. Our doings are not independent of the paper-work of assessment documents.
This is not an evaluation of assessment or how it is done. Instead, it is a recognition that it could be done differently. What is deemed ‘common sense’ or ‘common practice’ is always more or less incoherent. Documents are practices through and through. Their representation, textual or otherwise, is actually not transparent at all. Documents themselves can either liberate or oppress us. We must learn to perceive and do things differently with documents. There are many types of politics at play at both macro and micro levels. The politics of those realities that are no longer questioned – those of documents and taxonomies like Bloom’s – must be made visible, as briefly shown here. The point is to shift our understanding of the sources of the relative immutability and obduracy of conventions, and of the apparent transparency of standards through documents, towards ‘choreographies of practice’ (Law, 2009).

Transparency through paper-work is intended to ‘level the ground’ and reduce, if not eliminate, the intractable practices of assessment. However, the collateral reality of documents has proven to simply displace our concerns to an impersonal and inflexible medium (Kafka, 2012). In fact, it authorises blanket and distant surveillance of academic work without ever ‘filing’ or ‘documenting’ the tacit marking standards and criteria that remain unwritten but regularly at work. Ultimately, paper-work is also part of assessment work. It happens to be a form of work that we find ourselves doing a lot of the time. And like many kinds of work, it just has to be done. The invitation or reminder of this article is to do it differently and to recognise, perhaps, that some aspects of assessment must remain unwritten, and that the written (i.e., intended learning outcomes, ‘critically’) could be communicated and expressed in other forms of communicative practices. Documents, as currently constructed and issued, enforce a certain type of paper-work. My work is not complete, valid and compliant without a permanent document to refer to. The paper-work of this article addresses the particular social and material practice of an educator or a classroom teacher who spends so much time doing fair assessments, and pushes him or her just slightly over the edge to follow the flow of power in the opposite direction of conventional performativity and ‘undo’ documents.

9. Disclosure statement

This research project was funded by the Teaching and Learning Academy, Liverpool John Moores University.

10. Acknowledgements

The author would like to thank Adele Lunn for her contribution to this research project. I am also grateful for the comments received from reviewers to ‘uplift’ the intent of the article and for the painstaking editorial input provided by Dr Craig Hammond.

11. Open Access Policy

This journal provides immediate open access to its content with no submission or publication fees. This journal article is published under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 licence (https://creativecommons.org/licenses/by-nc-nd/4.0/). This licence allows others to read, download, copy, distribute, print, search, or link to this article (and other works in this journal), and/or to use them for any other lawful purpose in accordance with the licence. PRISM is also indexed in the world’s largest open-access database, DOAJ (the Directory of Open Access Journals). DOAJ is a community-curated online directory that indexes and provides access to high-quality, open access, peer-reviewed journals.

12. To cite this article:

Enriquez, J. (2020).
Paper-work: what documents have to say about assessment practices. PRISM, 3(1), 99-112. https://doi.org/10.24377/prism.ljmu.0301206

13. References

Allen-Robertson, J. (2017). Critically assessing digital documents: materiality and the interpretative role of software. Information, Communication & Society, 21(11), 1-15. https://doi.org/10.1080/1369118X.2017.1351575

Anderson, L.W., & Krathwohl, D.R. (Eds.) (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. Boston, MA: Pearson.

Atkinson, P.A., & Coffey, A.J. (1997). Analysing documentary realities. In D. Silverman (Ed.), Qualitative research: Theory, method, and practice (pp. 45-62). Thousand Oaks: Sage.

Ball, S.J. (2003). The teacher's soul and the terrors of performativity. Journal of Education Policy, 18(2), 215-228. https://doi.org/10.1080/0268093022000043065

Barad, K. (2003). Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter. Signs: Journal of Women in Culture and Society, 28(3), 801-831. https://doi.org/10.1086/345321

Bereiter, C., & Scardamalia, M. (1998). Beyond Bloom’s taxonomy: Rethinking knowledge for the knowledge age. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International Handbook of Educational Change (pp. 675-692). Dordrecht: Kluwer Academic Publishers.

Bereiter, C., & Scardamalia, M. (2005). Beyond Bloom’s Taxonomy: Rethinking Knowledge for the Knowledge Age. In M. Fullan (Ed.), Fundamental Change: International Handbook of Educational Change (pp. 5-22). Dordrecht: Springer.

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32, 1-18. https://doi.org/10.1007/BF00138871

Bloxham, S. (2009). Marking and moderation in the UK: false assumptions and wasted resources. Assessment & Evaluation in Higher Education, 34(2), 209-220. https://doi.org/10.1080/02602930801955978

Bloxham, S. (2012). ‘You can see the quality in front of your eyes’: grounding academic standards between rationality and interpretation. Quality in Higher Education, 18(2), 185-204. https://doi.org/10.1080/13538322.2012.711071

Bloxham, S., & Boyd, P. (2012). Accountability in grading student work: Securing academic standards in a twenty-first century quality assurance context. British Educational Research Journal, 38(4), 615-634. https://doi.org/10.1080/01411926.2011.569007

Bloxham, S., Boyd, P., & Orr, S. (2011). Mark my words: the role of assessment criteria in UK higher education grading practices. Studies in Higher Education, 36(6), 655-670. https://doi.org/10.1080/03075071003777716

Booker, M.J. (2007).
A roof without walls: Benjamin Bloom’s taxonomy and the misdirection of American education. Academic Questions, 20(4), 347-355. https://doi.org/10.1007/s12129-007-9031-9

Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G., & Molloy, E. (2018). Reframing assessment research: through a practice perspective. Studies in Higher Education, 43(7), 1107-1118. https://doi.org/10.1080/03075079.2016.1202913

Broadfoot, P., & Black, P. (2004). Redefining assessment? The first ten years of Assessment in Education. Assessment in Education, 11(1), 7-27. https://doi.org/10.1080/0969594042000208976

Brown, J.S., & Duguid, P. (1996). The Social Life of Documents (introduction by Esther Dyson). First Monday, 1(1) [online]. http://firstmonday.org/ojs/index.php/fm/article/viewArticle/466/387. Accessed 07 July 2016.

Coffey, A. (2014). Analysing documents. In U. Flick (Ed.), Qualitative data analysis (pp. 367-379). London: SAGE.

Drucker, J. (2013). Performative Materiality and Theoretical Approaches to Interface. Digital Humanities Quarterly, 7(1) [online]. http://digitalhumanities.org/dhq/vol/7/1/000143/000143.html

Ecclestone, K. (1999a). Care or Control?: Defining Learners' Needs for Lifelong Learning. British Journal of Educational Studies, 47(4), 332-347. https://doi.org/10.1111/1467-8527.00123

Ecclestone, K. (1999b). Empowering or Ensnaring?: The Implications of Outcome-based Assessment in Higher Education. Higher Education Quarterly, 53(1), 29-49. https://doi.org/10.1111/1468-2273.00111

Filer, A. (Ed.) (2000). Assessment: Social practice and social product. London: RoutledgeFalmer.

Fink, L.D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.

Freeman, R., & Maybin, J. (2011). Documents, practices and policy. Evidence & Policy, 7(2), 155-170. https://doi.org/10.1332/174426411X579207

Hager, P. (2004). Conceptions of learning and understanding learning at work. Studies in Continuing Education, 26(1), 3-17. https://doi.org/10.1080/158037042000199434

Harden, R.M. (2002). Learning outcomes and instructional objectives: is there a difference? Medical Teacher, 24(2), 151-155. https://doi.org/10.1080/0142159022020687

Hauenstein, A.D. (1998). A conceptual framework for educational objectives: A holistic approach to traditional taxonomies. Lanham, MD: University Press of America.

Hull, M.S. (2012). Documents and Bureaucracy. Annual Review of Anthropology, 41, 251-267. http://dx.doi.org/10.1146/annurev.anthro.012809.104953

Hussey, T., & Smith, P. (2002). The trouble with learning outcomes. Active Learning in Higher Education, 3(3), 220-233. https://doi.org/10.1177/1469787402003003003

Hussey, T., & Smith, P.
(2003). The uses of learning outcomes. Teaching in Higher Education, 8(3), 357-368. https://doi.org/10.1080/13562510309399

Hussey, T., & Smith, P. (2008). Learning outcomes: a conceptual analysis. Teaching in Higher Education, 13(1), 107-115. https://doi.org/10.1080/13562510701794159

Kafka, B. (2012). The Demon of Writing: powers and failures of paperwork. New York, NY: Zone Books.

Krathwohl, D.R. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4), 212-218. https://doi.org/10.1207/s15430421tip4104_2

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.

Law, J. (2009). Collateral Realities. Available online: http://heterogeneities.net/publications/Law2009CollateralRealities.pdf. Accessed 07 July 2016.

Marzano, R.J. (2000). Designing a new taxonomy of educational objectives. Thousand Oaks, CA: Corwin Press.

Orr, S. (2005). Transparent opacity: assessment in the inclusive academy. In C. Rust (Ed.), Improving student learning: diversity and inclusivity. Proceedings of the 2004 12th International Symposium (pp. 175-187). Oxford: Oxford Centre for Staff and Learning Development.

Pinch, T. (2008). Technology and institutions: living in a material world. Theory and Society, 37, 461-483. https://doi.org/10.1007/s11186-008-9069-x

Prior, L. (2008). Repositioning documents in social research. Sociology, 42(5), 821-836. https://doi.org/10.1177%2F0038038508094564

Thrift, N. (2008). Non-representational theory: Space, politics, affect. London: Routledge.