Open access
Research Article
9 March 2020

Developing Miller’s Pyramid to Support Students’ Assessment Literacy

Publication: Journal of Veterinary Medical Education
Volume 48, Number 2

Abstract

Assessment literacy is increasingly recognized as an important concept to consider when developing assessment strategies for courses and programs. Assessment literacy approaches support students in understanding assessment expectations and help them both to make sense of and to optimize their performance in assessment. In this teaching tip, a model for assessment literacy that builds on the well-known Miller’s Pyramid model for assessment in clinical disciplines is proposed and contextualized. The model progresses thinking from the assessment methods themselves to consideration of the activities that need to be built into curricula to ensure that assessment literacy is addressed at each level of the pyramid. The teaching tip provides specific examples at each of these levels. Finally, the relevance of this work to overall curriculum design is emphasized.

Introduction

Assessment literacy is a term that encompasses the range of knowledge, skills and attributes necessary to understand both the purpose and process of assessment.1–3 Price et al.1 describe a series of six competences that underpin this notion of assessment literacy, and these can be broadly grouped into two categories: knowledge and understanding of assessment principles, and students’ ability to understand assessment standards (including developing the ability to act as assessors themselves).3
This teaching tip describes an overall conceptual framework for program-wide consideration of assessment literacy. We base this on the well-known model of Miller’s Pyramid, which presents a framework for assessing levels of clinical competence. In Miller’s Pyramid, the cognitive levels of knowledge and application of knowledge (knows and knows how, respectively) underpin practical application of that knowledge (shows how) and the behavioral ability to apply what has been learned in a practice setting (does).4 We provide specific examples of assessment literacy approaches at each level and propose the assessment literacy pyramid (ALP) model as a useful framework for curriculum developers.

The Approach: Adapting Miller’s Pyramid to an Assessment Literacy Pyramid (ALP)

This teaching tip aims to demonstrate examples of curriculum interventions that support the development of assessment literacy in veterinary students at all levels of Miller’s Pyramid. The ALP approach requires that students be able to assess their own performance, as well as that of their peers, at all stages of the pyramid. As students progress through the curriculum, their self-assessments should develop and align more closely with the assessments of experts (e.g., their teachers). This is consistent with the social constructivist assessment process model described elsewhere, which holds that students need to actively engage in assessment and feedback5 in order to fully understand the process. Aspects of this can be fulfilled through reciprocal peer feedback, which allows students both to receive feedback and to apply their own understanding of assessment criteria, with varying levels of guidance, as they move toward independent practice.
This teaching tip includes evidence gathered in the course of evaluating the approach, including data from surveys, interviews and focus groups. Data presented here pertains to projects that have ethical approval from the Royal (Dick) School of Veterinary Studies Human Ethics Review Committee, reference numbers HERC_86_17 (primarily level 3 data), HERC_134_17 (primarily level 1 and 2 data), and HERC_69_17 (primarily level 4 data).

Assessment Literacy Levels 1 and 2

At the knows and knows how levels (ALP levels 1 and 2), an excellent example of an assessment literacy activity arises in the context of multiple-choice questions (MCQs), which are used ubiquitously to assess knowledge (and, if well written, application of knowledge). In our ALP model (Figure 1), we have merged the knows and knows how levels in recognition of the fact that well-written MCQs and free-text questions can be used to assess at the knows how level. We highlight, however, the faculty development implications of ensuring that this is the case in practice. Furthermore, there may be appropriate instances where both question types are deliberately designed to map to ALP level 1 (e.g., to assess core facts in relation to anatomical terms or legislation).
For MCQs, the tool PeerWise has been demonstrated across subject areas to have great utility in terms of collaborative learning and helping students understand the challenges and nature of MCQs from both their own and an examiner’s perspective.6–10 The PeerWise system is an online tool that is freely available and was developed in the Department of Computer Science (University of Auckland).a In addition to the challenge of authoring questions, the system allows students to answer peers’ questions, rate them according to both difficulty and quality, and provide qualitative comments in a discussion board format.11,12 Thus, students are engaged in both question authoring and answering and see the process from both the examiner’s and the examinee’s perspectives.
An extension of this type of activity, which further helps students understand the process and develop their assessment literacy, is to engage them in standard setting; that is, assessing the difficulty of individual questions in relation to setting a standard that separates competent students from those who do not perform well enough. We engaged students in a standard-setting workshop using the Angoff method,13 and at the end of the exercise, students were asked their views on the standard-setting process via a questionnaire containing Likert-scale and open-ended items. Complete survey data were available from 124 students (response rate 87%). Eighty-six percent of participants agreed or strongly agreed with the statement, “Participating in the standard setting exercise has helped me understand the process of assessment better.”
Figure 1: The ALP, showing the 4 levels aligned to Miller’s Pyramid4 with exemplar assessment literacy activities presented at relevant levels within the pyramid
ALP = Assessment Literacy Pyramid
A particular theme related to assessment literacy emerged from the open-ended free-text data, as illustrated by the following responses:
I had no idea of what it really was. So, this gave us a good idea of how the process works. I don’t feel like I am going into an exam blind in terms of how it may be assessed.
. . . greatly changed my opinion on the importance/fairness/value of multiple-choice testing in comparison to other methods.
Interesting! Didn’t realize as much work went into checking the questions were at the right level. Makes me feel more confident about the process in general. Good to understand how it all works.
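For readers less familiar with the arithmetic behind an Angoff exercise, the following minimal sketch (in Python, using invented judge estimates rather than data from our workshop) illustrates how item-level judgments are typically aggregated into a pass mark: each judge estimates the probability that a borderline (just-competent) candidate would answer each item correctly, and the cut score is the mean of those estimates across items and judges.

```python
# Minimal sketch of Angoff-style standard setting (hypothetical data).
# Each judge estimates, for every MCQ item, the probability that a
# "borderline" candidate would answer it correctly; the suggested pass
# mark is the average of these estimates across items and judges.

judgments = {
    # item_id: [judge 1 estimate, judge 2 estimate, judge 3 estimate]
    "Q1": [0.80, 0.70, 0.75],
    "Q2": [0.55, 0.60, 0.50],
    "Q3": [0.90, 0.85, 0.95],
    "Q4": [0.40, 0.45, 0.50],
}

# Expected performance of a borderline candidate on each item.
item_means = {item: sum(estimates) / len(estimates)
              for item, estimates in judgments.items()}

# Cut score expressed as a proportion of the marks available.
cut_score = sum(item_means.values()) / len(item_means)

for item, mean in item_means.items():
    print(f"{item}: borderline probability {mean:.2f}")
print(f"Suggested pass mark: {cut_score:.1%}")
```

The simple mean shown here is the classic Angoff calculation; in practice, discussion and a second round of judgments often precede the final cut score.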
In the context of free-text assessment questions, it is again valuable to construct activities that allow students to act as examiners themselves, for example by reviewing previous students’ answers (or answers generated by faculty to reflect common misconceptions). This type of activity is easy to incorporate, and research shows students find it engaging, enjoyable, and helpful in developing their own understanding of what makes for good-quality work.3 A further powerful method that we have used to develop assessment literacy skills in the context of free-text answers is adaptive comparative judgment.14 This approach asks students to compare pairs of answers written by other students and to consider which piece of work is better and why, engaging them in a deeper consideration of what makes for good-quality work; it can also expose misconceptions held by their peers.
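To illustrate the principle behind comparative judgment, rather than the specific ACJ engine we used, the sketch below shows how a set of pairwise “which answer is better?” decisions can be aggregated into a relative quality ranking using a simple Bradley-Terry model; the scripts and judgments are invented for illustration.

```python
# Illustrative sketch: turning pairwise comparative judgments into a
# ranking with a Bradley-Terry model (hypothetical scripts and decisions).
from collections import defaultdict

# Each tuple is (winner, loser): the judge decided the first script was better.
comparisons = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("A", "D"), ("B", "D"),
    ("B", "A"), ("A", "C"), ("D", "C"),
]

scripts = sorted({s for pair in comparisons for s in pair})
wins = defaultdict(int)    # total wins per script
n_pair = defaultdict(int)  # number of comparisons per unordered pair
for winner, loser in comparisons:
    wins[winner] += 1
    n_pair[frozenset((winner, loser))] += 1

# Fit Bradley-Terry strengths with the classic iterative (Zermelo/MM) update:
# p_i <- wins_i / sum_j [ n_ij / (p_i + p_j) ], then renormalize.
strength = {s: 1.0 for s in scripts}
for _ in range(200):
    updated = {}
    for i in scripts:
        denom = sum(
            n_pair[frozenset((i, j))] / (strength[i] + strength[j])
            for j in scripts
            if j != i and n_pair[frozenset((i, j))] > 0
        )
        updated[i] = wins[i] / denom if denom > 0 else strength[i]
    total = sum(updated.values())
    strength = {s: v / total for s, v in updated.items()}

# Higher strength means the script was more consistently judged the better one.
for s in sorted(scripts, key=strength.get, reverse=True):
    print(f"script {s}: relative quality {strength[s]:.3f}")
```

Repeated over many pairs and judges, this kind of aggregation yields a rank order of answers that can then anchor discussion about what distinguishes stronger work from weaker work.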

Assessment Literacy Level 3

The Objective Structured Clinical Examination (OSCE) is the classic assessment in veterinary (and medical) education used to assess clinical skills in a simulated environment. The OSCE was first described by Harden in 1979,15 and much research has been done on the method as it has evolved, although comparatively little on the use of students as assessors. A recent systematic review in the context of the OSCE indicated that an approach involving peer assessment has the potential to promote learning for both the assessors and the assessed, although students tended to award higher overall grades compared with faculty.16
An approach to developing assessment literacy at this level is to engage students as assessors in formative OSCEs (FOSCEs). In this example, students act as one another’s examiners. Students are given a short briefing on the assessment process before practicing assessment as a class on a non-clinical example (e.g., lacing a shoe). Students are then partnered and act as both assessor and candidate in two OSCE stations under mock exam conditions, giving feedback to their peer during the session. The aim of the FOSCEs is to build assessment literacy for the OSCEs, encouraging students to take on the role of the “assessing expert” and to gain a greater understanding of how a marking scheme facilitates the feedback process. In addition, the FOSCE serves as a low-stakes opportunity for students to experience the OSCE examination. As part of a post-intervention evaluation, students were asked, “Did attending the FOSCE class change your concerns over the forthcoming OSCE examinations?” Fifty-eight percent indicated it had reduced their concerns, 32% that it had increased their concerns, and 10% reported no change. Open-ended responses in the survey highlighted a greater understanding of the assessment criteria and marking process; for example, two respondents reported that:
I feel much better about OSCE’s now that I’ve done the FOSCE class and have practiced my skills in a few sessions. I was really worried about missing details, but I realized that half of the criteria is from “putting on gloves” or selecting the “appropriate needle.”
I’m still going through every available OSCE marking sheet and practicing.
In a focus group held after the formative OSCE, students discussed how they felt about acting as the examiner and reflected on the difficulty of the examiner’s role. They also described practical strategies for using one another as peer study aids. Two students commented that:
. . . it was really useful to see, like, from the other side. I feel like personally I was kind of willing for the other person to pass and see if they didn’t do anything where they were, like, oh, like I have to say no but you really want to say yes.
I have a friend who made a timetable of when we have free and when the clinical skills lab is free . . . and she’s going message me when she’s going go in and we’re going go in and kind of do our individual revision and then examine each other.
After the formative OSCE session, students were given the opportunity to repeatedly practice OSCE stations with peers in a self-led environment. In this context, students gain assessment literacy experience by exploring the more challenging aspects of being an assessor. Assessing an OSCE scenario requires an understanding of the task, appropriate attention to the examinee’s actions, and familiarity with the examination criteria. While students may tend to award higher grades when examining their peers (as has been shown in some studies16,17), this experience is particularly useful in helping students understand why examiners require training, and in giving them a truncated version of that training themselves. Using formative OSCEs allows students to place themselves in the “expert” assessor role in a low-stakes setting, exploring their own role in assessment and gaining a more holistic understanding of the whole assessment process.
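To make the mechanics of checklist-based peer marking concrete, the following sketch uses hypothetical station items and marks (not taken from our FOSCE stations) to show how a peer’s and an examiner’s checklist scores might be computed and compared, illustrating the kind of leniency gap reported in the literature.16,17

```python
# Hypothetical OSCE checklist mark sheet; item names and marks are invented
# for illustration and do not come from our FOSCE stations.
checklist = [
    "Selects appropriate needle and syringe",
    "Puts on gloves",
    "Restrains patient safely",
    "Performs venepuncture at correct site",
    "Labels sample correctly",
]

# 1 = criterion achieved, 0 = not achieved, as ticked on the mark sheet.
peer_marks = [1, 1, 1, 1, 1]
examiner_marks = [1, 1, 0, 1, 0]

def percentage(marks):
    return 100 * sum(marks) / len(marks)

peer_score = percentage(peer_marks)
examiner_score = percentage(examiner_marks)
print(f"Peer score:     {peer_score:.0f}%")
print(f"Examiner score: {examiner_score:.0f}%")
print(f"Peer leniency:  {peer_score - examiner_score:+.0f} percentage points")

# Item-level disagreements are useful prompts for the post-station debrief.
for item, p, e in zip(checklist, peer_marks, examiner_marks):
    if p != e:
        print(f"Disagreement on: {item}")
```

Comparing marks item by item, rather than only as an overall score, can help surface the specific criteria a peer has interpreted more generously.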

Assessment Literacy Level 4

At level 4, the focus is on students as peer assessors in the context of workplace-based assessment. The stakes become higher because this “doing” of veterinary work is as close to the work of a practicing vet as a veterinary school can get. In our context, much of the feedback students receive during their clinical rotations is designed to be formative in nature and is given either verbally as part of a dialogue or as typed comments in an online system. This is similar to how feedback would be delivered to independent veterinary practitioners by peers and clients. Truly assessment-literate veterinary students will be able to deliver appropriate feedback, identify useful feedback from a range of sources, and recognize its value even though it may not come with an associated “grade.” At level 4, students have their last opportunity to align their self- and peer-assessments with the assessments of experts. Two small-scale studies investigating student perceptions of, and attitudes to, the giving and receiving of feedback in the final year were carried out. In the first, students reported finding it easier to receive than to give feedback; they also tended to give peer feedback that focused more on personal qualities (and was highly supportive) than on specific tasks or procedures.18 Overall, students reported that they found the intervention useful; two students reported that:
It was very helpful knowing what my peers think about my clinical, professional and teamwork skills, as they are the ones that I work with the most and that get to really see how I perform in clinics, on a more personal level.
I think that the peer feedback worked well with [our] group given the dynamic that already existed between the group members.
This latter comment highlights the need for significant support and structure around such interventions; the implication is that it worked well because of a pre-existing positive group dynamic, which may not always be the case. In a second study, embedded in the context of a final-year orthopedics rotation, students participated in a one-to-one feedback session with a clinician in which they were encouraged to reflect on the feedback that they had received from both their peers and clinicians during the rotation. In post-rotation interviews, students discussed how they made use of this feedback; one student commented:
Gives you a real true reflection of how you act in a professional scenario, so I’ll reflect, I’ll consider what they said. If they said I have good communication skills I’ll think about what I’ve done that might have been different, or I’ll discuss why I should maybe change my history styles, like I’ve done that on clinics . . . and I’ve tried it and my student peers have said “oh that was better than you normally did or that wasn’t as good” to try and develop yourself. So, I personally reflect on it quite a lot.
As with all the other assessment literacy interventions, it is important to support students in their role as assessors and feedback providers. This becomes increasingly challenging at the higher levels of our pyramid, especially given the (often) close social connections that exist within the student community. To mitigate these challenges with peer evaluation, we encourage students to observe and be observed, to act as assessor and assessed, and to give feedback to and receive feedback from several of their peers.
In our proposed model (Figure 1), peer assessment and feedback therefore form the common “spine” that runs through the pyramid, with additional enhancements afforded by working with students as partners, either in developing questions themselves or in discussing the appropriate marking rubrics associated with particular tasks. The benefits of this peer feedback approach have been highlighted elsewhere: students learn not only from the peer feedback itself but (probably more importantly) at the metacognitive level, by having to justify their comments and by reflecting on their assessment both of the student they have assessed and of their own abilities.19
At all levels, whether in the context of written work or performance in practice, a focus on assessment literacy by necessity involves breaking down traditional academic/student boundaries as the student takes on the role of the assessor. This type of activity is consistent with the “students as partners” agenda, which is currently topical in the United Kingdom (UK) higher education sector.20 An activity that shows promise in this area is faculty–student co-creation of both assessments and the criteria against which they are judged.21 Engaging students in reflection about appropriate assessment criteria provides opportunities for rich conversations about assessment, which in turn are likely to help students gain a deeper understanding of expectations in a given assessment context.
Table 1 further defines the role of the student in a range of exemplar assessment literacy activities across the levels just discussed, clarifying whether the student is simply acting as an assessor and gaining skills in judgment, or whether they are also contributing to the generation of new assessments or the setting of standards for themselves and their peers.
Table 1: Overview of exemplar approaches to assessment literacy aligned to Miller’s Pyramid levels and further categorized according to nature of activity
Miller’s Pyramid level | ALP level | Exemplar assessment literacy activities
Knows, Knows How | 1 and 2 | PeerWise question authoring and commenting
Knows, Knows How | 1 and 2 | Engaging students in standard setting of MCQs
Knows, Knows How | 1 and 2 | Generating free-text questions and outline answers
Knows, Knows How | 1 and 2 | Marking and discussion of authentic answers
Shows | 3 | Peer and self-assessment of OSCE performance
Does | 4 | Peer and self-assessment of rotation performance
Does | 4 | Engagement of students in developing assessment rubrics
ALP = Assessment Literacy Pyramid

Discussion

This teaching tip proposes an overall framework against which to consider and design assessment literacy activities across the veterinary curriculum. In many of the activities, feedback is interlinked with the assessment activity; hence, a further benefit of the approach presented in this teaching tip is that it also encourages students to become more self-regulated learners. This self-regulation is underpinned by becoming not only assessment-literate but also feedback-literate, moving from being passive receivers to highly engaged participants in the feedback process.22 Nash and Winstone23 highlight the need for a greater sense of shared responsibility between faculty and students in relation to feedback, which is a well-recognized challenging area in higher education.5,22,24 Additionally, students are encouraged to view feedback as useful even without an attached grade. This type of assessment (and feedback) literacy approach helps break down barriers between faculty and students and demystify the assessment process, while at the same time empowering students in their role as assessors and feedback givers to themselves and their peers. It is interesting that although student feedback in relation to these interventions has generally been positive, our ALP level 3 study showed that for 32% of students the intervention actually increased their concerns about the forthcoming assessment. However, it could be argued that this earlier recognition of the rigors and nature of the assessment is beneficial, as the intervention is timed to allow students to work on any areas they identify as problematic in the formative exercise.
We would also emphasize the need to build into each of these types of activity appropriate support and discussion time, to ensure that student and faculty understanding of standards is as well aligned as it can be. For example, in the context of peer marking of free-text short-answer questions, our earlier work clearly demonstrated marked differences in students’ abilities as assessors3 at the start of a peer assessment exercise. Time is therefore needed within the curriculum to fully discuss and support students as they move from the perhaps more “comfortable” space of the one being assessed to the more challenging position of the assessor. As mentioned earlier, this challenge may relate not only to the academic aspects of the work being assessed but, especially at ALP levels 3 and 4 where practical skills and performance are being assessed, may also become intertwined with aspects of social relationships and peer group dynamics.
Finally, although this paper is focused on approaches to assessment and assessment literacy, it is increasingly recognized that assessment should not be considered as secondary to curriculum design but is, in itself, a curriculum design decision. For example, Medland calls for “assessment to be a central aspect of curriculum design and development that is integral to teaching and learning, rather than an afterthought.”25(p.91) Boud and Molloy also highlight the importance of curriculum design in relation to supporting students’ assessment literacy.26

Future Work

In supporting students to develop their assessment literacy skills, there are implications also for faculty development. It has been shown that, even in the context of the UK external examining system, there are questions about the assessment literacy of external examiners,27 and there is no reason to expect that this variability would be any less evident among the faculty of an individual university.

Acknowledgments

The authors are grateful to the Principal’s Teaching Award Scheme (University of Edinburgh) for funding parts of this work.

Footnote

References

1 Price M, Rust C, O'Donovan B, Handley K, Bryant R. Assessment literacy: the foundation for improving student learning. Oxford, UK: Oxford Centre for Staff and Learning Development; 2012.
2 Smith CD, Worsfold K, Davies L, Fisher R, McPhail R. Assessment literacy and student learning: the case for explicitly developing students’ assessment literacy. Assess Eval High Educ. 2013;38(1):44–60. https://doi.org/10.1080/02602938.2011.598636.
3 Rhind SM, Patterson J. Assessment literacy: definition, implementation, and implications. J Vet Med Educ. 2015;42(1):28–35. https://doi.org/10.3138/jvme.0714-067R1. Medline: 25547906
4 Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–7. https://doi.org/10.1097/00001888-199009000-00045. Medline: 2400509
5 Price M, Handley K, Millar J, O'Donovan B. Feedback: all that effort, but what is the effect? Assess Eval High Educ. 2010;35(3):277–89. https://doi.org/10.1080/02602930903541007.
6 Duret D, Christley R, Denny P, Senior A. Collaborative learning with PeerWise. Res Learn Technol. 2018;26:260. https://doi.org/10.25304/rlt.v26.1979.
7 Hudson SL, Jarstfer MB, Persky AM. Student learning with generated and answered peer-written questions. Am J Pharm Educ. 2018;82(2):Article 6315. https://doi.org/10.5688/ajpe6315. Medline: 29606713
8 Kay AE, Hardy J, Galloway RK. Learning from peer feedback on student-generated multiple choice questions: views of introductory physics students. Phys Rev Phys Educ Res. 2018;14(1):Article 010119. https://doi.org/10.1103/PhysRevPhysEducRes.14.010119.
9 McKenzie W, Roodenburg J. Using PeerWise to develop a contributing student pedagogy for postgraduate psychology. Australas J Educ Technol. 2017;33(1):32–47. https://doi.org/10.14742/ajet.3169.
10 Walsh JL, Harris BH, Denny P, Smith P. Formative student-authored question bank: perceptions, question quality and association with summative performance. Postgrad Med J. 2018;94(1108):97–103. https://doi.org/10.1136/postgradmedj-2017-135018. Medline: 28866607
11 Denny P, Luxton-Reilly A, Hamer J. Student use of the PeerWise system. ACM SIGCSE Bulletin [Internet]. 2008;40(3):73–7. https://doi.org/10.1145/1597849.1384293.
12 Denny P, Luxton-Reilly A, Hamer J, Purchase H. Coverage of course topics in a student generated MCQ repository. ACM SIGCSE Bulletin [Internet]. 2009;41(3):11–5. https://doi.org/10.1145/1595496.1562888.
13 Angoff WH. Scales, norms, and equivalent scores. In: Thorndike RL, editor. Educational measurement. 2nd ed. Washington, DC: American Council on Education; 1971. p. 508–600.
14 Rhind SM, Hughes KJ, Yool D, Shaw D, Kerr W, Reed N. Adaptive comparative judgment: a tool to support students’ assessment literacy. J Vet Med Educ. 2017;44(4):686–91. https://doi.org/10.3138/jvme.0616-113R. Medline: 28581915
15 Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13(1):39–54. https://doi.org/10.1111/j.1365-2923.1979.tb00918.x. Medline: 763183
16 Khan R, Payne MW, Chahine S. Peer assessment in the objective structured clinical examination: a scoping review. Med Teach. 2017;39(7):745–56. https://doi.org/10.1080/0142159X.2017.1309375. Medline: 28399690
17 Inayah AT, Anwer LA, Shareef MA, et al. Objectivity in subjectivity: do students' self and peer assessments correlate with examiners' subjective and objective assessment in clinical skills? A prospective study. BMJ Open. 2017;7(5):e012289. https://doi.org/10.1136/bmjopen-2016-012289. Medline: 28487454
18 Brown A, Whittington R, Thomas E, McKay J, Hughes K, Rhind SM. Peer feedback on non-clinical skills: the student perspective. Poster session presented at: VetEd 2017; 2017 Jul 5–7; Liverpool, UK.
19 Liu NF, Carless D. Peer feedback: the learning element of peer assessment. Teach High Educ. 2006;11(3):279–90. https://doi.org/10.1080/13562510600680582.
20 Healey M, Flint A, Harrington K. Engagement through partnership: students as partners in learning and teaching in higher education [Internet]. York, UK: Higher Education Academy; 2014. Available from: https://www.heacademy.ac.uk/knowledge-hub/engagement-through-partnership-students-partners-learning-and-teaching-higher.
21 Deeley SJ, Bovill C. Staff student partnership in assessment: enhancing assessment literacy through democratic practices. Assess Eval High Educ. 2017;42(3):463–77. https://doi.org/10.1080/02602938.2015.1126551.
22 Nicol DJ, Macfarlane-Dick D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud High Educ. 2006;31(2):199–218. https://doi.org/10.1080/03075070600572090.
23 Nash RA, Winstone NE. Responsibility-sharing in the giving and receiving of assessment feedback. Front Psychol. 2017;8:1519. https://doi.org/10.3389/fpsyg.2017.01519.
24 Hughes K, McCune V, Rhind S. Academic feedback in veterinary medicine: A comparison of school leaver and graduate entry cohorts. Assess Eval High Educ. 2013;38(2):1–16. https://doi.org/10.1080/02602938.2011.614682.
25 Medland E. Assessment in higher education: drivers, barriers and directions for change in the UK. Assess Eval High Educ. 2016;41(1):81–96. https://doi.org/10.1080/02602938.2014.982072.
26 Boud D, Molloy E. Rethinking models of feedback for learning: the challenge of design. Assess Eval High Educ. 2013;38(6):698–712. https://doi.org/10.1080/02602938.2012.691462.
27 Medland E. Examining the assessment literacy of external examiners. Lond Rev Educ. 2015;13(3):21–33. https://doi.org/10.18546/LRE.13.3.04.

Published In

Journal of Veterinary Medical Education
Volume 48, Number 2, April 2021
Pages: 158–162
PubMed: 32149588

History

Published online: 9 March 2020
Published in print: April 2021

Key Words: assessment literacy, feedback, peer assessment

Authors

Affiliations

Susan M. Rhind
Biography: Susan M. Rhind, BVMS, PhD, FRCPath, PFHEA, MRCVS, is Director of Veterinary Teaching, Chair of Veterinary Medical Education, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. Her interests include assessment and feedback, e-learning, curriculum development, and student well-being and support. E-mail: [email protected].
Jill MacKay
Biography: Jill MacKay, MSc, PhD, FHEA, is a Research Fellow in Veterinary Medical Education, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. Her interests include the use of technology and e-learning to support learners and educators, relationships between students and lecturers within a classroom, and staff/student experiences within higher education.
Andrew J. Brown
Biography: Andrew J. Brown, MA, VetMB, DipACVECC, MRCVS, was a Senior Lecturer, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. His specific interests included transferrable skills, self-reflection, assessment and feedback, and the student experience.
Caroline J. Mosley
Biography: Caroline J. Mosley, BSc, RVN, FHEA, is Clinical Skills Education Manager, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. Her specific interests include practical skills training from initial animal handling to carrying out clinical procedures safely and effectively.
John M. Ryan
Biography: John M. Ryan, MVB, Dipl.ECVS, MRCVS, is a Lecturer in Small Animal Orthopaedics, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. His research interests include the role of orthopedic surgery in the veterinary undergraduate curriculum, the refining of surgical techniques, and quantitative assessment of the effect of rehabilitation in dogs.
Kirsty J. Hughes
Biography: Kirsty J. Hughes, BVM&S MSc BSc PhD FHEA MRCVS, is a Research Assistant in Veterinary Medical Education, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. Her specific research interests include assessment and feedback, staff development, and educational research methods.
Sharon Boyd
Biography: Sharon Boyd, BSc, is a Senior Lecturer in Distance Student Learning, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush Veterinary Centre, Roslin, Midlothian, EH25 9RG UK. She is a part-time PhD candidate at Moray House School of Education researching digital narrative methods to capture a sense of “place” in research, with a particular focus on the distance learning student campus.
