Open access
Research Article
12 July 2016

Approaches to Teaching Biometry and Epidemiology at Two Veterinary Schools in Germany

Publication: Journal of Veterinary Medical Education
Volume 43, Number 4

Abstract

In a thematically broad and highly condensed curriculum like veterinary medicine, it is essential to pay close attention to the didactic and methodical approaches used to deliver the content. Course topics should ideally be selected not only for their relevance but also with regard to the target audience and their previous knowledge. The overall objective is to improve the long-term availability of what has been learned. For this reason, a survey among lecturers of German-speaking veterinary schools was carried out in 2012 to determine which topics in biometry and epidemiology they found relevant to other subject areas. Based on this survey, two veterinary schools (Berlin and Hannover) developed a structured approach for the introductory course in biometry and epidemiology. Through an appropriate choice of topics and the use of adequate teaching methods, the quality of the lecture course could be significantly increased. Clearly communicated learning objectives as well as a high rate of student activity resulted in increased student satisfaction. A certain degree of standardization of teaching approaches and material enabled a comparison between the study sites and reduced variability in the content delivered at different schools. This was confirmed in part by the high consistency of the multiple-choice examination results between the study sites. The results highlight the extent to which didactic and methodical restructuring of teaching affects the learning success and satisfaction of students and can be of interest for other courses in veterinary medicine, human medicine, and biology.

Introduction

Biometry and epidemiology as related subject areas within veterinary medicine are considered fundamental to other courses because the concepts and methodology associated with these subjects have to be applied in numerous areas of veterinary medical activities. In the veterinary medical curriculum, various lecturers often teach identical biometrical and epidemiological methods in the context of their individual subject areas and terminology. Furthermore, the learning objectives of individual subject areas within the veterinary medical curriculum are not always clearly formulated and communicated. As a consequence, the transferability to other subject areas is made difficult and repetition is seen as essential.
In Germany, veterinary medicine is a state examination degree program. Therefore, there is a uniform examination plan, which is applied annually. Before the mandatory implementation of the ordinance concerning the certification of veterinary surgeons (TAppV) in 2006, the biometry and epidemiology course was scheduled in the first year of study. With the inception of the TAppV, this changed, and lecturers can now place the biometry and epidemiology course freely within the curriculum. This has led to different decisions at the teaching sites in Germany and thus to differences in how and when biometry and epidemiology are taught. However, all locations in Germany decided to schedule the introductory biometry and epidemiology course consistently before the clinical lectures. In Hannover and Berlin, the entire course and the examinations are presented in German; relevant content was translated for this publication. In Berlin, the one-semester introductory biometry and epidemiology course takes place in the second semester, whereas in Hannover it is taught in the fourth semester. At both locations, it is a weekly 90-minute class. The class size is 175 veterinary students in Berlin and about 260 in Hannover. Exams usually take place at the end of the semester for every course in the veterinary curriculum.
The introductory course on biometry and epidemiology is usually held in the form of a lecture. Compared to alternative teaching formats such as the seminar, this format involves only a minimal amount of interaction with the students. With regard to the long-term availability of acquired knowledge, lectures provide poor conditions since they do not require the active involvement of students, which constitutes the core element of effective learning.1,2
In addition, biometry and epidemiology are often less attractive to students in the midst of the practically oriented clinical subjects of veterinary medical training. The content appears abstract and is often taught in an isolated manner without any link to their future professional careers.3–6 Instead of student-centered teaching, a purely lecturer-centered knowledge transfer often takes place.
In addition, the introductory biometry and epidemiology course often involves a large amount of material owing to a lack of didactic reduction of the content for the target audience. Renkl7 warns that in this setting the long-term persistence of learning is uncertain; high levels of in-depth learning are generally not possible, and instead superficial learning, or “inert” knowledge, predominates (i.e., knowledge that is accessible and suitable for examinations but not for practical problem solving).2
Despite numerous descriptions of these factors that inhibit learning,3–6 there are very few subject-related published projects that describe how this matter should be addressed.
Therefore, the question arises as to which biometrical and epidemiological content is unanimously considered relevant for all disciplines and whether a pooling and didactic revision of this key knowledge is possible in the introductory course on veterinary medical biometry and epidemiology under the conditions of the German veterinary medical curriculum. This would enable the students to acquire fundamental skills, promoting a long-term availability and transferability of what has been learned to other areas of study as well.
As part of a survey at all veterinary schools in Germany, Austria, and Switzerland (all German-speaking institutions), the question of which subject content colleagues from other areas considered relevant, and when it would best be taught, was clarified.8 However, the question regarding adequate teaching methodology promoting the long-term availability and transferability of learned competences remained open to debate.
In general, the way in which students learn is not a stable characteristic of the individual but rather a reaction to the chosen didactic and methodical teaching concept.2 Through a suitable selection of subject content tailored to the needs of the students, together with didactic and methodical interventions, the subject content of biometry and epidemiology can build on students' previous knowledge and address real problems that concern their lives as veterinarians and their primary interests. Trigwell & Prosser9 described the connection between the teaching concept of lecturers and the corresponding effects on the learning behavior of students. Lecturer-centered teaching produces superficial learning, whereas more student-centered teaching produces in-depth learning, thereby enabling the long-term availability and transferability of learned competences as well.
Through this project, we wanted to verify the following hypotheses:
1. A structured teaching concept, which is based on operationalized learning objectives and a high rate of student activity, increases student satisfaction.
2. Close collaboration and communication of lecturers with a high level of standardization in instruction produces high reproducibility in electronic examination results.
To test these hypotheses, we compiled the following information in cooperation between two institutes responsible for the introductory course in biometry and epidemiology in veterinary medicine: operationalized learning objectives, target-group orientated and structured teaching content, presentation slides, and exercise material from a didactic-methodical point of view. In the summer term of 2014, the biometry and epidemiology lecture course was taught at both locations in parallel under close collaboration and communication using harmonized course material. In addition, success of student learning was measured with the aid of an identical electronic examination at both locations. In what follows, we will present the results of the restructuring as well as the experience gained in the evaluation.

Material and Methods

Teaching Structure

As part of an online questionnaire for all German-speaking veterinary schools, lecturers of all faculties of veterinary medical education were asked to judge 44 selected topics within five categories of biometry and epidemiology (generic skills, basic concepts, descriptive statistics, distribution and probabilities, inductive statistics) with regard to their relevance for their own courses.8
Following this questionnaire, a final selection of relevant topics was made from the survey with input from the various involved disciplines (biometry, epidemiology, veterinary medicine, and didactics). Primarily this focused on the relevance assessment of the topics from the survey. However, there were also topics that were seen as core elements of biometry and epidemiology by the specialist representatives, although they were not rated highly relevant in the questionnaire. A central catalog of topics with 27 core topics arose from this analysis, which constituted the reduced concept of the course (didactic reduction).10 So as to structure the knowledge to be conveyed, these core topics were then taught in the following course modules:
1. Introduction and Data Collection
2. Measures of Location and Variability
3. Measures of Scale and Graphical Representations
4. Association
5. Correlation and Regression
6. Probability
7. Diagnostic Tests
8. Statistical Distributions
9. CIs
10. One-Sample Tests
11. Two-Sample Tests
For each topic, an operationalized learning objective was formulated that the students should have achieved after the course. For the final topics and the operationalized learning objectives, see Appendix 1, Table A1. The operationalization of learning objectives is designed to check whether the students have actually obtained the required knowledge, capabilities, and skills, with the observed processes being described in the learning outcomes of the students.11 As an example, the first operationalized learning objective reads as follows: “Students will be able to define biometry and epidemiology and describe the tasks thereof.” Without explicit operators, this learning objective would possibly read as follows and would therefore not be verifiable: “Students will know the definitions of biometry and epidemiology and the tasks thereof.” The formulation of learning objectives is the first step in successful teaching. They support the choice of adequate teaching methods for conveying the chosen content and serve to verify the teaching success.2 Thus, this step was used to produce more verifiable outcomes for each topic.
As multiple core topics were taught in every course module, there were also several learning objectives pursued per course module.
As a next step, the existing lecture material from both locations was combined, content on highly relevant learning objectives was revised and given more emphasis, and less relevant content was abridged or completely omitted. Moreover, every course module was structured according to an identical basic pattern that incorporated activating elements, so as to compensate for the lecturer-centered teaching format of the course, which could not be changed ad hoc2:
Every course event began with a clear presentation of the course content to be covered, under the title “What is today all about?” This question encouraged the willingness to learn and the motivation of the students by demonstrating reasons why they should get involved in such a learning process. According to Gagné's (1974) events of instruction, such overviews help students get used to the structure of the new subject content and put the new knowledge into perspective in a broader context.12,13
The lecturers then went through the content of the previous lecture by leading a class discussion on the solutions from instructive homework exercises. The students received statistical exercises weekly based on the content of the current lecture. They worked on the exercises alone or in small groups and then presented solutions briefly at the beginning of the following lecture. These exercises served to deepen newly acquired knowledge and skills through practice.12 During the discussion of exercises and solutions, particular care was taken to focus on the students. If students were interested in further training, they could also use the exercise book from previous years. However, this did not follow the new lecture format and therefore was seen as additional, voluntary work for deepening their biometrical skills.
This was followed by a pure lecture period in which the lecturer presented the most important definitions and terms of the current course module. The statistical measurements and terminology were explained in more detail with the help of practical examples from veterinary medicine to connect biometrical and epidemiological knowledge with the previous knowledge and interests of the students. In the course of working on such practical examples, classical solution strategies for biometrical and epidemiological questions were presented to the students that could be practiced and addressed in detail in the homework. For the entire lecture course, so-called constant data records were available for the lecturers. These data sets were introduced at the beginning of the semester so that any statistical measurement and terminology could be explained briefly and precisely by means of these data without having to dwell on several new data structures.
After a maximum lecturing time of 10–15 minutes, there was a short phase of interactivity with the students (Sandwich Principle).14,15 This involved short rhetorical questions with the aim of increasing attention, direct questions to the students in the form of voting or classical opinion polls, and in-class activities in which the students had to work on short tasks that were then discussed in a plenary session. For the in-class activities, the constant data records were partly used. For every activity, the approach, the required material, and the adequate teaching method were recorded in written form in a flowchart.
During and after each weekly lecture, students were engaged in continuous activity. They had to fill in small gaps in the lecture script while the lecturers presented the content, which required continuous attendance and concentration during the lecture. Moreover, short studies with work assignments were presented, on which the students had to work in small groups for about 5 minutes. This was also implemented to actively integrate students into the lectures and to enable in-depth learning.2,12 As an example, in the lecture on correlation and regression, we presented a short study on the correlation between drinking coffee in the sunshine and individual body weight. In this context, the students briefly discussed the meaning of causal factors, confounders, and associated factors.
At the end of every lecture, a presentation to the students titled “What you should be able to do” focused on which learning objectives they should have met after the lecture. This presentation of the learning objectives spelled out the lecturers' expectations and enabled the students, particularly in preparation for the examination, to concentrate on the named sections and ultimately to ensure coverage of all required competences.
New homework exercises were assigned at the end of each lecture. These mostly included one or two statistical tasks that were thematically related to the current lecture and that students had to work on within a week.

Presentation Design

The lecture slides were newly developed as well. A simple and clearly arranged design was adopted. Depending on the type of slide (introduction, definition, example, activity, summary), an identical, recognizable layout was chosen. Nonetheless, the blackboard was also partly used for the comparison of calculated results and short demonstrations of methods. The presentation slides were made available online to the students with small gaps, without calculations and results, so that the students could concentrate on the lecturer's explanations and actively participate in the interaction processes without being distracted by having to take notes. In addition, these slides could be used for preparation and follow-up work for the course.

Constant Data Records

Before the start of the course, all students at both locations received a link to an online survey. They were asked to answer a total of 16 questions on biometrical issues and on their personal environment and individual attitudes regarding veterinary medical issues—for example, how they assess the importance of preventive vaccination and homeopathy. The purpose of collecting the demographic data was to summarize them and use them as constant data records for examples in class and in exercises for the students. Alongside the student demographic data, published data from a retriever health study16 and from a simulated dairy cow study were used. On the one hand, this enabled the students to deal with data to which they could relate. On the other hand, data from two elementary fields of veterinary medicine—small animals and farm animals—were intended to generate a continuous professional connection to important practical areas of veterinary medicine. In the first week of the course, students were introduced to the structures of these constant data records so that they would not have to gain an understanding of the structure of the data during the presentation of new statistical measurements and methods. Instead, they were able to grapple with statistical questions directly.
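For illustration only, a constant data record can be thought of as a small, fixed table that is reused whenever a new statistical concept is introduced. The following minimal Python sketch uses invented variable names and values; the actual survey variables are described above but the records themselves are not reproduced here.

```python
import pandas as pd

# Hypothetical excerpt of a "constant data record" built from a student survey;
# the variable names and values below are invented for illustration.
students = pd.DataFrame({
    "student_id":        [1, 2, 3, 4],
    "body_weight_kg":    [62.0, 71.5, 58.3, 80.1],
    "owns_pet":          ["yes", "no", "yes", "yes"],
    "rates_vaccination": ["important", "important", "neutral", "important"],
})

# The same data set is then reused for each new concept, e.g., a measure of
# location introduced in course module 2:
print(students["body_weight_kg"].mean())
```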

Examinations

The main examinations in the summer term of 2014 were carried out at the end of the entire course. For students who failed or were unable to attend the first examination, a repeat examination was carried out at the beginning of the winter term. Both examinations were carried out in electronic form on computers made available through an examination center. Students at both locations had to answer 40 identical multiple-choice questions. The order of questions and the order of answers within each question were randomized by the online examination system for each student. A list of formulas and a basic, non-graphing pocket calculator were made available as aids. The examination included at least one question for each learning objective. There were five possible answers per question, of which only one was correct (Multiple Choice Type A). The similarity of responses between the two locations—Hannover and Berlin—was measured by correlating the frequencies of correct answers per question.
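As an illustration of this comparison, the following minimal Python sketch computes the percentage of correct answers per question at each site and the Bravais–Pearson correlation between the two resulting vectors. The answer matrices below are randomly generated placeholders, not the actual examination data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder answer matrices: rows = students, columns = the 40 MC questions,
# entries 1 = correct, 0 = incorrect (the real examination data are not shown here).
rng = np.random.default_rng(1)
answers_hannover = rng.integers(0, 2, size=(260, 40))
answers_berlin = rng.integers(0, 2, size=(175, 40))

# Percentage of correct answers per question at each site
pct_hannover = answers_hannover.mean(axis=0) * 100
pct_berlin = answers_berlin.mean(axis=0) * 100

# Bravais-Pearson correlation between the two per-question vectors
r, p = pearsonr(pct_hannover, pct_berlin)
print(f"r = {r:.3f}, p = {p:.3g}")
```

With random placeholder data the correlation is of course near zero; the point of the sketch is only the structure of the calculation.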
Moreover, to undertake a systematic comparison of learning objective achievement in 2014 with that of previous years, the examination questions from Hannover from the previous years (2011, 2012, and 2013) were matched to the learning objectives of 2014. Berlin could not be included in this analysis, as the course was carried out in this form for the first time in the summer term of 2014 and comparable examination papers from previous years were therefore not available. Before the summer term of 2014, examinations in Hannover each consisted of 25 multiple-choice tasks and 4 arithmetical problems with several subtasks. For the analysis and comparison of the years, each individual subtask was retrospectively allocated an operationalized learning objective, so that every examination included 39 to 42 tasks relevant to the learning objectives. In the subsequent step, the examinations from 2011 to 2013 were drawn on: 237 examinations from 2011, 221 from 2012, and 184 from 2013. A simple random sample of 130 examinations from each year was included in the analysis to give an equal sample size for each year and a feasible workload. This was achieved by sorting the examinations of each year by student number and generating, for each student number, a random number smaller than the sample size per year and without repetition. In the final analysis, we included the first 130 examinations, by random number, for each year. A learning objective of a task from the previous years was considered achieved if full points were obtained for that task. If several tasks were assigned to one learning objective, the mean of the percentage of correct answers per task was calculated for that learning objective.
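A simple random sample of this kind could be drawn, for example, as in the following Python sketch, which is equivalent in effect to assigning random numbers and keeping the first 130. The student identifiers are hypothetical placeholders.

```python
import random

def sample_examinations(student_numbers, n=130, seed=2014):
    """Draw a simple random sample of n examinations (without replacement)
    from one examination year, identified by student number."""
    rng = random.Random(seed)
    ordered = sorted(student_numbers)   # sort by student number
    return rng.sample(ordered, n)       # random draw without repetition

# Hypothetical student numbers for one year (e.g., the 237 examinations of 2011)
exams_2011 = [f"H{i:04d}" for i in range(1, 238)]
selected = sample_examinations(exams_2011, n=130)
print(len(selected), selected[:5])
```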

Evaluation

We assessed the subjective improvement of the new course concept by comparing the 2013 and 2014 student evaluation results from Hannover. In 2013, the entire collective of students, in the presence of the lecturer, evaluated the course simultaneously on the final lecture day with the aid of PowerVote.a In 2014, the evaluation was conducted online at the end of the course with the help of LimeSurvey.b Despite technical differences in delivery, the evaluations included identical questions. While the entire group of students who participated in the final lecture class of the semester responded to the evaluation in 2013 (response rate of 100%), the online evaluation in 2014 was voluntary; here a response rate of about 35% was achieved, similar to what can be expected from voluntary online evaluations. Evaluation results from Berlin could not be included in this analysis because no results were available from previous years.
The evaluation in both years was based on a total of 20 questions compiled from Rindermann's validated evaluation sheet.17 These are closed questions, mainly assessed on a four-point ordinal scale, usually ranging from entirely (agreement) to not at all. The general estimation of the entire course was assessed on a scale from 1 (very good) to 6 (inadequate).
The following evaluation aspects were covered by the questions:
1. Structure
2. Analysis
3. Teaching material
4. Relevance
5. Redundancy
6. Requirements
7. Learning (quantitatively)
8. Learning (qualitatively)
9. Individual involvement
10. Communicative teaching forms
11. General estimation
For the statistical analysis of the evaluation results, we used the Chi-square test, grouping the responses entirely and to a large degree versus to a lesser degree and not at all for each semester. For questions with five or six response categories, we calculated 2×5 or 2×6 Chi-square tests. All questions, responses, and p values of the Chi-square tests are shown in Table 1 and Table 2.
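As an illustration of the grouped 2×2 case, a minimal Python sketch using scipy might look as follows. The response counts are invented for this example and are not the published percentages from Table 1.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Invented response counts for one evaluation item (illustration only):
# columns: entirely, to a large degree, to a lesser degree, not at all
counts_2013 = [14, 45, 17, 1]
counts_2014 = [17, 55, 7, 1]

# Group into "positive" (entirely + to a large degree) vs. "negative"
# (to a lesser degree + not at all), giving a 2x2 table per item.
table = [
    [counts_2013[0] + counts_2013[1], counts_2013[2] + counts_2013[3]],
    [counts_2014[0] + counts_2014[1], counts_2014[2] + counts_2014[3]],
]

# Uncorrected Pearson Chi-square test on the 2x2 table
chi2, p, dof, expected = chi2_contingency(table, correction=False)
if (expected < 5).any():
    # Fall back on Fisher's exact test when an expected cell count is below 5
    _, p = fisher_exact(table)
print(f"p = {p:.3f}")
```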
Table 1: Comparison of the student evaluation results of 2013 and 2014 in Hannover (part 1)

Response proportions are shown for the summer semester 2013 (n=79) and the summer semester 2014 (n=80).

| Item | 2013: Entirely | 2013: To a large degree | 2013: To a lesser degree | 2013: Not at all | 2014: Entirely | 2014: To a large degree | 2014: To a lesser degree | 2014: Not at all | χ2-test, p value* |
|---|---|---|---|---|---|---|---|---|---|
| 1. Structure | | | | | | | | | |
| • The content structure of the course is logical and comprehensible. | 18.2% | 58.4% | 22.1% | 1.3% | 21.3% | 68.8% | 8.8% | 1.3% | .024 |
| • The lecturer sums up the contents regularly. | 12.8% | 53.8% | 30.8% | 2.6% | 32.5% | 51.3% | 16.3% | 0.0% | .013 |
| 2. Analysis | | | | | | | | | |
| • The content is illustrated with examples. | 59.0% | 32.1% | 7.7% | 1.3% | 58.8% | 35.0% | 5.0% | 1.25% | .518 |
| • The importance of covered biometrical and epidemiological topics was conveyed. | 23.1% | 51.3% | 20.5% | 5.1% | 36.3% | 51.3% | 11.3% | 1.25% | .035 |
| • The examples used establish a connection between theory and practical veterinary application. | 26.6% | 46.8% | 24.1% | 2.5% | 35.0% | 55.0% | 7.5% | 2.5% | .007 |
| • Non-veterinary medical examples (chocolate data, jelly babies, etc.) additionally support the learning process. | 44.3% | 34.2% | 17.7% | 3.8% | 40.0% | 42.5% | 15.0% | 2.5% | .523 |
| 3. Teaching material | | | | | | | | | |
| • The lecture course slides were helpful. | 17.7% | 49.4% | 20.3% | 12.7% | 36.3% | 47.5% | 16.3% | 0.0% | .015 |
| • The exercise book was helpful. | 44.2% | 45.5% | 9.1% | 1.3% | 45.0% | 40.0% | 11.3% | 3.8% | .386 |
| 5. Redundancy | | | | | | | | | |
| • Course content often overlaps unnecessarily with content in other courses. | 2.53% | 0.0% | 5.1% | 92.4% | 0.0% | 0.0% | 30.0% | 70.0% | .245 |
| 7. Learning (quantitatively) | | | | | | | | | |
| • My level of knowledge is significantly greater after the course than before. | 3.8% | 35.4% | 41.8% | 19.0% | 7.5% | 55.0% | 32.5% | 5.0% | .003 |
| • The chosen topics give a good overview of the subject content of biometry and epidemiology and have fulfilled my expectations. | 15.8% | 48.7% | 30.3% | 5.3% | 21.3% | 63.8% | 13.8% | 1.3% | .003 |
| 8. Learning (qualitatively) | | | | | | | | | |
| • I have a more fundamental understanding than before the course. | 6.3% | 38.0% | 39.2% | 16.5% | 20.0% | 52.5% | 20.0% | 7.5% | <.001 |
| • I learned something useful and important in the course. | 7.7% | 33.3% | 47.4% | 11.5% | 5.0% | 58.8% | 27.5% | 8.8% | .004 |
| 9. Individual involvement | | | | | | | | | |
| • The lecturer encourages questions and active participation. | 35.9% | 47.4% | 12.8% | 3.8% | 43.8% | 47.5% | 8.8% | 0.0% | .135 |
| • Independent completion of practical exercises is encouraged. | 16.7% | 28.2% | 42.3% | 12.8% | 53.8% | 38.8% | 7.5% | 0.0% | <.001 |
| 10. Communicative teaching forms | | | | | | | | | |
| • Communicative teaching forms are used (e.g., group work). | 2.6% | 3.9% | 22.4% | 71.1% | 17.5% | 48.8% | 31.3% | 2.5% | <.001 |

* Values were calculated on the basis of uncorrected Pearson χ2-tests (grouped 2×2 tables); values of p<.05 were considered significant.
Cells have expected counts less than 5; the Chi-square test may not be a valid test, therefore Fisher's exact test was used.
Table 2: Comparison of the student evaluation results from 2013 and 2014 in Hannover (part 2)

Response proportions are shown for the summer semester 2013 (n=79) and the summer semester 2014 (n=80).

4. Relevance: The course as such is relevant (profession, doctoral thesis). (χ2-test, p value*: .297)

| Semester | Entirely | To a large degree | To a lesser degree | Not at all | Cannot be judged |
|---|---|---|---|---|---|
| Summer semester 2013 | 11.4% | 25.3% | 17.7% | 8.9% | 36.7% |
| Summer semester 2014 | 12.5% | 32.5% | 15.0% | 6.3% | 33.8% |

5. Redundancy: My previous knowledge was… (χ2-test, p value*: <.001)

| Semester | Far too little | Partly too little | Exactly right | A lot was known | Everything was known |
|---|---|---|---|---|---|
| Summer semester 2013 | 34.6% | 38.5% | 21.8% | 3.8% | 1.3% |
| Summer semester 2014 | 10.0% | 51.3% | 22.5% | 16.3% | 0.0% |

6. Requirements: The amount of the syllabus was… (χ2-test, p value*: <.001)

| Semester | Far too much | A bit too much | Exactly right | A bit too little | Far too little |
|---|---|---|---|---|---|
| Summer semester 2013 | 19.0% | 49.4% | 30.4% | 1.3% | 0.0% |
| Summer semester 2014 | 0.0% | 35.0% | 63.8% | 1.3% | 0.0% |

6. Requirements: The pace of the course is… (χ2-test, p value*: .031)

| Semester | Far too fast | A bit too fast | Exactly right | A bit too slow | Far too slow |
|---|---|---|---|---|---|
| Summer semester 2013 | 11.4% | 45.6% | 30.4% | 12.7% | 0.0% |
| Summer semester 2014 | 2.5% | 45.0% | 46.3% | 6.3% | 0.0% |

11. General estimation: If a grade were to be given for the entire course, I would give the course the following grade: (χ2-test, p value*: .039)

| Semester | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Summer semester 2013 | 5.1% | 37.2% | 37.2% | 7.7% | 12.8% | 0.0% |
| Summer semester 2014 | 11.3% | 36.3% | 36.3% | 13.8% | 1.3% | 1.3% |

* Values were calculated on the basis of uncorrected Pearson χ2-tests (2×5 and 2×6 tables); values of p<.05 were considered significant.
Cells have expected counts less than 5; the Chi-square test may not be a valid test, therefore Fisher's exact test was used.

Results

In what follows, we present the results of the didactic and methodical restructuring of the introductory lecture course on biometry and epidemiology. These include a comparison of the 2014 examination results of the vet schools in Hannover and Berlin, the achievement of learning objectives in the years before the restructuring in Hannover, and the comparison of evaluation results from Hannover with those of the year before restructuring.

Comparison of the Examination Results between the Teaching Sites

To compare the examination results between the vet schools of Hannover and Berlin, we considered the relative frequency of correct answers per question, both for the main examination and for the repeat examination. This revealed that the percentages of correct answers at the two locations in the main examination were closely correlated, with a Bravais–Pearson correlation coefficient of r=0.886 (Figure 1).
Figure 1: Correlation of the percentage of correct answers per examination question in the main examination (r=0.886)
In the results of the repeat examination, this effect could be observed as well. The correlation coefficient according to Bravais–Pearson was r=0.864 (Figure 2).
Figure 2: Correlation of the percentage of correct answers per examination question in the repeat examination (r=0.864)
Additionally, we analyzed the tasks according to their categories (see Figures 1 and 2). Tasks in the category of generic skills were answered correctly at a high percentage in both examinations. Tasks in the category of basic concepts were also consistently answered with a high degree of correctness. In the categories of descriptive statistics and of distribution and probabilities, the percentages of correct answers varied substantially among the individual tasks. Tasks in the category of inductive statistics tended to be answered with a lower percentage of correctness.
In addition, we tried to analyze the examination results from previous years. As far as possible, every subtask of the arithmetical part from the examinations of the previous years (2011, 2012, and 2013) was retrospectively matched to an explicit learning objective from 2014. This was difficult because often one subtask could be allotted to several learning objectives, or a single subtask did not fulfill a whole learning objective. The learning objective of a subtask was considered achieved if the full points for this task were obtained. In previous years, consequential errors had been taken into account in the grading of arithmetic-based tasks. For example, despite incorrect final results in individual subtasks, total points could be achieved in the remaining subtasks. Therefore it was not reasonable to decide for every subtask if a student had achieved the learning objective or not. In contrast, in a purely multiple-choice question examination like the one used in the summer semester of 2014, consequential errors in arithmetic-based tasks could not be taken into account. Such a task could only be assessed as correct or incorrect based on the final result. In addition, the students were presumably already familiar with the structure of the multiple-choice questions because no serious restructuring of the examination format had been undertaken since 1999.
Consideration of the results from previous years also indicated considerable fluctuations in the examination results between 2011 and 2013 (not reported here in detail). These fluctuations could have occurred due to the different teams of lecturers and graders. In contrast, an entirely multiple-choice examination is more objective and systematic.18 Consequently, the explicit comparison between the years was not considered useful as the examination format had changed considerably and the learning objectives were only implicit before 2014.

Comparative Analysis of Evaluation Results

The p values of the Chi-square tests show that the proportions of positive student evaluations in all evaluation aspects either improved in 2014 compared to the previous year or were evaluated positively in both years. In response to the statement “I have a more fundamental understanding than before the course,” 44.3% of students in 2013 chose entirely or to a large degree. With the new teaching format in 2014, a total of 72.5% chose entirely or to a large degree (p<.001). Very strong effects were also recorded for the statements “Independent completion of practical exercises is encouraged” and “Communicative teaching forms are used (e.g., group work).” In 2013, 44.9% (practical exercises) and 6.5% (communicative teaching forms) answered with entirely or to a large degree. In 2014, after the implementation of interactive teaching elements, 92.6% (practical exercises) and 66.3% (communicative teaching forms) answered with entirely or to a large degree (p<.001). There were significant differences between the years in the categories of “previous knowledge” and “amount of the syllabus” (p<.001 for both). These results show that through the restructured teaching format in 2014, the content became more connected to the curriculum and more relevant to the target audience and their previous knowledge. The general estimation of the course shifted from higher numbers to lower numbers and thus to a better overall course rating (p=.039).
In six questions, a slight tendency towards lesser relevance was seen. For example, the question about the usefulness of the exercise book from previous years was attributed less relevance in comparison to 2013 (p=.386). In all other non-significant evaluation cases, the assessment was already very positive in 2013 and remained positive in 2014. The detailed evaluation results are presented in Tables 1 and 2.

Discussion

In the context of the introductory course on biometry and epidemiology, the aim of the didactic and methodical restructuring measures described was to convey the main and relevant content clearly to the students and to promote the long-term availability of what they had learned. The explicit selection of lecture content and the formulation of learning objectives, as described by Sullivan et al., were essential to the success of this project.19 Unfortunately, there are no available publications with comparable examples, evaluations of process, and accompanying data. However, the Teaching and Didactics of Biometry project group of the German Region of the International Biometric Society (IBS-DR), the Didactics and Communication Skills working group of the German Veterinary Medical Society (DVG), and the German Society for Medical Education (GMA) all address the aim of improving teaching processes, and all are working on innovative teaching formats and materials.20
From a subject-specific point of view, and taking into account the standardization of learning processes and the course evaluation by students and lecturers, the course restructuring within the scope of this project resulted in significant improvements over previous years. The high rate of student activity during the lectures and the degree of standardization of teaching approaches and materials fulfilled their purposes. The evaluation results improved significantly in almost all aspects. In particular, implementing numerous active methods enabled in-depth learning, promoting the long-term availability of learned competences and reducing the superficial accumulation of knowledge.2 Therefore, the first hypothesis was clearly confirmed: a structured teaching concept, which is based on operationalized learning objectives and a high rate of student activity, increases student satisfaction. The high correlation of the percentage of correct answers for each examination question between Hannover and Berlin shows that the use of uniform teaching material and the application of identical didactic methods at the two study sites resulted in high objectivity and repeatability in instruction and thus in a high reproducibility of examination results.
Similarly, the implementation of the electronic examination format created a higher level of fairness in the assessment of students by eliminating person-dependent factors. Thus the second hypothesis was also confirmed: close collaboration and communication of lecturers with a high level of standardization in instruction produces high reproducibility in electronic examination results.
As part of this project, a central catalog of learning objectives was developed for both university locations. In previous years, partly identical learning objectives had been followed implicitly; however, these had not been clearly defined and presented. To make the success of the didactic and methodical restructuring measurable, we attempted a retrospective learning objective classification for the examination topics covered in Hannover from 2011 to 2013. In this process, sets of questions related to the explicit 2014 learning objectives were identified, and on this basis we attempted a comparison of the years. One clear difficulty was the retrospective classification of our conventional, paper-based exams administered between 2011 and 2013, owing to the numerous influencing factors. We nevertheless consider the results of the comparison valid, indicating that factors leading to fluctuations in the examination results in previous years were minimized through the restructuring of the course and examination format in 2014.
Due to administrative difficulties, evaluation of the course in the summer semester of 2014 had to occur online rather than during the lecture through PowerVote, the interactive voting system, as was the case in 2013. This might have led to distortions in the results. In the conventional method of evaluation, the students assess the course in the presence of the lecturer during the course session. Kordts-Freudinger and Geithner21 argue that this may lead to strong social control of the participants, who may then be inclined to complete the evaluation as quickly as possible and probably more positively. In 2013, the entire collective of students participated in the evaluation; it can thus be concluded that all schools of thought were represented to the same extent. In online course evaluations, however, there is a self-selection of critical course participants due to the greater anonymity of the format.21 Furthermore, the 2013 evaluation took place before the examination, whereas the 2014 evaluation took place only afterwards. Despite these facts, the categories in the 2014 evaluation improved or were rated similarly well compared to the previous year, with the exception of two cases tending in the other direction. For example, the assessment of the exercise book scored slightly lower in 2014. Due to the numerous arithmetical tasks in the lecture and the homework assignments, which were clearly tailored to the current teaching content, students in the summer semester of 2014 may not have needed further practice, or they may have seen the additional exercises from the exercise book as too general compared to the specific weekly tasks.

Conclusion

Didactics and methods courses for teaching biometry and epidemiology are not popular among lecturers in the field of veterinary medicine. However, the differentiated application of didactic and methodical measures as part of the restructuring of a basic course on biometry and epidemiology resulted in significant improvements in student satisfaction and in the reproducibility of examination results. These improvements resulted from the connection of biometrical and epidemiological subject content to veterinary medical issues, as well as from the employment of activating methods (improving student involvement) during lectures. Through this restructured teaching format, we expect improved long-term availability of learned competences, as well as their transferability, which needs to be explicitly evaluated in future studies. During this project, colleagues from other subjects at the other German-speaking veterinary schools expressed great interest in our implementation of didactic and methodical approaches to courses, and our measures were extremely well received during various presentations of our research project. Although seminars on teaching methods are increasingly offered at universities, these seminars are often too generally oriented, and lecturers often lack the time, and see too little relevance, to participate in such training opportunities voluntarily.
In general, we feel the communication among individual departments of veterinary schools is in need of improvement. In particular, the success and consistency of veterinary education would benefit from information exchanges among members of staff from biometry and epidemiology, clinics, and institutions. The individual subject disciplines rely on efficient communication and cooperation. Students are in need of basic biometrical and epidemiological knowledge and skills for interdisciplinary teamwork in the scope of future careers. In this regard, we do not want to train statisticians but rather to lay the foundations for successful cooperation in future.22
With the success of this cooperative project, we have shown how constructive it can be to increase communication among lecturers of biometry and epidemiology at various locations. Therefore, an increase in communication among lecturers at the various German-speaking veterinary schools is strongly recommended.

Acknowledgments

We would like to thank Prof. Dr. Jan P. Ehlers and the e-learning team for their helpful advice in the scope of this project. Furthermore, our thanks go to Dagmar Kuhnke for technical assistance.
Financial support for this project was provided by KELDAT, a competence center for veterinary education established by the German-speaking veterinary universities.

Footnotes

a. PowerVote GmbH, http://www.powervote.com
b. LimeSurvey open source survey application available from http://www.limesurvey.org

REFERENCES

1. Stelzer-Rothe T, Brinker T. Kompetenzen in der Hochschullehre: Rüstzeug für gutes Lehren und Lernen an Hochschulen [Competences in higher education: tools for good teaching and learning at universities]. 2nd ed. Rinteln: Merkur-Verlag; 2008.
2. Fabry G. Medizindidaktik: ein Handbuch für die Praxis [Medical education: a practical guide]. 1st ed. Bern: Huber; 2008.
3. Miles S, Price GM, Swift L, et al. Statistics teaching in medical school: opinions of practising doctors. BMC Med Educ. 2010;10(1):75. https://doi.org/10.1186/1472-6920-10-75. Medline:21050444
4. Bland JM. Teaching statistics to medical students using problem-based learning: the Australian experience. BMC Med Educ. 2004;4(1):31. https://doi.org/10.1186/1472-6920-4-31. Medline:15588318
5. Dhand NKT, Peter C. Scenario-based approach for teaching biostatistics to veterinary students. Durban, South Africa: IASE/ISI Satellite; 2009.
6. Duffield T, Lissemore K, Sandals D. Teaching the principles of health management to first year veterinary students. J Vet Med Educ. 2003;30(1):64–6. https://doi.org/10.3138/jvme.30.1.64. Medline:12733097
7. Renkl A. Träges Wissen: wenn Erlerntes nicht genutzt wird [Inert knowledge: if acquired knowledge is not used]. Psychol Rundsch. 1996;47(2):78–92.
8. Zeimet R, Kreienbrock L, Doherr MG. Teaching biostatistics and epidemiology in the veterinary curriculum: what do our fellow lecturers expect? J Vet Med Educ. 2015;42(1):53–65. https://doi.org/10.3138/jvme.0314-029R2. Medline:25572336
9. Trigwell K, Prosser M, Waterhouse F. Relations between teachers' approaches to teaching and students' approaches to learning. High Educ. 1999;37(1):57–70. https://doi.org/10.1023/A:1003548313194
10. Stary J. Das didaktische Kernproblem—Verfahren und Kriterien der didaktischen Reduktion [The didactic core problem—methods and criteria of didactic reduction]. In: Berendt B, Voss H-P, Wildt J, editors. Neues Handbuch Hochschullehre: Lehren und Lernen effizient gestalten [New handbook for higher education: make teaching and learning efficient]. Stuttgart: Raabe; 2004. p. 1–22. Chapter A 1.2.
11. Mager RF. Lernziele und Unterricht [Learning objectives and teaching]. 1st ed. Weinheim: Beltz; 1994.
12. Dubs R. Gut strukturiert und zielgerichtet. Tipps zur Vorbereitung und Durchführung von Vorlesungen [Well structured and targeted. Advice for preparing and conducting lectures]. In: Berendt B, Voss H-P, Wildt J, editors. Neues Handbuch Hochschullehre: Lehren und Lernen effizient gestalten [New handbook for higher education: make teaching and learning efficient]. Stuttgart: Raabe; 2004. p. 1–23. Chapter E 2.5.
13. Gagne RM, Briggs LJ, Wager WW. Principles of instructional design. 1st ed. New York: Holt, Rinehart and Winston; 1974.
14. Kornacker J, Venn M. Einsatz aktivierender Methoden in der Hochschuldidaktik: Steigerung des Lernerfolges durch Aktivierung in Großgruppen [Use of activating methods in teaching at universities: increasing learning success through activation of large groups]. In: Berendt B, Wildt J, Szczyrba B, editors. Neues Handbuch Hochschullehre: Lehren und Lernen effizient gestalten [New handbook for higher education: make teaching and learning efficient]. Stuttgart: Raabe; 2013. p. 1–34. Chapter C 2.24.
15. Voss H-P. Die Vorlesung. Probleme einer traditionellen Veranstaltungsform und Hinweise zu ihrer Lösung [The lecture. Problems of a traditional teaching form and instructions for resolving them]. In: Berendt B, Voss H-P, Wildt J, editors. Neues Handbuch Hochschullehre: Lehren und Lernen effizient gestalten [New handbook for higher education: make teaching and learning efficient]. Stuttgart: Raabe; 2002. p. 1–10. Chapter E 2.1.
16. Brümmer A. Gesundheit, Krankheitshäufigkeiten und Todesursachen bei Retrievern: Auswertungen einer Besitzer-Befragung [Health, disease frequency, and causes of death among retrievers: evaluation of owner surveys] [dissertation]. Gießen, Germany: Justus-Liebig-Universität Gießen; 2008.
17. Rindermann H, Amelang M. Das Heidelberger Inventar zur Lehrveranstaltungs-Evaluation (HILVE): Handanweisung [The Heidelberg Inventory for Course Evaluation (HILVE): manual]. 1st ed. Heidelberg: Asanger Roland Verlag; 1998.
18. Brauer M. An der Hochschule lehren [Teaching at the university]. 1st ed. Berlin: Springer VS; 2014.
19. Sullivan LM, Hooper L, Begg MD. Effective practices for teaching the biostatistics core course for the MPH using a competency-based approach. Public Health Rep. 2014;129(4):381–92. Medline:24982544
20. Rauch G, Muche R, Vonthein R, editors. Zeig mir Biostatistik! Ideen und Material für einen guten Biometrie-Unterricht [Show me biostatistics! Ideas and material for good biometrical education]. 1st ed. Berlin: Springer Spektrum; 2014. https://doi.org/10.1007/978-3-642-54336-4
21. Kordts-Freudinger R, Geithner E. Online- versus Papier-Evaluation in der Hochschuldidaktik. Ein Erfahrungsbericht [Online versus paper evaluation in university teaching. An experience report]. Personal- und Organisationsentwicklung in Einrichtungen der Lehre und Forschung (P-OE) [Personal and organizational development in institutions of teaching and research]. 2011;2/3:73–77. Available from: http://www.universitaetsverlagwebler.de/inhalte/poe-2%2B3-2011.pdf
22. Burkholder I. Coole Biometrie—Eiskalt erwischt! [Cool biometry—caught cold handed!]. In: Rauch G, Muche R, Vonthein R, editors. Zeig mir Biostatistik! Ideen und Material für einen guten Biometrie-Unterricht [Show me biostatistics! Ideas and material for good biometrical education]. Berlin: Springer Spektrum; 2014. p. 15–24.

Appendix 1

Overall Learning Objective

The aim of this lecture is to provide students with the basics of population-based studies and the most important statistical methods for analyzing relevant veterinary medical data. Students should understand the needs, possibilities, and limitations of basic statistical analyses and should be able to perform simple statistical calculations.

Categories

1. Generic skills
2. Basic concepts
3. Descriptive statistics
4. Distribution and probabilities
5. Inductive statistics
Table A1: Learning objectives for the introductory course in biometry and epidemiology within the veterinary curriculum

| Cat. | No. | Learning objective | Description |
|---|---|---|---|
| 1 | 1.1 | Definition and function of biometry (biostatistics) and epidemiology | Students will be able to define biometry and epidemiology and describe the tasks thereof |
| 1 | 1.2 | Definition of veterinary public health | Students will be able to define the concept of veterinary public health |
| 2 | 2.1 | Relationship between population parameter (true value, e.g., prevalence) and corresponding estimated value of a sample | Students will be able to explain the concept of random sampling and the relationship between population parameters and corresponding estimated values of a sample |
| 2 | 2.2 | Definition and interpretation of epidemiological measures such as prevalence, (cumulative) incidence, incidence density, mortality | Students will be able to apply and assess central epidemiological measures of morbidity (prevalence, [cumulative] incidence, and mortality [total, disease-specific]) |
| 3 | 3.1 | Level of measurement values (nominal, ordinal, interval, ratio), including descriptive statistics | Students will be able to assign levels of measurement values in practical examples and elucidate differences between various scales |
| 3 | 3.2 | Calculation of central tendency and dispersion measures | Students will be able to characterize data by using known central tendency and dispersion measures (mean, median, mode, minimum, maximum, range, quantiles, proportion) |
| 3 | 3.3 | Meaning of arithmetic mean, standard deviation, and standard error | Students will be able to assess the importance of basic descriptive measures (objective 3.2) by analyzing data appropriately with the help of these measures |
| 3 | 3.4 | Creating graphics for the description of measured values, and their distributions and relative frequencies | Students will be able to illustrate frequency distributions of measured values graphically according to their level of measurement (circle graph, bar graph, histogram, box plot) |
| 3 | 3.5 | Interpretation of graphics for measured value distributions | Students will be able to interpret data presented by different graphics correctly |
| 4 | 4.1 | Calculating probabilities | Students will be able to apply the basic rules for calculating probabilities (addition, multiplication, conditional probabilities) |
| 4 | 4.2 | Bayes theorem about conditional probabilities | Students will be able to use the Bayes theorem to calculate predictive values of diagnostic tests |
| 4 | 4.3 | Diagnostic test characteristics | Students will be able to explain the difference between apparent and true prevalence and evaluate the quality of a diagnostic test based on diagnostic test characteristics (sensitivity, specificity) |
| 4 | 4.4 | Truth content/diagnostic value of a test result (positive and negative predictive value) | Students will be able to calculate and interpret positive and negative predictive values to judge the validity of a diagnostic test result |
| 4 | 4.5 | Understanding probability and randomness, knowledge of simple probability functions (binomial distribution, normal distribution) | Students will be able to describe the difference between continuous and discrete probability functions and characterize variables based on probabilities and distributions |
| 4 | 4.6 | Definition and meaning of Bernoulli and binomial distribution | Students will be able to describe the model of the Bernoulli experiment and the resulting binomial distribution and apply the binomial distribution for calculating probabilities |
| 4 | 4.7 | Knowledge of the Gaussian distribution, assessment of normality | Students will be able to use the normal distribution as a special continuous probability distribution and assess if a normal distribution can be assumed for present data |
| 4 | 4.8 | Normal ranges for continuous (interval scale) measurements (e.g., blood or urine parameters) | Students will be able to evaluate results of medical findings using clinical reference values (“normal ranges”) |
| 5 | 5.1 | Definition and calculation of the CI for an estimated population parameter | Students will be able to calculate and interpret CIs for an estimated population parameter (mean, proportion) from a random sample |
| 5 | 5.2 | Difference between normal ranges and CIs | Students will be able to distinguish between clinically relevant normal ranges and statistically based CIs |
| 5 | 5.3 | Link between sample size and CIs | Students will be able to explain the influence of sample size on the width of the CI |
| 5 | 5.4 | Concept of sample size for epidemiological studies | Students will be able to define the concept of simple random sampling and explain the influence of sample size on precision of estimated parameters (CIs) |
| 5 | 5.5 | Type I (alpha) error, type II (beta) error, and power on the example of one-sample tests | Students will be able to explain null hypothesis, alternative hypothesis, type I error, type II error, and power on the example of statistical one-sample tests |
| 5 | 5.6 | Interpretation of the p value of a statistical test | Students will be able to explain statistical significance on the p value of a statistical test and type I error (alpha) |
| 5 | 5.7 | Selection of an appropriate statistical test for comparison of means or proportions between two groups | Students will be able to define the concept of simple statistical test procedures including the conditions of their applicability (level of measurement values, assumptions) |
| 5 | 5.8 | Statistical two-group comparisons | Students will be able to perform correct statistical methods in a practical example for comparison of measurements between two experimental groups (t-test, Chi-square test) and interpret the results |
| 5 | 5.9 | Correlations between continuous measurements | Students will be able to calculate and interpret correlation coefficients (Pearson, Spearman) and parameters of a simple linear regression model |
| 5 | 5.10 | Link between binary disease outcome and risk factor | Students will be able to calculate and interpret the relative risk (RR) and odds ratio (OR) based on cross-tabulated information |
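As a worked illustration of learning objectives 4.2 to 4.4, the following calculation shows how the predictive values of a diagnostic test follow from Bayes' theorem. The values for sensitivity, specificity, and prevalence are invented for this example and are not part of the course material.

```latex
% Worked example with invented values: sensitivity Se = 0.95,
% specificity Sp = 0.90, true prevalence P(D) = 0.10.
\begin{align*}
\mathrm{PPV} = P(D \mid T^{+})
  &= \frac{Se \cdot P(D)}{Se \cdot P(D) + (1 - Sp)\,\bigl(1 - P(D)\bigr)}
   = \frac{0.95 \cdot 0.10}{0.95 \cdot 0.10 + 0.10 \cdot 0.90} \approx 0.51, \\
\mathrm{NPV} = P(\bar{D} \mid T^{-})
  &= \frac{Sp\,\bigl(1 - P(D)\bigr)}{Sp\,\bigl(1 - P(D)\bigr) + (1 - Se)\,P(D)}
   = \frac{0.90 \cdot 0.90}{0.90 \cdot 0.90 + 0.05 \cdot 0.10} \approx 0.99.
\end{align*}
```

Despite the good test characteristics, the low prevalence keeps the positive predictive value near 50%, which is the kind of interpretation objective 4.4 asks for.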

Information & Authors

Published In

Journal of Veterinary Medical Education
Volume 43, Number 4, Winter 2016
Pages: 332–343
PubMed: 27404550

History

Published online: 12 July 2016
Published in print: Winter 2016

Key Words

  1. didactics
  2. education
  3. learning objectives
  4. learning outcomes
  5. statistics
  6. students

Authors

Affiliations

Ramona Zeimet
Biography: Ramona Zeimet, Master of Education, is Research Assistant in the Department of Biometry, Epidemiology and Information Processing, University of Veterinary Medicine Hannover, Bünteweg 2, D-30559 Hannover, Germany. Email: [email protected]. Her area of research is biostatistical education in veterinary medicine.
Lothar Kreienbrock
Biography: Lothar Kreienbrock, Prof. Dr. rer. nat., is Head of the Department of Biometry, Epidemiology and Information Processing, University of Veterinary Medicine Hannover, Bünteweg 2, D-30559 Hannover, Germany. Email: [email protected].
Marcus G. Doherr
Biography: Marcus G. Doherr, Prof. Dr. med. vet., PhD, Dipl. ECVPH, is Head of the Institute of Veterinary Epidemiology and Biostatistics, Dept. Veterinary Medicine, Free University of Berlin, Germany. Email: [email protected].
