Ricardian Explorer: Early Results

by Alberto Isgut (aisgut@wesleyan.edu)
Economics Department
Wesleyan University

Experiments are becoming a prominent teaching tool in economics. The interactive nature of classroom experiments allows students to experience economic concepts as active participants. Furthermore, after an experiment instructors can facilitate discussion using the Socratic method, complementing the traditional delivery of material in lecture format. Holt (1999) provides a comprehensive analysis of the history and practice of employing experiments for instructional purposes. Classroom economic experiments are often run with paper and pencil, an implementation that significantly constrains instructors because of time requirements and class-size considerations, so instructors are often reluctant to consider experiments as a teaching tool. Yet some key economic concepts can be demonstrated very powerfully by having students act them out for themselves.

Several venues are available for instructors who would like to adopt this teaching method. Bergstrom and Miller (2000) wrote an introductory microeconomics textbook that presents new material solely through experiments. The Journal of Economic Education regularly publishes articles on classroom experiments, and the “Teaching Tips” section of Economic Inquiry periodically features examples of experiments. The Journal of Economic Perspectives has an ongoing column that provides short descriptions of various experiments. The annual conferences of the Southern Economic Association, the Western Economic Association, and the Economic Science Association regularly include sessions on experiments in the classroom, and the National Science Foundation sponsors a faculty workshop on classroom experiments.

Despite successful experiences at a number of schools, including liberal arts colleges like Wesleyan, most of the evidence on the teaching effectiveness of such games has so far been anecdotal. However, several studies now under way compare performance on specific exam questions between classes that were taught through experiments and classes that followed a standard lecture protocol while covering the same material.

We have begun outlining a methodology for such a study within our learning objects project (see http://learningobjects.wesleyan.edu/about/assessment.html). Without having fully implemented our plans, we have begun to test the effectiveness of the Ricardian Explorer game as a tool for teaching comparative advantage by comparing students' midterm exam scores on a question about the Ricardian model in two different International Trade courses. Students in the first course, in the Spring of 2002, played the Ricardian Explorer game towards the end of the semester, after the midterm exam. In contrast, students in the second course, in the Spring of 2003, played it before the midterm exam. Because the midterm exam question on the Ricardian model was very similar in structure in both courses, comparing the mean scores for that question across the two courses provides a possible indicator of the pedagogical effectiveness of the Ricardian Explorer game. The total test scores, the scores for the question on the Ricardian model, and the average scores for the other questions are presented for the two courses in Table 1:

Table 1: Raw data from tests

           Spring 2002 Test                        Spring 2003 Test
Student    Total    Ricardo   Other     Student    Total    Ricardo   Other
1           80.0      70.0     82.5     1           57.0      85.0     50.0
2           89.0      85.0     90.0     2           56.0      70.0     52.5
3           98.0      95.0     98.8     3           71.0      85.0     67.5
4           79.0      92.5     75.6     4           68.0      70.0     67.5
5           86.0     100.0     82.5     5           58.0      60.0     57.5
6           90.5      92.5     90.0     6           96.0      80.0    100.0
7           90.0      90.0     90.0     7           80.0      85.0     78.8
8           80.0      60.0     85.0     8           72.0      95.0     66.3
9           65.0      67.5     64.4     9           50.0      60.0     47.5
10          81.0      90.0     78.8     10          97.0     100.0     96.3
11          76.0      90.0     72.5     11          85.0      85.0     85.0
Average     83.1      84.8     82.7     12          55.0      55.0     55.0
                                        13          69.0      70.0     68.8
                                        14          89.0      70.0     93.8
                                        15          61.0      65.0     60.0
                                        16          87.0      60.0     93.8
                                        17          67.0      65.0     67.5
                                        18          77.0     100.0     71.3
                                        19          83.0      95.0     80.0
                                        Average     72.5      76.6     71.5

Although in both courses students performed relatively better on the Ricardian model question than on the rest of the test, the difference in favor of the Ricardian model question is more marked in the second course (76.6 against 71.5, compared with 84.8 against 82.7 in the first course), in which students were exposed to the Ricardian Explorer game before taking the test.

Conducting a statistical test is problematic because of the heterogeneity of the students' performance. As the table shows, the average total score fell from 83.1 in the first course to 72.5 in the second. One way to account for this heterogeneity is to scale each student's score on the Ricardian question by the overall class average for the test. An alternative scaling method is to divide each student's score on the Ricardian question by that student's own average score on the other questions in the test. These alternative scores, which I refer to as Measure 1 and Measure 2, are presented in Table 2.
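To make the two scalings concrete, the following Python sketch computes both measures from the Spring 2002 scores in Table 1; the variable names are illustrative and not part of the original calculations:

    # Measure 1 and Measure 2 as described above, computed for Spring 2002.
    total   = [80.0, 89.0, 98.0, 79.0, 86.0, 90.5, 90.0, 80.0, 65.0, 81.0, 76.0]
    ricardo = [70.0, 85.0, 95.0, 92.5, 100.0, 92.5, 90.0, 60.0, 67.5, 90.0, 90.0]
    other   = [82.5, 90.0, 98.8, 75.6, 82.5, 90.0, 90.0, 85.0, 64.4, 78.8, 72.5]

    class_mean = sum(total) / len(total)            # overall test average, about 83.1

    # Measure 1: Ricardian-question score relative to the class-wide test average.
    measure1 = [r / class_mean for r in ricardo]

    # Measure 2: Ricardian-question score relative to the student's own average
    # on the other questions.
    measure2 = [r / o for r, o in zip(ricardo, other)]

    print(round(sum(measure1) / len(measure1), 3))  # about 1.020, as in Table 2
    print(round(sum(measure2) / len(measure2), 3))  # about 1.032, as in Table 2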

Table 2: Scaled data

           Spring 2002 Test                      Spring 2003 Test
Student    Measure 1   Measure 2     Student     Measure 1   Measure 2
1            0.84        0.85        1             1.17        1.70
2            1.02        0.94        2             0.97        1.33
3            1.14        0.96        3             1.17        1.26
4            1.11        1.22        4             0.97        1.04
5            1.20        1.21        5             0.83        1.04
6            1.11        1.03        6             1.10        0.80
7            1.08        1.00        7             1.17        1.08
8            0.72        0.71        8             1.31        1.43
9            0.81        1.05        9             0.83        1.26
10           1.08        1.14        10            1.38        1.04
11           1.08        1.24        11            1.17        1.00
Average      1.020       1.032       12            0.76        1.00
St. Dev.     0.155       0.167       13            0.97        1.02
                                     14            0.97        0.75
                                     15            0.90        1.08
                                     16            0.83        0.64
                                     17            0.90        0.96
                                     18            1.38        1.40
                                     19            1.31        1.19
                                     Average       1.056       1.107
                                     St. Dev.      0.200       0.254

The transformed data show that the students in the Spring 2003 course did better on the Ricardian model question both relative to the overall class average (1.056 against 1.020) and relative to their own performance on the other questions (1.107 against 1.032). Unfortunately, the standard deviations are fairly large, so given the small sample sizes the t statistics for the differences in means are not statistically significant at conventional levels (0.55 for Measure 1 and 0.97 for Measure 2). We can conclude that although the results suggest that the Ricardian Explorer game is an effective tool for teaching the Ricardian model of trade and the concept of comparative advantage, more testing with larger class sizes is still needed.
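For readers who want to reproduce the test, the sketch below computes a two-sample t statistic from the group means, standard deviations, and sample sizes in Table 2, assuming unequal variances (Welch's formulation); whether the figures reported above used this exact variant is an assumption.

    from math import sqrt

    def two_sample_t(mean1, sd1, n1, mean2, sd2, n2):
        # t statistic for a difference in means, not assuming equal variances (Welch).
        se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
        return (mean1 - mean2) / se

    # Measure 1: Spring 2003 (n = 19) against Spring 2002 (n = 11).
    print(round(two_sample_t(1.056, 0.200, 19, 1.020, 0.155, 11), 2))  # roughly 0.55

    # Measure 2: Spring 2003 against Spring 2002.
    print(round(two_sample_t(1.107, 0.254, 19, 1.032, 0.167, 11), 2))  # roughly 0.97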
