In a traditional science class, the teacher stands at the front of the class lecturing to a largely passive group of students. Those students then go off and do back-of-the-chapter homework problems from the textbook and take exams that are similar to those exercises.
The research has several things to say about this pedagogical strategy, but I’ll focus on three findings—the first about the retention of information from lecture, the second about understanding basic concepts, and the third about general beliefs regarding science and scientific problem-solving. The data I discuss were mostly gathered in introductory college physics courses, but these results are consistent with those of similar studies done in other scientific disciplines and at other grade levels. This is understandable, because they are consistent with what we know about cognition.
Retaining Information
Lectures were created as a means of transferring information from one person to many, so an obvious topic for research is the retention of the information by the many. The results of three studies — which can be replicated by any faculty member with a strong enough stomach — are instructive.
The first is by Joe Redish, a highly regarded physics professor at the University of Maryland. Even though the students thought his lectures were wonderful, Joe wondered how much they were actually learning. So he hired a graduate student to grab students at random as they filed out of class at the end of the lecture and ask, "What was the lecture you just heard about?" It turned out that the students could respond only with the vaguest of generalities.
Zdeslav Hrepic, N. Sanjay Rebello, and Dean Zollman at Kansas State University carried out a much more structured study. They asked 18 students from an introductory physics class to attempt to answer six questions on the physics of sound and then, primed by that experience, to get the answers to those questions by listening to a 14-minute, highly polished commercial videotaped lecture given by someone who is supposed to be the world’s most accomplished physics lecturer.
On most of the six questions, no more than one student was able to answer correctly.
In a final example, a number of times Kathy Perkins and I have presented some non-obvious fact in a lecture along with an illustration, and then quizzed the students 15 minutes later on the fact. About 10 percent usually remember it by then. To see whether we simply had mentally deficient students, I once repeated this experiment when I was giving a departmental colloquium at one of the leading physics departments in the United States. The audience was made up of physics faculty members and graduate students, but the result was about the same—around 10 percent.
Given that there are thousands of traditional science lectures being given every day, these results are quite disturbing. Do these findings make sense? Could this meager transfer of information in lectures be a generic problem?
These results do indeed make a lot of sense and probably are generic, based on one of the best-established, yet most widely ignored, results of cognitive science: the extremely limited capacity of short-term working memory. The research tells us that the human brain can hold a maximum of about seven different items in its short-term working memory and can process no more than about four ideas at once.
Exactly what an “item” means when translated from the cognitive science lab into the classroom is a bit fuzzy. But the number of new items that students are expected to remember and process in the typical hour-long science lecture is vastly greater. So we should not be surprised to find that students are able to take away only a small fraction of what is presented to them in that format.
Understanding Basic Concepts
We physicists believe that one of the great strengths of physics is that it has a few fundamental concepts that can be applied very widely. This has inspired physics-education researchers to study how well students are actually learning the basic concepts in their physics courses, particularly at the introductory level.
These researchers have created some good assessment tools for measuring conceptual understanding. Probably the oldest and most widely used of these is the Force Concept Inventory (FCI) (see Hestenes, 1992, in the references below). This instrument tests students' mastery of the basic concepts of force and motion, which are covered in every first-semester postsecondary physics course.
The FCI is composed of carefully developed and tested questions that usually require students to apply the concepts of force and motion in a real-world context, such as explaining what happens when a car runs into a truck. The FCI, now administered in hundreds of courses annually, is normally given at the beginning and end of the semester to see how much students have learned during the course.
Richard Hake compiled the FCI results from 14 different traditional courses and found that in the traditional lecture course, students master no more than 30 percent of the key concepts that they didn’t already know at the start of the course.
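(To make the "30 percent" figure concrete: results of this kind are usually reported as a normalized gain, the fraction of the initially missing concepts that a class actually learned over the semester. The numbers worked through below are purely illustrative, not data from any particular course.)

\[
\langle g \rangle \;=\; \frac{\%\,\text{post} - \%\,\text{pre}}{100 - \%\,\text{pre}}
\qquad\text{e.g.}\qquad
\frac{58 - 40}{100 - 40} \;=\; 0.30
\]

So a class that averages 40 percent on the FCI before instruction and 58 percent afterward has learned only about 30 percent of the concepts its students did not already know coming in.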
Similar sub-30-percent gains are seen in many other unpublished studies and are largely independent of lecturer quality, class size, and institution. The consistency of those results clearly demonstrates that the problem is in the basic pedagogical approach: The traditional lecture is simply not successful in helping most students achieve mastery of fundamental concepts.
Pedagogical approaches involving more interactive engagement of students show consistently higher gains on the FCI and similar tests.
Affecting Beliefs Regarding Science and Scientific Problem-Solving
Students believe certain things about what physics is and how one goes about learning the discipline, as well as how one solves problems in physics. If you interview a lot of people, you find that their beliefs lie on a spectrum that ranges from “novice” to “expert.”
My research group and others have developed survey instruments that can measure where on this scale a person’s beliefs lie.
What do we mean by a “novice” in this context?
To adapt a characterization developed by David Hammer: novices see the content of physics instruction as isolated pieces of information, handed down by an authority and disconnected from the world around them, that they can learn only by memorization. To the novice, scientific problem-solving is just matching the pattern of the problem to certain memorized recipes.
Experts (i.e., physicists) see physics as a coherent structure of concepts that describe nature and that have been established by experiment. Expert problem-solving employs systematic, concept-based, and widely applicable strategies. Because such strategies work even in completely new situations, they are far more useful than the novice's pattern-matching approach.
Once you develop the tools to measure where people’s beliefs lie on this expert-to-novice scale, you can see how students’ beliefs change as a result of their courses. What you would expect, or at least hope, is that students would begin their college physics course somewhere on the novice side of the scale and that after completing the course they would have become more expert-like in their beliefs.
What the data say is just the opposite.
On average, students have more novice-like beliefs after they have completed an introductory physics course than they had when they started; this was found for nearly every introductory course measured. More recently, my group started looking at beliefs about chemistry. If anything, the effect of taking an introductory college chemistry course is even worse than that of taking physics.
So we are faced with another puzzle about traditional science instruction. This instruction is explicitly built around teaching concepts and is being provided by instructors who, at least at the college level, are unquestionably experts in the subject. And yet their students are not learning concepts, and they are acquiring novice beliefs about the subject. How can this be?
We will discuss that in Part 3.
REFERENCES:
W. Adams et al. (2005), Proceedings of the 2004 Physics Education Research Conference, J. Marx, P. Heron, and S. Franklin, eds., American Institute of Physics, Melville, NY, p. 45.
R. Hake (1998), American Journal of Physics, 66, 64.
D. Hammer (1997), Cognition and Instruction, 15, 485.
D. Hestenes, M. Wells, and G. Swackhamer (1992), The Physics Teacher, 30, 141.
Z. Hrepic, D. Zollman, and N. Rebello, "Comparing students' and experts' understanding of the content of a lecture," to be published in the Journal of Science Education and Technology. A preprint is available at http://web.phys.ksu.edu/papers/2006/Hrepic_comparing.pdf
E. Mazur (1997), Peer Instruction: A User's Manual, Prentice Hall, Upper Saddle River, NJ.
G. Novak, E. Patterson, A. Gavrin, and W. Christian (1999), Just-in-Time Teaching: Blending Active Learning with Web Technology, Prentice Hall, Upper Saddle River, NJ.
K. Perkins et al. (2005), Proceedings of the 2004 Physics Education Research Conference, J. Marx, P. Heron, and S. Franklin, eds., American Institute of Physics, Melville, NY, p. 61.
E. Redish (2003), Teaching Physics with the Physics Suite, Wiley, Hoboken, NJ.
*****
Originally published in Change magazine, September/October 2007.