Northwest Science and Technology Magazine
Fall 2008 / Education



Debunking The MCAT's Drill And Kill Reputation

The envelope clutched between your hands contains the final piece of your medical school application: the results of a recent MCAT exam. A year of flashcards and practice tests boiled down to one skinny white envelope. Images of the future swarm through your mind: late nights at the library in a crimson Harvard sweatshirt, or a study break at the beach, stretched out on a UCLA towel. These images quickly fade, however, as you open the envelope and your heart sinks: the score is lower than expected. What to do?

Well, medical school applicants hoping to convince an admissions board that their low MCAT scores reflect only an inability to recall meaningless facts, and not an inability to solve problems critically, may want to start revising their admissions essays.

Work published in a recent issue of Science contradicts past criticism that the MCAT promotes blind memorization and factual recall over problem-solving skills. To evaluate whether this criticism was valid, a team of researchers at the University of Washington used Bloom's Taxonomy to quantify the level of learning required to answer questions taken from a set of biology-based standardized tests.

Originally published in 1956 by Benjamin Bloom at the University of Chicago, Bloom's Taxonomy was designed as a tool to classify test questions into six different learning domains. The framework starts at the knowledge level, which requires the test taker simply to recall facts, terms, and basic concepts. At the opposite end of the spectrum are the higher-order levels of analysis, synthesis, and evaluation. The language that distinguishes these higher-order questions includes phrases such as: compare and contrast, predict the outcome, and explain why you agree or disagree with the following statement.

"What we set out to test were the claims out in the literature that the MCAT, the AP Biology exam, and most tests from first year medical school courses are low level drill-and-kill exams,” says Scott Freeman, one of the authors of the paper. According to Freeman, their study was the first time Bloom's was used to characterize biology-based standardized tests.

To explore these claims, exam questions from the different sources were compiled, identically reformatted, and placed in random order. Three education experts then determined the level of learning required to answer each question by applying the Bloom's Taxonomy framework.

"Bloom's is not a perfect method, but its six levels provide nice and convenient levels of basic assessment,” says Mary Pat Wenderoth, a lecturer and science education researcher at the UW.

"For a particular topic you can understand it at different levels," adds Freeman, "and Bloom's Taxonomy is a framework for understanding this human cognition." Freeman and his team found not only that a large portion of the MCAT evaluates higher-order critical thinking, but that questions taken from the MCAT were far better at testing higher-order learning than questions collected from first-year medical school classes.

This myth surrounding the MCAT seems to stem from the fact that substantial portions of the MCAT and other standardized tests are made up of multiple-choice questions. Traditionally, multiple-choice questions have been viewed as lower-level questions that are easy to create and grade. Yet Freeman's study has shown that this is not the case with the MCAT, and he is quick to point out that, "yes, low-level multiple-choice questions are easy to write and easy to grade, but most instructors who are careful about what they do will say 'no, this is college and I want my students to learn how to think.'"

So what makes the Association of American Medical Colleges (AAMC), the organization that administers the MCAT, so good at writing these well-thought-out questions?

"It is very hard and takes a long time to write high-level multiple-choice test questions, and the MCAT and GRE can do it because they have lots of money and time," notes Freeman. For a medical school instructor at a large research university, however, finding the time to create such well-crafted multiple-choice questions is nearly impossible.

According to the AAMC, the purpose of the MCAT is to "assess the examinee's problem solving, critical thinking, writing skills, and knowledge of science concepts and principles prerequisite to the study of medicine." Despite past criticism, the MCAT has traditionally been thought of as a good indicator of success in medical school. One of the ironies of Freeman's study, however, is that the MCAT tests at a higher level than the first-year medical school courses the researchers examined.

"This makes sense in the bigger context because classically the first two years of medical education have been taught in the fact-driven format stressing basic science," says Freeman. "Students then go into clinical work in the later years (of medical school), which is when they first have to learn how to analyze and deal with diagnostic tests and data."

Freeman's hope is that this research will encourage medical schools to modify their curricula and begin moving away from the "fact-driven" coursework that makes up most of the first years of a medical student's life.

Joe Baio is a Ph.D. student in chemical engineering at the University of Washington.



