The Development of Expertise in Scientific Research
DAVID F. FELDON
Abstract
Scientific research is a fundamental mechanism for both advancing human knowledge about the natural world and facilitating technological innovations that fuel
economic development. As such, understanding and optimizing the pathways to
expertise and professional success in this endeavor is vital to ensure sustained
intellectual and financial benefits of scientific research. This essay summarizes the
research on the development of expertise in the sciences from the psychology of
science and research on graduate education. It then examines new research
trends that present an emerging picture of a specific trajectory for the
development of research skills and that frame these skills as direct outcomes
of specific training practices, and it identifies new directions for research
that bridge the cognitive and socialization lenses.
INTRODUCTION
Scientific research is a fundamental mechanism for both advancing human
knowledge about the natural world and facilitating technological innovations that fuel economic development. As such, understanding and
optimizing the pathways to expertise and professional success in this
endeavor is vital to ensure sustained intellectual and financial benefits of
scientific research. As a general practice, science uses empirical evidence
and logical inference to identify generalizable principles that explain the
mechanisms of natural phenomena. It also reflects standards of quality
judged by replicability (i.e., empirical findings supporting a conclusion
can be obtained consistently across studies) and parsimony (i.e., the ability
of an empirically supported theory to explain the widest possible range
of instances in the simplest way). Although the nature of scientific work
has changed dramatically in recent decades with regard to pace, structure
of research teams, relevant technologies, and sources of funding (Austin
& McDaniels, 2006), the core facets of expertise in conducting scientific
Emerging Trends in the Social and Behavioral Sciences.
Robert Scott and Marlis Buchmann (General Editors) with Stephen Kosslyn (Consulting Editor).
© 2016 John Wiley & Sons, Inc. ISBN 978-1-118-90077-2.
research have remained stable: Scientists must systematically collect, analyze, and interpret data about the natural world and effectively present their
conclusions in a manner that meets the standards of rigor, relevance, and
novelty.
Expertise in scientific research requires more nuanced consideration, however. The science disciplines (e.g., biology and chemistry) can be understood
as distinct intellectual endeavors within which scholars share common sets
of research questions, methods of inquiry, and intellectual approaches to
solving problems (Kuhn, 1962). Disciplines manifest their differences from
one another through distinctions in the use and value of different forms of
evidence (e.g., logical, observational, or experimental) or the privileging of
data-driven (inductive) or theory-driven (deductive) modes of reasoning
(Bauer, 1992). Thus, each discipline maintains its own “epistemic culture”
(Knorr-Cetina, 1997, p. 260) with its own norms, jargon, theories, and essential skills that can overlap to varying degrees with another. Accordingly,
expertise is most precisely characterized within a disciplinary context, and
even categories of skills frequently conceptualized across the sciences make
reference to the state or standards of the discipline. For example, research
skills that are commonly measured across disciplines include identifying
and framing a meaningful and productive question for investigation based
on the existing state of knowledge in the researcher’s discipline, formulating
a testable research hypothesis based on a specific question, designing a
valid experiment or empirical test of the hypothesis, and interpreting data
by relating results to the original hypothesis and drawing appropriate,
supportable conclusions (Kardash, 2000). Collectively, these skills result in
the construction of disciplinary arguments, the
mastery of which is considered essential for successful scientists (Kiley &
Wisker, 2009).
Beyond the component skills required to produce valid and accepted
research, scientists also become recognized as experts for the importance of
their research findings in terms of the impact that they have on the accepted
understanding or practical application of the phenomena of interest. Increasingly, interdisciplinary research is proving effective for understanding many
complex problems, despite the significant challenges it poses for scientists
to communicate and collaborate across disciplinary lines (Lattuca, 2001).
However, its practice arises from the border traffic between disciplines,
which serve as the foundations for complementary modes of expertise
(Frodeman & Mitcham, 2007). As such, the capacity of discrete disciplinary
foundations of scientific expertise to collectively frame a larger problem is
essential for productive interdisciplinary research.
FOUNDATIONAL RESEARCH
Two traditions frame the current understanding of expertise development in
research. The first is the psychology of science, which characterizes the cognitive mechanisms of scientific reasoning. The second is socialization theory,
which frames the development of research competence as a transactional
process of learning to participate as a member of a specific disciplinary
community—often under a mentor’s guidance in the form of a cognitive
apprenticeship.
COGNITION IN SCIENTIFIC PROBLEM SOLVING
In the cognitive tradition, solving problems for which solutions are not yet
known is often referred to as the search of a problem space. The space is
composed of all possible routes to move from the initial state of knowledge
(i.e., all relevant known and unknown factors) to the solution (Newell &
Simon, 1972). Experts navigate smaller problem spaces than novices because
the number of possible routes is constrained by the knowledge of the individual attempting to solve the problem. Strategies known to be ineffective
or inapplicable within the constraints of the problem or disciplinary lens are
excluded from the solution search preemptively.
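To make this idea concrete, consider a minimal sketch of a problem-space search. The states, operators, and the "expert exclusions" below are invented for illustration only; the point is simply that pruning strategies known to be ineffective shrinks the space that must be searched.

```python
# Toy problem space (after Newell & Simon): states are integers, "strategies"
# are operators, and the goal is to reach TARGET from START. Expert knowledge
# is modeled as excluding operators known to be ineffective for this problem.
START, TARGET = 1, 10

all_strategies = {
    "double": lambda x: x * 2,
    "add_one": lambda x: x + 1,
    "square": lambda x: x * x,
    "negate": lambda x: -x,   # known dead end for this problem class
    "reset": lambda x: 0,     # known dead end for this problem class
}
expert_exclusions = {"negate", "reset"}

def search(strategies, max_depth=5):
    """Breadth-first search of the problem space.
    Returns (goal_found, number_of_states_explored)."""
    frontier, seen, explored = [START], {START}, 0
    for _ in range(max_depth):
        next_frontier = []
        for state in frontier:
            explored += 1
            if state == TARGET:
                return True, explored
            for op in strategies.values():
                new_state = op(state)
                if new_state not in seen:
                    seen.add(new_state)
                    next_frontier.append(new_state)
        frontier = next_frontier
    return False, explored

novice_found, novice_explored = search(all_strategies)
expert_found, expert_explored = search(
    {k: v for k, v in all_strategies.items() if k not in expert_exclusions})
print(novice_explored, expert_explored)  # the pruned search visits fewer states
```

Both searches reach the goal, but the constrained one examines fewer states along the way, mirroring the claim that expert knowledge preemptively narrows the solution search.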
Klahr and Dunbar (1988) conceptualize scientific problem solving as the
simultaneous search of two adjacent, interacting problem spaces: the theoretical space (What is the correct explanation or principle?) and the methodological space (What are the effective means of investigating its validity?).
To resolve the theoretical problem, scientists select a hypothesis based on a
synthesis of current knowledge in the field that yields the most likely explanation or appropriate governing principle. Selection of this conjecture constrains the possible range of research designs because it specifies what type of
events must be observed and what measures should be used to align with the
theoretical framework identified. Similarly, selection of a research methodology constrains the possible findings and derivative conclusions because
choices have been made about which aspects of the phenomenon will and
will not be examined. Thus, what is known about a problem constrains the
range of possible solution paths through the problem space, which possible
solution paths are attempted determines the data obtained, and those data in
turn inform the conclusions reached. As each conclusion shapes the theoretical understanding of the problem, the knowledge-based constraints on the
search of the methodological problem space change, resulting in new experimental designs that can in turn yield novel insights to advance the theories
used to explain the phenomenon.
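The reciprocal constraint between the two spaces can be sketched in a few lines. The hidden rule and candidate hypotheses below are purely illustrative inventions: surviving hypotheses (theoretical space) determine which experiment is most informative (methodological space), and each result prunes the hypothesis set in turn.

```python
# Toy dual-space search (after Klahr & Dunbar, 1988).
def hidden_rule(x):          # the "true" mechanism the scientist cannot see
    return 2 * x + 1

hypotheses = {
    "y = 2x + 1": lambda x: 2 * x + 1,
    "y = 3x": lambda x: 3 * x,
    "y = x + 2": lambda x: x + 2,
    "y = x * x": lambda x: x * x,
}

def most_discriminating(hyps, candidates=range(-5, 6)):
    """Methodological space: choose the input on which the surviving
    hypotheses disagree the most (the most informative design)."""
    return max(candidates, key=lambda x: len({h(x) for h in hyps.values()}))

while len(hypotheses) > 1:
    x = most_discriminating(hypotheses)   # design constrained by theory space
    y = hidden_rule(x)                    # run the "experiment"
    hypotheses = {name: h for name, h in hypotheses.items() if h(x) == y}

print(list(hypotheses))  # only the hypothesis consistent with every result survives
```

Note that an undiscriminating design (e.g., x = 1, where three of the four candidate rules predict the same value) would yield data that prune almost nothing, which is why the choice of experiment is itself a search problem.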
Given the large number of possible paths in the search of each problem
space and the many reciprocal interactions that can change the problem space
constraints, an essential component of research training is developing the
ability to define research problems as narrowly as possible to establish a problem space of manageable size within which to work (Klahr & Simon, 2001).
One trait of expert scientists working in their own domains of expertise is
the use of optimized strategies in the design of new research. When experts
are asked to design experiments to solve problems outside their field of
specialization, the efficiency of their problem-solving strategies decreases
(Schraagen, 1993).
Scientists also use a number of well-documented cognitive strategies to aid
in their navigation of the theoretical and methodological problem spaces.
For example, expert scientists make extensive use of analogical reasoning to
focus on solution paths that have increased likelihoods of success. Across
many studies, experts’ verbalized thinking includes a consistent pattern of
finding similarities between new problems and those that have been previously solved (Nersessian & Chandrasekharan, 2009). A second key practice is mental simulation, in which scientists conceptualize likely outcomes
by mentally constructing new scenarios within their mental models of relevant theory and projecting how known mechanisms will operate under new
conditions (Christensen & Schunn, 2009). In addition, across both conceptual and physical experimentation, experts manage mental load by using external
representations of observed phenomena and of their thinking about them,
such as diagrams, specialized notation systems, and standardized computational algorithms (Cheng & Simon, 1995).
SOCIALIZATION AND COGNITIVE APPRENTICESHIP AS MECHANISMS OF EXPERTISE
DEVELOPMENT
The training of scientists overwhelmingly takes place in university graduate programs. Attainment of the PhD is intended to signal the readiness of a
scientist to contribute to their chosen discipline through the independent production of research (Lovitts, 2001). The process of graduate education is often
conceptualized as a process of acculturating a student into their chosen discipline (Austin & McDaniels, 2006). Models of graduate student socialization
entail both formal and informal interactions with both formal mentors and
“significant others … in a department, a laboratory, a disciplinary network,
or a university and its resources” (Pearson & Brew, 2002, p. 141).
Among these varied influences, the faculty mentor is generally held to be
primary, bearing the responsibility for structuring a student’s cognitive apprenticeship. As a concept, cognitive apprenticeship was originally developed as
an instructional paradigm for informal secondary school settings. However,
it is now a fixture in graduate education to the extent that it is considered
“the signature pedagogy of doctoral education” (Golde, Conklin Bueschel,
Jones, & Walker, 2009, p. 54).
Cognitive apprenticeships are intended to be a structured way to make
visible the problem-solving processes of the scientist through modeling,
scaffolding, and coaching in order to nurture the mentee’s development
of expertise. This process is especially important for the sciences because
much of the work performed is conceptual in nature and thus not directly
observable. Indeed, Delamont and Atkinson (2001) suggest that successful
students “master … tacit, indeterminate skills and knowledge, produce
usable results, and become professional scientists [who] learn to write public
accounts of their investigations which omit the uncertainties, contingencies,
and personal craft skills” (p. 88). Consequently, graduate student engagement in supervised research activity is linked both to students’ perceptions
of their own research abilities and to their identities as scientists (Holley,
2009). To account for the broader group of individuals who interact around
disciplinary topics and the process of research, distributed mentorship can
occur through a “community of practice” in which “the participants actively
communicate about and engage in the skills involved in expertise, where
expertise is understood as the practice of solving problems and carrying
out tasks in a domain” (Collins, Brown, & Holum, 1991, p. 16). Specific to
the context of research training, which typically takes place within research
laboratory settings, the community of practice takes on the form of “cascade
mentoring,” in which “postdoctoral fellows mentor
senior graduate students, senior graduate students mentor junior graduate
students, and junior graduate students mentor undergraduates” (Golde
et al., 2009, p. 57).
Cascade mentoring likely plays a critical role in the preparation of future
scientists because the traditional faculty mentor role is “marked by neglect,
abandonment, and indifference” (Johnson, Lee, & Green, 2000, p. 136) due
to research incentive mechanisms that value research productivity over
instructional responsibilities (Anderson et al., 2011). A supportive community of practice is particularly critical at the initial stages of graduate work,
as many novice students do not fully understand the most effective ways to
navigate their training opportunities or the expectations held for their work
(Golde & Dore, 2001). However, expressions of concern about poor or erratic
graduate research mentorship throughout the doctoral process are prevalent
(Lovitts, 2001), to the extent that some researchers in the field suggest it
may be unavoidable (Johnson et al., 2000). While students consistently
gain exposure to and practice conducting research-relevant tasks, evidence
suggests that faculty do not necessarily change the level of challenge or
nature of the work assigned to their mentees in response to their progressive
skill development (Maher, Gilmore, Feldon, & Davis, 2013).
CUTTING-EDGE RESEARCH
Graduate research training is intended to provide students with the necessary experiences and guidance to develop from a consumer into a producer of research (Weidman, 2010). This process involves a clear sense of
skill development, that is, of learning the skills necessary for expertise in the
field. However, there is surprisingly little research assessing the effects of specific training practices on the extent or efficiency with which targeted skills
are acquired. Consequently, current research on the training of scientists has
shifted the focus of inquiry from descriptive studies to deeper understanding of the developmental pathways traveled by students as they progress
toward the status of independent researchers. There are also an increasing
number of studies that evaluate the impacts of specific training components
in controlled studies. This work is exciting because it helps to develop a more
mechanistic understanding of the training process to the point that it could
inform training decisions made by faculty and the structuring of graduate
programs.
TRAJECTORIES IN THE DEVELOPMENT OF RESEARCH SKILLS
One approach to the conceptualization of skill development is that of threshold concepts, which identifies knowledge and skills that act as bottlenecks to
further expertise development. These concepts are inherently challenging to
understand, and once attained, entail a fundamental restructuring of knowledge that must precede deeper comprehension of advanced disciplinary perspectives. Through interviews with experienced faculty mentors, Kiley and
Wisker (2009) identify three potential threshold concepts: the need to create
knowledge, the need for rigorous analysis (i.e., objectivity in the evaluation
of argument), and a developed sense of paradigm. Attainment of these skills
is considered to be neither gradual nor linear and requires mastery of certain
knowledge or skills before the attainment of others.
The new contribution of these ideas, which otherwise align well with conventional wisdom regarding the nature of independent scholarship, is the
assumption that these thresholds are crossed in a developmental sequence
and that they fundamentally alter students’ abilities to master other, subsequent concepts. If empirical data reflect a series of thresholds that are intrinsic
to research skill development, then doctoral programs could increase their
efficiency and effectiveness by sequencing educational experiences to target
the attainment of threshold concepts upon which others are contingent.
Timmerman, Feldon, Maher, Strickland, and Gilmore (2013) provide converging support for these thresholds at a more nuanced level using a
sample of graduate students from across science disciplines. If research skills
develop sequentially, then scores on the performance of some skills should
be systematically higher than scores on the performance of others at a given
point in time. However, if all skills develop in parallel or the sequencing of
skill development is arbitrary, predictive relationships among skills would
not be evident. In a quantitative analysis of graduate students’ performance
on written research proposals, students’ demonstration of some skills was
contingent upon higher scores in others. That is, basic proficiency on certain
key skills was not demonstrated unless higher levels of performance
were evident in other areas. Specifically, strong performance in the use of
primary literature predicted students’ ability to situate their work within
their respective disciplines, and their ability to appropriately frame research
problems (i.e., establish testable hypotheses) predicted lower levels of performance
on other skills (e.g., data analysis and drawing conclusions based
on data), suggesting that proficiency in the former is a necessary precursor
to the development of the latter.
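The contingency logic behind this analysis can be illustrated with a toy check. The rubric scores below are invented for illustration and are not data from the study; the sketch only shows what a sequential pattern looks like: basic proficiency on a "later" skill appears only among students already scoring highly on an "earlier" one.

```python
# Invented 1-4 rubric scores for four hypothetical students.
students = [
    {"lit_use": 4, "framing": 3, "analysis": 3},
    {"lit_use": 4, "framing": 4, "analysis": 2},
    {"lit_use": 2, "framing": 1, "analysis": 1},
    {"lit_use": 3, "framing": 3, "analysis": 1},
]

def contingent(records, earlier, later, high=3, basic=2):
    """True if every student showing at least basic proficiency on `later`
    also shows high performance on `earlier` (a sequential pattern)."""
    return all(r[earlier] >= high for r in records if r[later] >= basic)

print(contingent(students, "lit_use", "analysis"))  # sequential pattern holds
print(contingent(students, "analysis", "lit_use"))  # reverse direction fails
```

If skills instead developed in parallel or in arbitrary order, no such asymmetric contingency between skill pairs would be expected.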
These trajectories in skill development do not seem to vary based on the
point in students’ education where they begin research training. Gilmore,
Vieyra, Timmerman, Feldon, and Maher (2015) found that first-year graduate
students who had participated in research experiences as undergraduates
demonstrated higher levels of research skills at the beginning of their graduate studies than peers without those experiences. After one academic year,
several of the initial advantages remained statistically significant. Further,
performance gaps between students further along the skill development
trajectory and those with lower levels of proficiency tend to widen over
time, indicating an acceleration of competency development with increased
opportunities to engage in supervised research (Feldon, Maher, Peugh, &
Roksa, 2016).
EVALUATION OF SPECIFIC RESEARCH TRAINING PRACTICES
Research skill development is not a purely cognitive phenomenon. It also
entails interaction with the environment—the research laboratory, the academic program, and the larger discipline—as well as specific behaviors, such
as mentored research activities, writing for publication (faculty–student
coauthoring), and graduate student teaching. The importance of these
aspects of socialization for persistence in doctoral programs is well documented (Lovitts, 2001), but direct impacts and interactions among these
activities for skill development have not been thoroughly researched. Most
scientists receive little or no training in instructional practices (Bianchini,
Whitney, Breton, & Hilton-Brown, 2001) and rely on recollections of their
own graduate school experiences to inform their decisions on how best to
train the students under their supervision. Therefore, new research on the
effectiveness of specific practices is especially important for improving the
quality of research training.
Studies of the impacts of mentorship traditionally examine levels of scholarly productivity as the primary outcome. Paglis, Green, and Bauer (2006)
found that mentored research experiences, coupled with faculty–student
coauthoring, in the second year of a PhD program positively predicted
the quantity of published research 4 years later. Similar results have been
obtained in other studies (e.g., Kademani, Kalyane, Kumar, & Mohan, 2005).
However, publications are not inherently a valid measure of scientific expertise in the sense of individual skills because work is typically distributed
over multiple authors who may make greater or lesser contributions to
various aspects of the project (Feldon, Maher, & Timmerman, 2010).
While not an examination of skill outcomes per se, one recent study compared graduate students’ assessments of their own research skills with their
mentors’ assessments (Feldon, Maher, Hurst, & Timmerman, 2015). It also
compared the students’ and mentors’ perspectives with performance-based
assessments conducted by blind raters in the students’ respective disciplines
as a way to understand mentors’ ability to accurately gauge student skill
development. The findings reflected very poor alignment across the three
sources of assessment. Specifically, students and their faculty mentors disagreed about whether specific research skills were strengths or weaknesses
of the students in 44% of cases. Further, neither the students nor their mentors were able to predict blind-rated performance at better than chance levels.
This raises important concerns about the assumption that faculty mentors
hold privileged insights into students’ skill development because mentors’
opinions of students’ research skills are frequently utilized to inform both
instructional decisions within the context of the student’s graduate training
and decisions about the structure of an academic program. Further, such evaluations serve as the foundation for subsequent letters of recommendation and
the provision of additional opportunities for students perceived to be highly
skilled (Green & Bauer, 1995).
Research on the importance of coauthoring with faculty as a graduate training experience for future scientists has become prevalent in recent years.
The specific pedagogical practices employed by faculty when writing with
graduate students have been documented and framed within a context of
socialization to the relevant academic discipline in response to calls for more
deliberate approaches to coauthorship as a training vehicle (Kamler, 2008).
Qualitative research across multiple countries has provided evidence supporting its importance for skill development. However, only one study to
date has directly compared the skill development of students who coauthored
with faculty mentors to that of students who did not. It found that faculty–student
coauthorship uniquely accounted for 7% of the variance in skill development
over the course of an academic year, representing a moderate positive effect
of coauthorship (Feldon, Shukla, & Maher, 2016).
Graduate students in the sciences are often assigned teaching responsibilities at the undergraduate level. However, the culture at many research universities holds that teaching interferes with the development of research skill
by limiting the time a student can spend working in the laboratory (Anderson
et al., 2011). To test the validity of this assumption, Feldon et al. (2011) implemented a controlled study of graduate students across science and engineering disciplines, comparing the skill growth of students who participated in
supervised research and did or did not also have teaching responsibilities.
Results indicated that graduate students with both research and teaching
experiences demonstrated significantly greater skill growth in the areas of
framing testable hypotheses and designing experiments. Growth in other
research skills did not differ significantly between the groups, indicating that
combining teaching with research activities did not hinder skill development in other aspects of research.
KEY ISSUES FOR FUTURE RESEARCH
The existing scholarship on training in the sciences predominantly utilizes a
socialization framework (Gardner, 2010), in which socialization is defined as
“a process of internalizing the expectations, standards, and norms of a given
society, which includes learning the relevant skills, knowledge, habits, attitudes, and values of the group that one is joining” (Austin & McDaniels, 2006,
p. 400). The society that doctoral students aspire to join is that of the scientists
publishing and conducting research within their chosen discipline. Graduate
students are expected to learn their new roles and with them, the requisite
skills, values, attitudes and expectations, although the opportunities to do so
are not necessarily equally available to all students (Weidman, 2010). These
issues are further complicated in the context of interdisciplinary research
endeavors, which require both graduate students and their faculty mentors
to redefine traditional boundaries and disciplinary norms (Rhoten, 2004).
While valuable as a means of highlighting many aspects of the doctoral student experience, and inequities within those experiences, previous research
on graduate research training elides a crucial aspect of that experience: skill
development. Skills are observed to develop, but there is little empirical
evidence of how they are developed, whether or how their development
varies across different groups of students, and in what ways they might
contribute to other outcomes of graduate education, such as persistence
and scholarly productivity. Those studies that do address skills generally
examine feelings of preparedness for conducting independent research
rather than measures of discrete skills (Delamont & Atkinson, 2001). As
demonstrated by the Feldon et al. (2015) study described previously, such
self-report measures do not accurately predict demonstrable skills. Thus,
establishing robust operational definitions for research skills and utilizing
performance-based measures of skill development are essential for better
understanding the mechanisms and enhancing the quality of expertise
development in scientific research (Lovitts, 2007).
Understanding the interplay among experiences, motivation, and skills
will also provide novel insights into reducing the observed inequity in
research training outcomes. Women and minority populations are consistently underrepresented in many science disciplines. While previous
research has considered inequalities in socialization experiences, skills
represent a crucial missing link. If students from different sociodemographic
groups enter their research training with differential levels of preparation
and research experience (Kim & Sax, 2009), and if new skills build upon
existing skills, this initial difference may produce growing inequalities in
skill levels across different groups of students as they progress through
graduate school (Feldon, Maher, et al., 2016).
Further, identifying the experiences and training mechanisms that directly
influence the development of specific research skills will provide a foundation for new training approaches. Despite burgeoning efforts to train
graduate students to engage in interdisciplinary science, for example, empirical data regarding associations between practices and outcomes are almost
nonexistent (Vanstone et al., 2013). Various interventions and strategies
already exist to address skill gaps and differential success in research training
programs. For example, many “boot camp” interventions emphasize statistical data analysis skills. However, as discussed above, progressions of skill
development seem to begin with the effective reading and use of primary
literature and the ability to generate testable hypotheses (Kiley & Wisker,
2009; Timmerman et al., 2013), with data analysis skills not developing until
later. Thus, it is not clear that early interventions targeting analysis would
be effective or an efficient use of limited training resources. Understanding
how skills are developed over time and how they interact with relevant
motivational constructs can optimize the creation and implementation of
training experiences that both maximize students’ skill development and
reduce inequality by identifying the key predictors of success and the pivotal
time points in the development of expertise in the sciences.
REFERENCES
Anderson, W. A., Banerjee, U., Drennan, C. L., Elgin, S. C. R., Epstein, I. R., Handelsman, J., . . . Warner, I. M. (2011). Changing the culture of science education at
research universities. Science, 331, 153.
Austin, A. E., & McDaniels, M. (2006). Preparing the professoriate of the future: Graduate student socialization for faculty roles. In J. C. Smart (Ed.), Higher education:
Handbook of theory and research (Vol. 21, pp. 397–456). Dordrecht, The Netherlands:
Springer.
Bauer, H. H. (1992). Scientific literacy and the myth of the scientific method. Urbana:
University of Illinois Press.
Bianchini, J. A., Whitney, D. J., Breton, T. D., & Hilton-Brown, B. A. (2001). Toward
inclusive science education: University scientists’ views of students, instructional
practices, and the nature of science. Science Education, 86, 42–78.
Cheng, P., & Simon, H. A. (1995). Scientific discovery and creative reasoning with
diagrams. In S. Smith, T. Ward & R. Finke (Eds.), The creative cognition approach.
Cambridge, MA: MIT Press.
Christensen, B. T., & Schunn, C. C. (2009). The role and impact of mental simulation
in design. Applied Cognitive Psychology, 23, 327–344.
Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making
thinking visible. American Educator, 6, 38–46.
Delamont, S., & Atkinson, P. (2001). Doctoring uncertainty: Mastering craft knowledge. Social Studies of Science, 31, 87–107.
Feldon, D. F., Maher, M. A., Hurst, M., & Timmerman, B. (2015). Faculty mentors’,
graduate students’, and performance-based assessments of students’ research skill
development. American Educational Research Journal, 52, 334–370.
Feldon, D. F., Maher, M., Roksa, J., & Peugh, J. (2016). Cumulative advantage in the
skill development of STEM graduate students: A mixed methods study. American
Educational Research Journal, 53, 132–161.
Feldon, D. F., Maher, M., & Timmerman, B. (2010). Performance-based data in the
study of STEM graduate education. Science, 329, 282–283.
Feldon, D. F., Peugh, J., Timmerman, B. E., Maher, M. A., Hurst, M., Strickland, D., . . . Stiegelmeyer, C. (2011). Graduate students’ teaching experiences improve their
methodological research skills. Science, 333, 1037–1039.
Feldon, D. F., Shukla, K., & Maher, M. A. (2016). Faculty–student coauthorship as a
means to enhance STEM graduate students’ research skills. International Journal of
Researcher Development.
Frodeman, R., & Mitcham, C. (2007). New directions in interdisciplinarity: Broad,
deep, and critical. Bulletin of Science, Technology & Society, 27, 506–514.
Gardner, S. K. (2010). Contrasting the socialization experiences of doctoral students
in high- and low-completing departments: A qualitative analysis of disciplinary
contexts at one institution. The Journal of Higher Education, 81, 61–81.
Gilmore, J. A., Vieyra, M., Timmerman, B. E., Feldon, D. F., & Maher, M. A. (2015).
The relationship between undergraduate research participation and subsequent
research performance of early career STEM graduate students. The Journal of Higher
Education, 86, 834–863.
Golde, C. M., Conklin Bueschel, A., Jones, L., & Walker, G. E. (2009). Advocating
apprenticeship and intellectual community: Lessons from the Carnegie initiative
on the doctorate. In R. G. Ehrenberg & C. V. Kuh (Eds.), Doctoral education and
faculty of the future (pp. 53–64). Ithaca, NY: Cornell University Press.
Golde, C. M., & Dore, T. M. (2001). At cross purposes: What the experiences of doctoral
students reveal about doctoral education. Philadelphia, PA: Pew Charitable Trusts.
www.phd-survey.org.
Green, S., & Bauer, T. (1995). Supervisory mentoring by advisers: Relationships with
doctoral student potential, productivity, and commitment. Personnel Psychology,
48, 537–562.
Holley, K. (2009). Animal research practices and doctoral student identity development in a scientific community. Studies in Higher Education, 34, 577–591.
Johnson, L., Lee, A., & Green, B. (2000). The Ph.D. and the autonomous self: Gender,
rationality, and postgraduate pedagogy. Studies in Higher Education, 25, 135–147.
Kademani, B. S., Kalyane, V. L., Kumar, V., & Mohan, L. (2005). Nobel laureates: Their
publication productivity, collaboration and authorship status. Scientometrics, 62,
261–268.
Kamler, B. (2008). Rethinking doctoral publication practices: Writing from and
beyond the thesis. Studies in Higher Education, 33, 284–294.
Kardash, C. (2000). Evaluation of undergraduate research experience: Perceptions of
undergraduate interns and the faculty mentors. Journal of Educational Psychology,
92, 191–201.
Kiley, M., & Wisker, G. (2009). Threshold concepts in research education and evidence of threshold crossing. Higher Education Research & Development, 28, 431–441.
Kim, Y. K., & Sax, L. J. (2009). Student–faculty interaction in research universities: Differences by student gender, race, social class, and first-generation status. Research
in Higher Education, 50, 437–459.
Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12, 1–55.
Klahr, D., & Simon, H. A. (2001). What have psychologists (and others) discovered
about the process of scientific discovery? Current Directions in Psychological Science,
10, 75–79.
Knorr-Cetina, K. (1997). What scientists do. In T. Ibáñez & L. Íñiguez (Eds.), Critical
social psychology. London, England: Sage Publications.
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of
Chicago Press.
Lattuca, L. R. (2001). Creating interdisciplinarity: Interdisciplinary research and teaching
among college and university faculty. Nashville, TN: Vanderbilt University Press.
Lovitts, B. (2001). Leaving the ivory tower: The causes and consequences of departure from
doctoral study. Lanham, MD: Rowman & Littlefield.
Lovitts, B. E. (2007). Making the implicit explicit: Creating performance expectations for
the dissertation. Sterling, VA: Stylus Publishing.
Maher, M. A., Gilmore, J. A., Feldon, D. F., & Davis, T. E. (2013). Cognitive apprenticeship and the supervision of science and engineering research assistants. Journal
of Research Practice, 9(2, Article M5).
Nersessian, N. J., & Chandrasekharan, S. (2009). Hybrid analogies in conceptual
innovation in science. Cognitive Systems Research, 10, 178–188.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice Hall.
Paglis, L., Green, S. G., & Bauer, T. (2006). Does adviser mentoring add value? A
longitudinal study of mentoring and doctoral student outcomes. Research in Higher
Education, 47, 451–476.
Pearson, M., & Brew, A. (2002). Research training and supervision development.
Studies in Higher Education, 27, 135–150.
Rhoten, D. (2004). Interdisciplinary research: Trend or transition. Items & Issues (Social
Science Research Council), 5, 6–11.
Schraagen, J. (1993). How experts solve a novel problem in experimental design. Cognitive Science, 17, 285–305.
Timmerman, B., Feldon, D. F., Maher, M., Strickland, D., & Gilmore, J. A. (2013).
Performance-based assessment of graduate student research skills: Timing, trajectory, and potential thresholds. Studies in Higher Education, 38, 693–710.
Vanstone, M., Hibbert, K., Kinsella, E. A., McKenzie, P., Lingard, L., Pitman, A.,
& Wilson, T. (2013). Interdisciplinary doctoral research supervision: A scoping
review. Canadian Journal of Higher Education, 43(2), 42–67.
Weidman, J. C. (2010). Doctoral student socialization for research. In S. K. Gardner
& P. Mendoza (Eds.), On becoming a scholar: Socialization and development in doctoral
education (pp. 29–44). Sterling, VA: Stylus Publishing Inc.
DAVID FELDON SHORT BIOGRAPHY
David Feldon is an associate professor of instructional technology and learning sciences and director of the STE2M (Science, Technology, Engineering,
Education, Mathematics) Center at Utah State University. His research examines two primary lines of inquiry: The first characterizes the cognitive components of expertise as they contribute to effective and innovative problem
solving, as well as how they affect the quality of instruction that experts
can provide. The second examines the development of research skills within
STEM disciplines as a function of instruction and other educational support
mechanisms. Recent findings from this work have been published in Science,
The Journal of Higher Education, the Journal of Research in Science Teaching, and
the American Educational Research Journal. Dr. Feldon earned his PhD in educational psychology and his MS in instructional technology from the University
of Southern California, completed his postdoctoral fellowship at UCLA, and
held tenure-track positions at the University of South Carolina, Washington State University, and the University of Virginia before joining the USU
faculty.
RELATED ESSAYS
The Role of Data in Research and Policy (Sociology), Barbara A. Anderson
Expertise (Sociology), Gil Eyal
The Evidence-Based Practice Movement (Sociology), Edward W. Gondolf
Causation, Theory, and Policy in the Social Sciences (Sociology), Mark C.
Stafford and Daniel P. Mears
What is Special about Specialization? (Archaeology), Anne P. Underhill
Translational Sociology (Sociology), Elaine Wethington
The Development of Expertise in
Scientific Research
DAVID F. FELDON
Abstract
Scientific research is a fundamental mechanism for both advancing human knowledge about the natural world and facilitating technological innovations that fuel
economic development. As such, understanding and optimizing the pathways to
expertise and professional success in this endeavor is vital to ensure sustained
intellectual and financial benefits of scientific research. This essay summarizes the
research on the development of expertise in the sciences from the psychology of
science and research on graduate education. Examining new research trends that
present an emerging picture of a specific trajectory for the development of research
skills and frame the development of scientific research skills as direct outcomes of
specific training practices, new directions for research that bridge the cognitive and
socialization lenses are identified.
INTRODUCTION
Scientific research is a fundamental mechanism for both advancing human
knowledge about the natural world and facilitating technological innovations that fuel economic development. As such, understanding and
optimizing the pathways to expertise and professional success in this
endeavor is vital to ensure sustained intellectual and financial benefits of
scientific research. As a general practice, science uses empirical evidence
and logical inference to identify generalizable principles that explain the
mechanisms of natural phenomena. It also reflects standards of quality
judged by replicability (i.e., empirical findings supporting a conclusion
can be obtained consistently across studies) and parsimony (i.e., the ability
of an empirically supported theory to explain the widest possible range
of instances in the simplest way). Although the nature of scientific work
has changed dramatically in recent decades with regard to pace, structure
of research teams, relevant technologies, and sources of funding (Austin
& McDaniels, 2006), the core facets of expertise in conducting scientific
Emerging Trends in the Social and Behavioral Sciences.
Robert Scott and Marlis Buchmann (General Editors) with Stephen Kosslyn (Consulting Editor).
© 2016 John Wiley & Sons, Inc. ISBN 978-1-118-90077-2.
research have remained stable: Scientists must systematically collect, analyze, and interpret data about the natural world and effectively present their
conclusions in a manner that meets the standards of rigor, relevance, and
novelty.
Expertise in scientific research requires more nuanced consideration, however. The science disciplines (e.g., biology and chemistry) can be understood
as distinct intellectual endeavors within which scholars share common sets
of research questions, methods of inquiry, and intellectual approaches to
solving problems (Kuhn, 1962). Disciplines manifest their differences from
one another through distinctions in the use and value of different forms of
evidence (e.g., logical, observational, or experimental) or the privileging of
data-driven (inductive) or theory-driven (deductive) modes of reasoning
(Bauer, 1992). Thus, each discipline maintains its own “epistemic culture”
(Knorr-Cetina, 1997, p. 260) with its own norms, jargon, theories, and essential skills that overlap to varying degrees with those of other disciplines. Accordingly,
expertise is most precisely characterized within a disciplinary context, and
even categories of skills frequently conceptualized across the sciences make
reference to the state or standards of the discipline. For example, research
skills that are commonly measured across disciplines include identifying
and framing a meaningful and productive question for investigation based
on the existing state of knowledge in the researcher’s discipline, formulating
a testable research hypothesis based on a specific question, designing a
valid experiment or empirical test of the hypothesis, and interpreting data
by relating results to the original hypothesis and drawing appropriate,
supportable conclusions (Kardash, 2000). Collectively, these skills result in
the construction of scholarly arguments within a scientific discipline, the
mastery of which is considered essential for successful scientists (Kiley &
Wisker, 2009).
Beyond the component skills required to produce valid and accepted
research, scientists also become recognized as experts through the importance of their research findings, that is, the impact those findings have on the accepted understanding or practical application of the phenomena of interest. Increasingly, interdisciplinary research is proving effective for understanding many
complex problems, despite the significant challenges it poses for scientists
to communicate and collaborate across disciplinary lines (Lattuca, 2001).
However, interdisciplinary practice arises from the border traffic between disciplines, which serve as the foundations for complementary modes of expertise (Frodeman & Mitcham, 2007). As such, robust disciplinary foundations remain essential for productive interdisciplinary research: each discipline contributes a complementary framing of the larger problem.
FOUNDATIONAL RESEARCH
Two traditions frame the current understanding of expertise development in
research. The first is the psychology of science, which characterizes the cognitive mechanisms of scientific reasoning. The second is socialization theory,
which frames the development of research competence as a transactional
process of learning to participate as a member of a specific disciplinary
community—often under a mentor’s guidance in the form of a cognitive
apprenticeship.
COGNITION IN SCIENTIFIC PROBLEM SOLVING
In the cognitive tradition, solving problems for which solutions are not yet
known is often referred to as the search of a problem space. The space is
composed of all possible routes to move from the initial state of knowledge
(i.e., all relevant known and unknown factors) to the solution (Newell &
Simon, 1972). Experts navigate smaller problem spaces than novices because
the number of possible routes is constrained by the knowledge of the individual attempting to solve the problem. Strategies known to be ineffective
or inapplicable within the constraints of the problem or disciplinary lens are
excluded from the solution search preemptively.
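The pruning effect of prior knowledge can be made concrete with a minimal sketch (the state space, operators, goal, and pruning rule below are all invented for illustration; Newell and Simon’s framework is far more general): a breadth-first search in which “expert” knowledge excludes routes known to overshoot the goal before they are ever explored.

```python
from collections import deque

def search(start, goal, operators, allowed=None):
    """Breadth-first search of a toy problem space.

    `operators` maps a state to candidate successor states; `allowed` is an
    optional predicate standing in for prior knowledge, pruning routes
    known to be dead ends before they are explored.
    """
    frontier = deque([start])
    visited = {start}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return len(visited)  # states examined before reaching the goal
        for nxt in operators(state):
            if nxt in visited:
                continue
            if allowed is not None and not allowed(nxt):
                continue  # knowledge excludes this route preemptively
            visited.add(nxt)
            frontier.append(nxt)
    return len(visited)

# Hypothetical space: states are integers 0..30; operators add 1, 2, or 5.
ops = lambda s: [s + d for d in (1, 2, 5) if s + d <= 30]

novice_cost = search(0, 17, ops)                             # no pruning
expert_cost = search(0, 17, ops, allowed=lambda s: s <= 17)  # prunes overshoots

print(novice_cost, expert_cost)  # the expert examines fewer states
```

The point is not the particular search algorithm but the constraint itself: the more the solver knows about which routes cannot lead to a solution, the smaller the space that is actually examined.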
Klahr and Dunbar (1988) conceptualize scientific problem solving as the
simultaneous search of two adjacent, interacting problem spaces: the theoretical space (What is the correct explanation or principle?) and the methodological space (What are the effective means of investigating its validity?).
To resolve the theoretical problem, scientists select a hypothesis based on a
synthesis of current knowledge in the field that yields the most likely explanation or appropriate governing principle. Selection of this conjecture constrains the possible range of research designs because it specifies what type of
events must be observed and what measures should be used to align with the
theoretical framework identified. Similarly, selection of a research methodology constrains the possible findings and derivative conclusions because
choices have been made about which aspects of the phenomenon will and
will not be examined. Thus, what is known about a problem constrains the
range of possible solution paths through the problem space, which possible
solution paths are attempted determine the data obtained, and those data in
turn inform the conclusions reached. As each conclusion shapes the theoretical understanding of the problem, the knowledge-based constraints on the
search of the methodological problem space change, resulting in new experimental designs that can in turn yield novel insights to advance the theories
used to explain the phenomenon.
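A toy simulation can illustrate this reciprocal constraint in the spirit of Klahr and Dunbar’s account (the hidden rule, candidate hypotheses, and experiments below are invented; this is a sketch of the interaction between the two spaces, not their model): the surviving hypotheses determine which experiment is most informative, and each result prunes the hypothesis set in turn.

```python
# Toy dual-space search: hypotheses are candidate rules mapping input to
# output; experiments are inputs chosen to discriminate among survivors.

def true_rule(x):            # the hidden mechanism under investigation
    return 2 * x + 1

hypotheses = {               # the theoretical problem space
    "y = x":      lambda x: x,
    "y = 2x":     lambda x: 2 * x,
    "y = 2x + 1": lambda x: 2 * x + 1,
    "y = x + 1":  lambda x: x + 1,
}
experiments = [0, 1, 2, 3]   # the methodological problem space

log = []
while len(hypotheses) > 1:
    # The current theoretical state constrains the methodological search:
    # pick the experiment whose predicted outcomes diverge the most.
    x = max(experiments,
            key=lambda e: len({h(e) for h in hypotheses.values()}))
    datum = true_rule(x)     # run the experiment, observe data
    # Data in turn prune the theoretical space.
    hypotheses = {name: h for name, h in hypotheses.items() if h(x) == datum}
    log.append((x, datum, sorted(hypotheses)))

print(log[-1][2])  # the surviving hypothesis
```

Here a single well-chosen experiment (x = 2, where all four rules predict different outcomes) collapses the theoretical space to one hypothesis, mirroring the text’s point that each space constrains the search of the other.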
Given the large number of possible paths in the search of each problem
space and the many reciprocal interactions that can change the problem space
constraints, an essential component of research training is developing the
ability to define research problems as narrowly as possible to establish a problem space of manageable size within which to work (Klahr & Simon, 2001).
One trait of expert scientists working in their own domains of expertise is
the use of optimized strategies in the design of new research. When asked to
design experiments to solve problems out of their field of specialization, the
efficiency of their problem-solving strategies decreases (Schraagen, 1993).
Scientists also use a number of well-documented cognitive strategies to aid
in their navigation of the theoretical and methodological problem spaces.
For example, expert scientists make extensive use of analogical reasoning to
focus on solution paths that have increased likelihoods of success. Across
many studies, experts’ verbalized thinking includes a consistent pattern of
finding similarities between new problems and those that have been previously solved (Nersessian & Chandrasekharan, 2009). A second key practice is mental simulation, in which scientists conceptualize likely outcomes
by mentally constructing new scenarios within their mental models of relevant theory and projecting how known mechanisms will operate under new
conditions (Christensen & Schunn, 2009). In addition, across both conceptual and physical experimentation, experts manage cognitive load by using external representations of observed phenomena and of their own reasoning, such as diagrams, specialized notation systems, and standardized computational algorithms (Cheng & Simon, 1995).
SOCIALIZATION AND COGNITIVE APPRENTICESHIP AS MECHANISMS OF EXPERTISE
DEVELOPMENT
The training of scientists overwhelmingly takes place in university graduate programs. Attainment of the PhD is intended to signal the readiness of a
scientist to contribute to their chosen discipline through the independent production of research (Lovitts, 2001). Graduate education is often conceptualized as a process of acculturating a student into their chosen discipline (Austin & McDaniels, 2006). Models of graduate student socialization
entail both formal and informal interactions with both formal mentors and
“significant others … in a department, a laboratory, a disciplinary network,
or a university and its resources” (Pearson & Brew, 2002, p. 141).
Among these varied influences, the faculty mentor is generally held to be
primary with the responsibility for structuring a student’s cognitive apprenticeship. As a concept, cognitive apprenticeship was originally developed as
an instructional paradigm for informal secondary school settings. However,
it is now a fixture in graduate education to the extent that it is considered
“the signature pedagogy of doctoral education” (Golde, Conklin Bueschel,
Jones, & Walker, 2009, p. 54).
Cognitive apprenticeships are intended to be a structured way to make
visible the problem-solving processes of the scientist through modeling,
scaffolding, and coaching in order to nurture the mentee’s development
of expertise. This process is especially important for the sciences because
much of the work performed is conceptual in nature and thus not directly
observable. Indeed, Delamont and Atkinson (2001) suggest that successful
students “master … tacit, indeterminate skills and knowledge, produce
usable results, and become professional scientists [who] learn to write public
accounts of their investigations which omit the uncertainties, contingencies,
and personal craft skills” (p. 88). Consequently, graduate student engagement in supervised research activity is linked both to students’ perceptions of their own research abilities and to their identities as scientists (Holley,
2009). To account for the broader group of individuals who interact around
disciplinary topics and the process of research, distributed mentorship can
occur through a “community of practice” in which “the participants actively
communicate about and engage in the skills involved in expertise, where
expertise is understood as the practice of solving problems and carrying
out tasks in a domain” (Collins, Brown, & Holum, 1991, p. 16). Specific to
the context of research training that typically takes place within research
laboratory settings, the community of practice takes on the form of “cascade
mentoring” in a research lab setting, in which “postdoctoral fellows mentor
senior graduate students, senior graduate students mentor junior graduate
students, and junior graduate students mentor undergraduates” (Golde
et al., 2009, p. 57).
Cascade mentoring likely plays a critical role in the preparation of future
scientists because the traditional faculty mentor role is “marked by neglect,
abandonment, and indifference” (Johnson, Lee, & Green, 2000, p. 136) due
to research incentive mechanisms that value research productivity over
instructional responsibilities (Anderson et al., 2011). A supportive community of practice is particularly critical at the initial stages of graduate work,
as many novice students do not fully understand the most effective ways to
navigate their training opportunities or the expectations held for their work
(Golde & Dore, 2001). However, expressions of concern about poor or erratic
graduate research mentorship throughout the doctoral process are prevalent
(Lovitts, 2001), to the extent that some researchers in the field suggest it
may be unavoidable (Johnson et al., 2000). While students consistently
gain exposure to and practice conducting research-relevant tasks, evidence
suggests that faculty do not necessarily change the level of challenge or
nature of the work assigned to their mentees in response to their progressive
skill development (Maher, Gilmore, Feldon, & Davis, 2013).
CUTTING-EDGE RESEARCH
Graduate research training is intended to provide students with the necessary experiences and guidance to develop from a consumer into a producer of research (Weidman, 2010). This process necessarily entails skill development, that is, learning the skills expected of experts in the field. However, there is surprisingly little research assessing the effects of specific training practices on the extent or efficiency with which targeted skills
are acquired. Consequently, current research on the training of scientists has
shifted the focus of inquiry from descriptive studies to deeper understanding of the developmental pathways traveled by students as they progress
toward the status of independent researchers. There are also an increasing
number of studies that evaluate the impacts of specific training components
in controlled studies. This work is exciting because it helps to develop a more
mechanistic understanding of the training process to the point that it could
inform training decisions made by faculty and the structuring of graduate
programs.
TRAJECTORIES IN THE DEVELOPMENT OF RESEARCH SKILLS
One approach to the conceptualization of skill development is that of threshold concepts, which identifies knowledge and skills that act as bottlenecks to
further expertise development. These concepts are inherently challenging to
understand, and once attained, entail a fundamental restructuring of knowledge that must precede deeper comprehension of advanced disciplinary perspectives. Through interviews with experienced faculty mentors, Kiley and
Wisker (2009) identify three potential threshold concepts: the need to create
knowledge, the need for rigorous analysis (i.e., objectivity in the evaluation
of argument), and a developed sense of paradigm. Attainment of these concepts is considered to be neither gradual nor linear, and certain knowledge or skills must be mastered before others can be attained.
The new contribution of these ideas, which otherwise align well with conventional wisdom regarding the nature of independent scholarship, is the assumption that these thresholds are crossed in a developmental sequence and that crossing them fundamentally alters students’ abilities to master other, subsequent concepts. If empirical data reflect a series of thresholds that are intrinsic
to research skill development, then doctoral programs could increase their
efficiency and effectiveness by sequencing educational experiences to target
the attainment of threshold concepts upon which others are contingent.
Timmerman, Feldon, Maher, Strickland, and Gilmore (2013) provide converging evidence in support of these thresholds at a more nuanced level using a
sample of graduate students from across science disciplines. If research skills
develop sequentially, then scores on the performance of some skills should
be systematically higher than scores on the performance of others at a given
point in time. However, if all skills develop in parallel or the sequencing of
skill development is arbitrary, predictive relationships among skills would
not be evident. In a quantitative analysis of graduate students’ performance on written research proposals, students’ demonstration of some skills was contingent upon higher scores in others. That is, basic proficiency on certain key skills was not demonstrated unless higher levels of performance were evident in other areas. Specifically, strong performance in the use of primary literature predicted students’ ability to situate their work within their respective disciplines and to appropriately frame research problems (i.e., establish testable hypotheses), which in turn predicted levels of performance on other skills (e.g., data analysis and drawing conclusions based on data), suggesting that proficiency in the former is a necessary precursor to the development of the latter.
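The contingency logic behind this kind of analysis can be illustrated with a small sketch (the rubric scores and proficiency threshold below are invented; the study itself used more elaborate quantitative models): under a sequential-development account, no student should reach proficiency on a downstream skill while still below proficiency on its precursor.

```python
# Hypothetical rubric scores (1-4) for two skills across ten students:
# 'literature' = use of primary literature (proposed precursor);
# 'conclusions' = drawing conclusions from data (proposed downstream skill).
scores = [
    {"literature": 4, "conclusions": 3},
    {"literature": 3, "conclusions": 3},
    {"literature": 4, "conclusions": 4},
    {"literature": 2, "conclusions": 1},
    {"literature": 1, "conclusions": 1},
    {"literature": 3, "conclusions": 2},
    {"literature": 2, "conclusions": 2},
    {"literature": 1, "conclusions": 2},
    {"literature": 4, "conclusions": 3},
    {"literature": 2, "conclusions": 1},
]

PROFICIENT = 3  # rubric level treated as basic proficiency (illustrative)

# A sequential pattern predicts no student proficient downstream while
# still below proficiency on the precursor skill.
violations = [s for s in scores
              if s["conclusions"] >= PROFICIENT and s["literature"] < PROFICIENT]

# Among students proficient on the precursor, count downstream proficiency.
downstream_given_precursor = sum(
    s["conclusions"] >= PROFICIENT for s in scores
    if s["literature"] >= PROFICIENT)

print(len(violations), downstream_given_precursor)
```

In this invented sample the violation count is zero: downstream proficiency appears only among students already proficient on the precursor, which is the qualitative signature of the contingency the study reports.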
These trajectories in skill development do not seem to vary based on the
point in students’ education at which they begin research training. Gilmore,
Vieyra, Timmerman, Feldon, and Maher (2015) found that first-year graduate
students who had participated in research experiences as undergraduates
demonstrated higher levels of research skills at the beginning of their graduate studies than peers without those experiences. After one academic year,
several of the initial advantages remained statistically significant. Further,
performance gaps between students further along the skill development
trajectory and those with lower levels of proficiency tend to widen over
time, indicating an acceleration of competency development with increased
opportunities to engage in supervised research (Feldon, Maher, Roksa, & Peugh, 2016).
EVALUATION OF SPECIFIC RESEARCH TRAINING PRACTICES
Research skill development is not a purely cognitive phenomenon. It also
entails interaction with the environment—the research laboratory, the academic program, and the larger discipline—as well as specific behaviors, such
as mentored research activities, writing for publication (faculty–student
coauthoring), and graduate student teaching. The importance of these
aspects of socialization for persistence in doctoral programs is well documented (Lovitts, 2001), but direct impacts and interactions among these
activities for skill development have not been thoroughly researched. Most
scientists receive little or no training in instructional practices (Bianchini,
Whitney, Breton, & Hilton-Brown, 2001) and rely on recollections of their
own graduate school experiences to inform their decisions on how best to
train the students under their supervision. Therefore, new research on the
effectiveness of specific practices is especially important for improving the
quality of research training.
Studies of the impacts of mentorship traditionally examine levels of scholarly productivity as the primary outcome. Paglis, Green, and Bauer (2006)
found that mentored research experiences, coupled with faculty–student
coauthoring, in the second year of a PhD program positively predicted
the quantity of published research 4 years later. Similar results have been
obtained in other studies (e.g., Kademani, Kalyane, Kumar, & Mohan, 2005).
However, publications are not inherently a valid measure of scientific expertise in the sense of individual skills because work is typically distributed
over multiple authors who may make greater or lesser contributions to
various aspects of the project (Feldon, Maher, & Timmerman, 2010).
While not an examination of skill outcomes per se, one recent study compared graduate students’ assessments of their own research skills with their
mentors’ assessments (Feldon, Maher, Hurst, & Timmerman, 2015). It also
compared the students’ and mentors’ perspectives with performance-based
assessments conducted by blind raters in the students’ respective disciplines
as a way to understand mentors’ ability to accurately gauge student skill
development. The findings reflected very poor alignment across the three
sources of assessment. Specifically, students and their faculty mentors disagreed about whether specific research skills were strengths or weaknesses
of the students in 44% of cases. Further, neither the students nor their mentors were able to predict blind-rated performance at better than chance levels.
These findings raise important concerns about the assumption that faculty mentors hold privileged insights into students’ skill development, because mentors’ opinions of students’ research skills are frequently utilized both to inform instructional decisions within the context of a student’s graduate training and to shape decisions about the structure of academic programs. Further, such evaluations serve as the foundation for subsequent letters of recommendation and
the provision of additional opportunities for students perceived to be highly
skilled (Green & Bauer, 1995).
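The chance-level comparison at issue can be sketched with a small example (the judgment lists are invented for illustration; the study used rubric-based ratings across many skills and students): when two raters classify each skill as a strength or weakness, unrelated raters would disagree on roughly half of all binary calls, so disagreement rates approaching 50% signal chance-level correspondence.

```python
# Hypothetical strength (1) / weakness (0) judgments for nine research
# skills, one list per rater. Values are invented for illustration only.
student_view = [1, 1, 0, 1, 0, 1, 1, 0, 0]
mentor_view  = [1, 0, 0, 1, 1, 1, 0, 0, 1]

disagreement = sum(s != m for s, m in zip(student_view, mentor_view))
rate = disagreement / len(student_view)

# Rates near 0.5 indicate no better than chance-level agreement between
# the two raters' binary judgments.
print(f"{disagreement}/{len(student_view)} skills disputed ({rate:.0%})")
```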
Research on the importance of coauthoring with faculty as a graduate training experience for future scientists has become prevalent in recent years.
The specific pedagogical practices employed by faculty when writing with
graduate students have been documented and framed within a context of
socialization to the relevant academic discipline in response to calls for more
deliberate approaches to coauthorship as a training vehicle (Kamler, 2008).
Qualitative research across multiple countries has provided evidence supporting its importance for skill development. However, only one study to
date has directly compared the skill development of students who coauthored with faculty mentors with that of students who did not. It found that faculty–student
coauthorship uniquely accounted for 7% of the variance in skill development
over the course of an academic year, representing a moderate positive effect
of coauthorship (Feldon, Shukla, & Maher, 2016).
Graduate students in the sciences are often assigned teaching responsibilities at the undergraduate level. However, the culture at many research universities holds that teaching interferes with the development of research skill
by limiting the time a student can spend working in the laboratory (Anderson
et al., 2011). To test the validity of this assumption, Feldon et al. (2011) implemented a controlled study of graduate students across science and engineering disciplines, comparing the skill growth of students who participated in
supervised research and did or did not also have teaching responsibilities.
Results indicated that graduate students with both research and teaching
experiences demonstrated significantly greater skill growth in the areas of
framing testable hypotheses and designing experiments. Growth in other
research skills did not differ significantly between the groups, indicating that combining teaching with research activities did not hinder skill development in other aspects of research.
KEY ISSUES FOR FUTURE RESEARCH
The existing scholarship on training in the sciences predominantly utilizes a
socialization framework (Gardner, 2010), in which socialization is defined as
“a process of internalizing the expectations, standards, and norms of a given
society, which includes learning the relevant skills, knowledge, habits, attitudes, and values of the group that one is joining” (Austin & McDaniels, 2006,
p. 400). The society that doctoral students aspire to join is that of the scientists
publishing and conducting research within their chosen discipline. Graduate
students are expected to learn their new roles and with them, the requisite
skills, values, attitudes and expectations, although the opportunities to do so
are not necessarily equally available to all students (Weidman, 2010). These
issues are further complicated in the context of interdisciplinary research
endeavors, which require both graduate students and their faculty mentors
to redefine traditional boundaries and disciplinary norms (Rhoten, 2004).
While valuable as a means of highlighting many aspects of the doctoral student experience, and inequities within those experiences, previous research
on graduate research training elides a crucial aspect of that experience: skill
development. Skills are observed to develop, but there is little empirical
evidence of how they are developed, whether or how their development
varies across different groups of students, and in what ways they might
contribute to other outcomes of graduate education, such as persistence
and scholarly productivity. Those studies that do address skills generally
examine feelings of preparedness for conducting independent research
rather than measures of discrete skills (Delamont & Atkinson, 2001). As
EMERGING TRENDS IN THE SOCIAL AND BEHAVIORAL SCIENCES
demonstrated by the Feldon et al. (2015) study described previously, such
self-report measures do not accurately predict demonstrable skills. Thus,
establishing robust operational definitions for research skills and utilizing
performance-based measures of skill development are essential for better
understanding the mechanisms and enhancing the quality of expertise
development in scientific research (Lovitts, 2007).
Understanding the interplay among experiences, motivation, and skills
will also provide novel insights into reducing the observed inequity in
research training outcomes. Women and minority populations are consistently underrepresented in many science disciplines. While previous
research has considered inequalities in socialization experiences, skills
represent a crucial missing link. If students from different sociodemographic
groups enter their research training with differential levels of preparation
and research experience (Kim & Sax, 2009), and if new skills build upon
existing skills, this initial difference may produce growing inequalities in
skill levels across different groups of students as they progress through
graduate school (Feldon, Maher, et al., 2016).
Further, identifying the experiences and training mechanisms that directly
influence the development of specific research skills will provide a foundation for new training approaches. Despite burgeoning efforts to train
graduate students to engage in interdisciplinary science, for example, empirical data regarding associations between practices and outcomes are almost
nonexistent (Vanstone et al., 2013). Various interventions and strategies
already exist to address skill gaps and differential success in research training
programs. For example, many “boot camp” interventions emphasize statistical data analysis skills. However, as discussed above, progressions of skill
development seem to begin with the effective reading and use of primary
literature and the ability to generate testable hypotheses (Kiley & Wisker,
2009; Timmerman et al., 2013), with data analysis skills developing only later. Thus, it is not clear that early interventions targeting data analysis would
be effective or an efficient use of limited training resources. Understanding
how skills are developed over time and how they interact with relevant
motivational constructs can optimize the creation and implementation of
training experiences that both maximize students’ skill development and
reduce inequality by identifying the key predictors of success and the pivotal
time points in the development of expertise in the sciences.
REFERENCES
Anderson, W. A., Banerjee, U., Drennan, C. L., Elgin, S. C. R., Epstein, I. R., Handelsman, J., . . . Warner, I. M. (2011). Changing the culture of science education at research universities. Science, 331, 152–153.
Austin, A. E., & McDaniels, M. (2006). Preparing the professoriate of the future: Graduate student socialization for faculty roles. In J. C. Smart (Ed.), Higher education:
Handbook of theory and research (Vol. 21, pp. 397–456). Dordrecht, The Netherlands:
Springer.
Bauer, H. H. (1992). Scientific literacy and the myth of the scientific method. Urbana, IL: University of Illinois Press.
Bianchini, J. A., Whitney, D. J., Breton, T. D., & Hilton-Brown, B. A. (2001). Toward
inclusive science education: University scientists’ views of students, instructional
practices, and the nature of science. Science Education, 86, 42–78.
Cheng, P., & Simon, H. A. (1995). Scientific discovery and creative reasoning with
diagrams. In S. Smith, T. Ward & R. Finke (Eds.), The creative cognition approach.
Cambridge, MA: MIT Press.
Christensen, B. T., & Schunn, C. C. (2009). The role and impact of mental simulation
in design. Applied Cognitive Psychology, 23, 327–344.
Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator, 15(3), 6–11, 38–46.
Delamont, S., & Atkinson, P. (2001). Doctoring uncertainty: Mastering craft knowledge. Social Studies of Science, 31, 87–107.
Feldon, D. F., Maher, M. A., Hurst, M., & Timmerman, B. (2015). Faculty mentors’,
graduate students’, and performance-based assessments of students’ research skill
development. American Educational Research Journal, 52, 334–370.
Feldon, D. F., Maher, M., Roksa, J., & Peugh, J. (2016). Cumulative advantage in the
skill development of STEM graduate students: A mixed methods study. American
Educational Research Journal, 53, 132–161.
Feldon, D. F., Maher, M., & Timmerman, B. (2010). Performance-based data in the
study of STEM graduate education. Science, 329, 282–283.
Feldon, D. F., Peugh, J., Timmerman, B. E., Maher, M. A., Hurst, M., Strickland, D., . . . Stiegelmeyer, C. (2011). Graduate students’ teaching experiences improve their methodological research skills. Science, 333, 1037–1039.
Feldon, D. F., Shukla, K., & Maher, M. A. (2016). Faculty–student coauthorship as a
means to enhance STEM graduate students’ research skills. International Journal of
Researcher Development.
Frodeman, R., & Mitcham, C. (2007). New directions in interdisciplinarity: Broad,
deep, and critical. Bulletin of Science, Technology & Society, 27, 506–514.
Gardner, S. K. (2010). Contrasting the socialization experiences of doctoral students
in high- and low-completing departments: A qualitative analysis of disciplinary
contexts at one institution. The Journal of Higher Education, 81, 61–81.
Gilmore, J. A., Vieyra, M., Timmerman, B. E., Feldon, D. F., & Maher, M. A. (2015).
The relationship between undergraduate research participation and subsequent
research performance of early career STEM graduate students. The Journal of Higher
Education, 86, 834–863.
Golde, C. M., Conklin Bueschel, A., Jones, L., & Walker, G. E. (2009). Advocating
apprenticeship and intellectual community: Lessons from the Carnegie initiative
on the doctorate. In R. G. Ehrenberg & C. V. Kuh (Eds.), Doctoral education and
faculty of the future (pp. 53–64). Ithaca, NY: Cornell University Press.
Golde, C. M., & Dore, T. M. (2001). At cross purposes: What the experiences of doctoral
students reveal about doctoral education. Philadelphia, PA: Pew Charitable Trusts.
www.phd-survey.org.
Green, S., & Bauer, T. (1995). Supervisory mentoring by advisers: Relationships with
doctoral student potential, productivity, and commitment. Personnel Psychology,
48, 537–562.
Holley, K. (2009). Animal research practices and doctoral student identity development in a scientific community. Studies in Higher Education, 34, 577–591.
Johnson, L., Lee, A., & Green, B. (2000). The Ph.D. and the autonomous self: Gender,
rationality, and postgraduate pedagogy. Studies in Higher Education, 25, 135–147.
Kademani, B. S., Kalyane, V. L., Kumar, V., & Mohan, L. (2005). Nobel laureates: Their
publication productivity, collaboration and authorship status. Scientometrics, 62,
261–268.
Kamler, B. (2008). Rethinking doctoral publication practices: Writing from and
beyond the thesis. Studies in Higher Education, 33, 284–294.
Kardash, C. (2000). Evaluation of undergraduate research experience: Perceptions of
undergraduate interns and the faculty mentors. Journal of Educational Psychology,
92, 191–201.
Kiley, M., & Wisker, G. (2009). Threshold concepts in research education and evidence of threshold crossing. Higher Education Research & Development, 28, 431–441.
Kim, Y. K., & Sax, L. J. (2009). Student–faculty interaction in research universities: Differences by student gender, race, social class, and first-generation status. Research
in Higher Education, 50, 437–459.
Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12, 1–55.
Klahr, D., & Simon, H. A. (2001). What have psychologists (and others) discovered
about the process of scientific discovery? Current Directions in Psychological Science,
10, 75–79.
Knorr-Cetina, K. (1997). What scientists do. In T. Ibáñez & L. Íñiguez (Eds.), Critical
social psychology. London, England: Sage Publications.
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of
Chicago Press.
Lattuca, L. R. (2001). Creating interdisciplinarity: Interdisciplinary research and teaching among college and university faculty. Nashville, TN: Vanderbilt University Press.
Lovitts, B. (2001). Leaving the ivory tower: The causes and consequences of departure from
doctoral study. Lanham, MD: Rowman & Littlefield.
Lovitts, B. E. (2007). Making the implicit explicit: Creating performance expectations for
the dissertation. Sterling, VA: Stylus Publishing.
Maher, M. A., Gilmore, J. A., Feldon, D. F., & Davis, T. E. (2013). Cognitive apprenticeship and the supervision of science and engineering research assistants. Journal of Research Practice, 9(2), Article M5.
Nersessian, N. J., & Chandrasekharan, S. (2009). Hybrid analogies in conceptual
innovation in science. Cognitive Systems Research, 10, 178–188.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice Hall.
Paglis, L., Green, S. G., & Bauer, T. (2006). Does adviser mentoring add value? A
longitudinal study of mentoring and doctoral student outcomes. Research in Higher
Education, 47, 451–476.
Pearson, M., & Brew, A. (2002). Research training and supervision development.
Studies in Higher Education, 27, 135–150.
Rhoten, D. (2004). Interdisciplinary research: Trend or transition. Items & Issues (Social
Science Research Council), 5, 6–11.
Schraagen, J. (1993). How experts solve a novel problem in experimental design. Cognitive Science, 17, 285–305.
Timmerman, B., Feldon, D. F., Maher, M., Strickland, D., & Gilmore, J. A. (2013).
Performance-based assessment of graduate student research skills: Timing, trajectory, and potential thresholds. Studies in Higher Education, 38, 693–710.
Vanstone, M., Hibbert, K., Kinsella, E. A., McKenzie, P., Lingard, L., Pitman, A.,
& Wilson, T. (2013). Interdisciplinary doctoral research supervision: A scoping
review. Canadian Journal of Higher Education, 43(2), 42–67.
Weidman, J. C. (2010). Doctoral student socialization for research. In S. K. Gardner
& P. Mendoza (Eds.), On becoming a scholar: Socialization and development in doctoral
education (pp. 29–44). Sterling, VA: Stylus Publishing Inc.
DAVID FELDON SHORT BIOGRAPHY
David Feldon is an associate professor of instructional technology and learning sciences and director of the STE2M (Science, Technology, Engineering,
Education, Mathematics) Center at Utah State University. His research examines two primary lines of inquiry: The first characterizes the cognitive components of expertise as they contribute to effective and innovative problem
solving, as well as how they affect the quality of instruction that experts
can provide. The second examines the development of research skills within
STEM disciplines as a function of instruction and other educational support
mechanisms. Recent findings from this work have been published in Science,
The Journal of Higher Education, the Journal of Research in Science Teaching, and
the American Educational Research Journal. Dr. Feldon earned his PhD in educational psychology and his MS in instructional technology from the University
of Southern California, completed a postdoctoral fellowship at UCLA, and held tenure-track positions at the University of South Carolina, Washington State University, and the University of Virginia before joining the USU faculty.
RELATED ESSAYS
The Role of Data in Research and Policy (Sociology), Barbara A. Anderson
Expertise (Sociology), Gil Eyal
The Evidence-Based Practice Movement (Sociology), Edward W. Gondolf
Causation, Theory, and Policy in the Social Sciences (Sociology), Mark C.
Stafford and Daniel P. Mears
What is Special about Specialization? (Archaeology), Anne P. Underhill
Translational Sociology (Sociology), Elaine Wethington
