
The Implications of Contextualizing Psychological Methods for Education in Psychology

 Richard Walsh-Bowers

Wilfrid Laurier University 

Abstract: In his book Constructing the Subject, Kurt Danziger (1990) examined historical constructions of the investigative situation in psychology between members of the research team and their human sources of data. He also wrote about psychology’s methodological imperative, the social context of taken-for-granted methodological norms, and with Katalin Dzinas the origins of the term “variable.” He and others showed that, once investigative norms were established, they became deeply embedded in the ideology, mythology, and workaday practice of psychologists' scientific culture. Questioning the value of “purely technical training” in psychological methods, Danziger (1993) recommended critical, interdisciplinary education in methods. I explore the implications of Danziger’s standpoint for teaching psychological methods in undergraduate and graduate courses. After reporting the coverage that authors of current textbooks in the history of psychology and research methods have given to the social context of psychological methods, I discuss Danziger’s concerns about academic psychologists indoctrinating students in methodological technology without a critical-historical and philosophical education in psychological methods.  

In his seminal 1990 book Constructing the Subject, Kurt Danziger examined psychologists’ historical constructions of the investigative situation. The central points of his work on contextualizing psychologists’ research methods are that investigative practices change over time and methodology by itself is not a sufficient condition for the discipline’s progress (Danziger, 1994).  

By reviewing research papers published in Psychology journals from 1879 to 1939, Danziger (1990) found diverse models of conducting human research with different social arrangements between the parties, depending on the institutional context. In addition, he identified five functions or roles in any investigative situation with humans: Designer of the study, data collector/administrator of the study, human sources of data (known today as “participants”), data analyst, and author of the research report. Theoretically, in any given study these functions can be shared among the members of an investigative team and the “participants.” 

In Wilhelm Wundt’s studies, for example, the investigative model consisted of shared roles among team members, and the most desired role was that of human data-source. In contemporaneous French clinical research, by contrast, investigative practice consisted of a dominant medical experimenter and a subordinate “subject,” typically a medical patient, as exemplified by Jean-Martin Charcot’s studies of hypnosis. 

The Anglo-American model of experimenters testing anonymous masses of “subjects” evolved from Francis Galton’s testing for individual differences. In this model and the French model, the participants’ sole function is to provide data; investigators and their assistants play all other roles: planning, administering, analyzing, and authoring. By the 1920s, there was little evidence of the Wundtian model in US Psychology journals. It remained virtually extinct in natural-science psychologists' realm until recent decades, when some feminist and community psychologists began to encourage citizen participation in research roles other than data source.

In other publications Danziger has written about psychologists’ methodological imperative, the social context of taken-for-granted methodological norms, and with Katalin Dzinas the origins of the term “variable” (Danziger & Dzinas, 1997). This literature shows that, once investigative norms were established, they became entrenched in the ideology, mythology, and workaday practice of psychologists' scientific culture (Walsh-Bowers, 1995, 2004, 2007; Winston, 2004).  

Rather than using the term “methodology” to denote a general framework of specific research methods, Danziger prefers the term “investigative practice,” because it encompasses the social and non-rational aspects of planning and conducting research as well as the rational, carefully implemented procedures that psychological scientists follow (Brock, 2006). “Investigative practice” implies that all scientific work is at least partially subjective and is shaped by social historical conditions. Furthermore, specific investigative practices exemplify “both the reach and the limits of such scientific ideals as objectivity, measurability, repeatability, and cumulative knowledge acquisition,” as historian Mitchell Ash (2003, p. 251) put it. 

Mainstream psychologists consider measurement and statistical analysis essential components of psychological research. For them, “studying something scientifically means measuring it” (Michell, 2002, p. 6). Adherents to this belief, known as the “quantitative imperative,” assume that psychological attributes are equivalent to natural objects and thus are measurable and statistically analyzable. But rather than being neutral, psychologists’ research methods and techniques for data-analysis are products of social history and are laden with epistemological assumptions (Hornstein, 1988). Therefore, contrary to the idealized picture of highly objective psychological research that current APA (2007) expectations, textbook authors, and course instructors typically paint, psychologists’ investigations are fraught with problems concerning claims to objective knowledge and universal applicability. 

Reflecting on undergraduate and graduate education in Psychology, Danziger (1993) questioned the value of purely technical training in psychological methods. He recommended instead critical, interdisciplinary education in methods. In this paper I explore the implications of this standpoint for teaching psychological methods. First, I report the coverage that authors of current textbooks in the history of Psychology and in research methods give to the social historical context of those methods. Then I discuss Danziger’s concerns about academic psychologists indoctrinating students in methodological technology without a critical-historical and philosophical education in investigative practice. 

Textbook Coverage of Methods 

History of Psychology Texts

Most authors of history of Psychology textbooks have ignored investigative practice, even though it has been pivotal in psychologists’ quest for scientific legitimacy. Yet taking investigative practice for granted in a history of Psychology course is problematic, because modes of conducting research have a social history as much as any other aspect of Psychology, and research is psychologists’ raison d’être. 

In history of Psychology textbooks published over the last five years, many authors ignore investigative practice entirely (e.g., Greenwood, 2009; Hergenhahn, 2006; King, Viney, & Woody, 2009; Schultz & Schultz, 2012). Nevertheless, three other sets of authors do not. In one chapter John Benjafield (2010) addresses psychologists’ philosophy of science, the history of experimental and correlational methods, and qualitative methods. Wade Pickren and Alexandra Rutherford (2010) discuss methodology in a chapter on the discipline’s contested subject-matter during its founding decades, centred on the introspection-behaviourism debate. Graham Richards (2009) devotes a chapter to various problems entailed in measuring psychological phenomena. Yet overall, the coverage that the three sets of authors give is limited in scope and, excepting Richards, is relatively uncritical.  

These findings on authors’ coverage of investigative practice in history of Psychology textbooks are hardly surprising, because at least since Edwin Boring’s (1950) A History of Experimental Psychology, the social function of these texts has been celebration of the discipline’s presumed linear progression to scientific respectability. However, celebratory accounts of psychologists’ investigative practice reproduce an origin-myth about psychological research methods that reinforces the privileged status of quantitative experimentation.  

In point of fact, early psychologists shared the very term “experimental” with contemporary psychic and spiritualist investigators. Consequently, most early US psychologists sought to distance themselves from association with those who studied extrasensory perception and life after death to preserve their claim to natural-science legitimacy at a time when physiologists and physicists vigorously challenged it (Coon, 1992). In addition, many early psychologists avidly investigated practical problems in society. Thus, the administrative demands of institutions in late 19th and early 20th-century society, particularly, education, the military, and industry, largely shaped psychologists’ investigative practice (Danziger, 1990). 

Furthermore, quantitative experimentation was not psychologists’ only method. In the discipline’s founding decades some investigators (e.g., Edward Titchener) employed qualitative experimentation. In addition, two types of psychological experimentation emerged around World War II: controlled experiments with grouped data and the less familiar type, experimental control (or “experimental analysis”) of individual performance, in which B. F. Skinner and his devotees specialized. 

Research-Methods Texts

For decades, psychologists have argued that the future of our discipline rests on undergraduate training, that methodology courses are crucial in socializing students in the doctrine that Psychology is a natural science, and that such courses allegedly enhance critical thinking, for example, by encouraging students to reject belief in paranormal phenomena (APA, 2007, 2008). However, Kurt Danziger is concerned that psychologists have been indoctrinating students in methodological technology rather than providing a critical-historical and philosophical education in psychological methods from which to appreciate the strengths and weaknesses of particular methods.  

With this concern in mind I reviewed current textbooks on psychological methods to ascertain to what extent they foster training or education, indoctrination or critical reflection (Goodwin, 2011; Gravetter & Forzano, 2012; Graziano, Raulin, & Cramer, 2009; Leary, 2008). Not surprisingly, I found that they treat epistemological issues superficially and ignore the social context of investigative practice. Overall, current textbooks reflect mainstream psychologists’ commitment to the quantitative experiment as their methodological “gold standard.” 

In 2007, APA published Guidelines for the Undergraduate Psychology Major (APA, 2007), specifying 18 learning objectives for research methods and statistics. Although current research suggests that actual classroom implementation of these guidelines is quite lax (Tomcho et al., 2009), the guidelines stipulate that instructors are to dispel myths about research and teach students proper methodological standards. In this context, “critical thinking” about research means analyzing the methodological and statistical strengths and weaknesses of research publications according to current standards; it does not mean questioning Psychology’s epistemological assumptions about investigative practice or understanding its social historical contexts.  

Yet authors could employ a critical approach to methods as Tim Rogers did in his 1995 textbook on psychological testing. Rogers’s approach was to examine the epistemological and social limits to psychologists’ quantified knowledge. These limits include the following pedagogically useful points: 

First, mainstream psychologists have assumed that psychological objects are measurable. They make their particular phenomenon of interest (e.g., memory or the self) publicly observable in numerical form and then subject this phenomenon to experimental or correlational testing. But instructors could encourage students to consider whether psychologists can measure psychological phenomena, which are largely experiential, in the same way as natural scientists measure physical objects (Hornstein, 1988).

Second, instructors could acquaint students with the problem of psychologists’ naming and interpreting their phenomena of interest. Naming a measured phenomenon does not mean that the phenomenon has objective, non-discursive existence. Moreover, simple or complex statistical manipulations of test scores cannot convert abstract quantities into objects existing in concrete reality, even though psychologists might treat them that way. Numbers can tell only part of the story of a phenomenon, because they represent an abstracted aspect of psychologists' understanding. Further, measurement and statistics per se cannot explain the meaning of observed numerical relations; that requires interpretation of observations.  

Mainstream psychologists are fond of recalling Edward Thorndike’s remark that every existing object is measurable. But, as Graham Richards (2009) put it, “everything that can be measured does not necessarily exist” (p. 280), meaning that psychologists’ interpretations of what they have measured are largely subjective and not determined by the data themselves.     

Moreover, psychologists’ phenomena of interest shift across social historical conditions, requiring modifications of measurement and interpretation. Consequently, different psychologists investigate and interpret the same phenomenon differently because of differing socio-political contexts. For example, psychologists in one context, Nazi Germany, considered authoritarianism socially desirable, whereas psychologists in another social context, post-war US society, associated it with Nazi atrocities. 

Third, in actual practice investigators frequently have not met the assumptions underlying their statistical techniques; why, then, have psychologists behaved otherwise in their discourse, for example, by speaking of “empirically-validated treatments” as if this claim were absolute?   

Fourth, absolute accuracy in psychological measurement is impossible. All psychologists can do is estimate accuracy (Cowles, 2001). Compounding the problem of the relative accuracy of measurement is the fact that the two parties in the investigative situation – the members of the investigative team and the participants – contribute unintended variation to that situation. Authors and instructors could encourage students to reflect on the implications of these limits to measurement accuracy for investigators’ claims to certainty about their findings. 

Fifth, in examining psychologists’ historical use of statistics, instructors could review with students the “significance” for psychologists’ scientific culture of the recommendations of the 1999 APA task force struck to rectify psychologists’ historical misuse of inferential statistics (Wilkinson & the Task Force on Statistical Inference, 1999). Psychologists’ difficulties in meeting their espoused quantitative standards prompt questions for student and faculty reflection, such as:  

If in their investigative enterprise, psychologists historically linked their scientific identity with expertise in statistical analysis, but that expertise has been questionable, how credible have been psychologists’ claims to knowledge and scientific legitimacy?

 A Critical-Historical and Philosophical Framework for Teaching Psychological Methods

The net effect on psychologists’ culture of conventional undergraduate and graduate education in Psychology, whether in methods or history courses, seems to be that authors, instructors, and students take investigative practice for granted. Apparently, they assume that current practice represents the epitome of scientific objectivity rather than recognizing that psychological research methods are social-historical products laden with socio-political and epistemological ramifications.  

The purely technical training in psychological methods that mainstream psychologists provide to their students, mandated by APA (2007, 2008) prescriptions, suits the technological-professional role in society that psychologists have adopted since the era of James Cattell and Hugo Munsterberg. Mainstream psychologists do not question their capacity to realize “such scientific ideals as objectivity, measurability, repeatability, and cumulative knowledge acquisition,” to re-quote Mitchell Ash (2003, p. 251). Rather, their scientific mission is to place psychological knowledge at society’s disposal to administer citizens’ adaptation to the status quo, a point that Kurt Danziger made decades ago. 

Yet another world is possible. Not only could students benefit from learning how psychologists came to adopt the investigative traditions in which we have been enculturated, but instructors committed to critical, interdisciplinary education and to contextualizing psychological methods could also use foundational courses in the history of Psychology and in research methods to heighten students’ understanding of the cultural, philosophical, and scientific roots of, and challenges associated with, diverse, historically-constituted investigative practices.  

Rather than sweeping the dubious aspects of Psychology’s past under the carpet, such as its questionable assumptions about measurement and its historical misuse of statistical inference, authors and instructors could assist students in confronting that past and thereby becoming more engaged with course content. By scrutinizing psychologists’ various investigative practices, authors and instructors can help students encounter the historical tension between objectivity and subjectivity in Psychology. The pedagogical intention here is to foster critical awareness instead of methodological indoctrination. My premises are that instructors and authors should not protect adult Psychology majors from viewing psychologists’ investigative realities and that putting Psychology in its social-historical place, to paraphrase Graham Richards (2009), can evoke students’ interest in improving psychologists’ practices. By improving the quality of education in foundational courses we improve the quality of our science and profession, which is largely what Kurt Danziger’s work on contextualizing investigative practice is all about. 



References

American Psychological Association. (2007). APA guidelines for the undergraduate psychology major. Washington, DC: Author. 

American Psychological Association. (2008). Teaching, learning, and assessing in a developmentally coherent curriculum. Washington, DC: Author. 

Ash, Mitchell G. (2003). Psychology. In Theodore M. Porter & Dorothy Ross (Eds.), The Cambridge history of science, Vol. 7 (pp. 251-274). Cambridge, UK: Cambridge University Press. 

Benjafield, John. (2010). A history of psychology. 3rd ed. New York: Oxford University Press. 

Boring, Edwin G. (1950). A history of experimental psychology. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall. 

Brock, Adrian C. (2006). Rediscovering the history of psychology: Kurt Danziger interviewed by Adrian C. Brock. History of Psychology, 9, 1-16. 

Coon, Deborah J. (1992). Testing the limits of sense and science: American experimental psychologists combat spiritualism, 1880-1920. American Psychologist, 47, 143-152. 

Danziger, Kurt. (1990). Constructing the subject: Historical origins of psychological research. Cambridge, England: Cambridge University Press. 

Danziger, Kurt. (1993). Psychological objects, practice, and history.  Annals of Theoretical Psychology, 8, 15-47. 

Danziger, Kurt. (1994). Does the history of Psychology have a future? Theory & Psychology, 4, 467-484. 

Danziger, Kurt, & Dzinas, Katalin. (1997). How psychology got its variables. Canadian Psychology, 38, 43-48. 

Goodwin, C. James. (2011). Research in psychology: Methods and design. 6th ed. Hoboken, NJ: John Wiley & Sons. 

Gravetter, Frederick J., & Forzano, Lori-Ann B. (2012). Research methods for the behavioral sciences. 4th ed. Belmont, CA: Wadsworth. 

Graziano, Anthony M., Raulin, Michael L., & Cramer, Kenneth M. (2009). Research methods: A process of inquiry. Toronto: Pearson Education Canada. 

Greenwood, John D. (2009). A conceptual history of psychology. New York: McGraw-Hill. 

Hergenhahn, B. R. (2006). An introduction to the history of psychology. 5th ed. Belmont, CA: Thomson Wadsworth. 

Hornstein, Gail A. (1988). Quantifying psychological phenomena: Debates, dilemmas, and implications. In Jill G. Morawski (Ed.), The rise of experimentation in American psychology (pp. 1-34). New Haven, CT: Yale University Press. 

King, D. Brett, Viney, Wayne, & Woody, William Douglas. (2009). A history of psychology: Ideas and context. 4th ed. Boston: Pearson, Allyn and Bacon.  

Leary, Mark R. (2008). Introduction to behavioral research methods. 5th ed. Boston: Allyn and Bacon. 

Michell, Joel. (2002). The quantitative imperative: Positivism, naïve realism and the place of qualitative methods in psychology. Theory & Psychology, 13, 5-31. 

Richards, Graham. (2009). Putting psychology in its place: A critical historical introduction. 3rd ed. London: Taylor and Francis. 

Rogers, Tim B. (1995). The psychological testing enterprise: An introduction. Pacific Grove, CA: Brooks/Cole. 

Schultz, Duane P., & Schultz, Sydney Ellen. (2012). A history of modern psychology. 8th ed. Belmont, CA: Wadsworth. 

Tomcho, Thomas J., Rice, Diana, Foels, Rob, Folmsbee, Leah, Vladescu, Jason, Lissman, Rachel, Matulewicz, Ryan, & Bopp, Kara. (2009). APA’s learning objectives for research methods and statistics in practice: A multimethods analysis. Teaching of Psychology, 36, 84-89. 

Walsh-Bowers, Richard. (1995). The reporting and ethics of the research relationship in areas of interpersonal psychology, 1939-89. Theory & Psychology, 5, 233-250. 

Walsh-Bowers, Richard. (2004). Expanding the terrain of Kurt Danziger’s Constructing the subject: The research relationship in psychology. In Adrian Brock, Johann Louw, & Willem van Hoorn (Eds.), Rediscovering the history of psychology: Essays inspired by the work of Kurt Danziger (pp. 97-118). Dordrecht, The Netherlands: Kluwer Press. 

Walsh-Bowers, Richard. (2007). The research relationship in English-language European psychology journals: Forty years of American scientific colonization? In A. C. Brock & J. Louw (Eds.), History of psychology and social practice. Special issue of Social Practice/Psychological Theorizing (pp. 35-50). Retrieved 5 November 2007 from http://sppt 

Wilkinson, Leland, & the Task Force on Statistical Inference. (1999). Statistical methods and psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604. 

Winston, Andrew S. (2004). Controlling the metalanguage: Authority and acquiescence in the history of method. In Adrian Brock, Johann Louw, & Willem van Hoorn (Eds.), Rediscovering the history of psychology: Essays inspired by the work of Kurt Danziger (pp. 53-73). Dordrecht, The Netherlands: Kluwer Press.