Deni Elliott and I agree that one of the desirable requirements for the conduct of all scholarly research is expressed in the Hippocratic oath: “first, do no harm.” Hence, we both accept what to us is the relatively recent establishment of Institutional Review Boards that provide research protocol oversight intended to protect human (or other animal) subjects from harm caused by the conduct of that research.
The result of an IRB evaluation is only one of many standards researchers must keep in mind. While some may have problems with any externally imposed requirements, a rule like this will affect any “human subjects research” and reflect upon the reputation of researchers and of the institutions and organizations to which they belong, as well as upon public perception of all research and of all functions and institutions of higher education.
So, when evaluating the reputation and behavior of any researcher—for hiring, promotion, tenure, pay increases, recommendations or whatever—we should make certain that the candidate isn’t adopting an “end justifies the means” argument in deciding how human research subjects are treated.
But Elliott and I part company when it comes to her suggestion about how to ensure that IRBs are used for our research—I disagree that the logical checkpoint should be at the desk of the editors of journals and magazines—and will even argue below that there should be a blanket exemption for most academic media, journalism, or communications research, just as there now is for almost all commercial research in these fields.
The Problem
Elliott maintains that academic publication or intended publication apparently is the primary criterion used by the Federal government (45 CFR 46.102) when determining whether or not research should be subject to these rules requiring IRB approval. Elliott quotes the CFR language: “research” means “a systematic investigation… designed to develop or contribute to generalizable knowledge.” An “investigator” can be either a professional or a student, and research includes any data obtained through “intervention or interaction” between the investigator and the subject. (Such intervention or interaction includes training and many demonstration or service activities as well as those intended to directly contribute to the organized body of human knowledge.) An “institution” (as in “Institutional Review Board”) is “any public or private entity or agency” and is not restricted to colleges and universities.
However, it’s not that simple. It should be noted that there are all sorts of caveats, exceptions and inclusions that must be dealt with. For example, in most societies, minors should be protected to a greater extent than adults. Some institutions (often those affiliated with religious organizations) may have stricter regulations than others, particularly when religious dogma is involved. Those who financially support research (whether for altruistic or commercial purposes, whether the money comes from taxes or from a philanthropist’s pockets or crowdsourcing, and whether it comes in the form of a “grant” or a “contract”) often have goals and views that require additional moral decision-making on the part of researchers. And, as might be expected, conflicts of interest are inherently suspect.
The editorial checkpoint or sentry-post
As an editor, I do not think that placing the responsibility for making certain that IRBs are utilized and their dictates followed in the hands of the editor and the editorial review process is the way to go. Am I trying to avoid work? I’d be dishonest to deny that such a motivation exists. But I don’t accept the argument that the editorial desk is the appropriate place to apply the criterion that “this researcher met the standards of human subject research.”
Certainly, the editorial standards of any publication are important. Elliott implies that certification of IRB decisions can piggy-back on the “peer review” system, which has become the “gold standard” for accepting manuscripts in social and physical science research. Unfortunately, it doesn’t always work. When someone checks the use of statistics in a sample of scholarly journals (as Dennis T. Lowry did in “An evaluation of empirical studies reported in seven journals in the ’70s,” Journalism Quarterly, Summer 1979, pp. 262-268, 282), readers are shocked, shocked by the failure to employ appropriate techniques to ensure standards of statistical validity and reliability in much of the literature in our field. Peer review itself has come under serious attack more recently. An article in the October 4, 2013 issue of Science magazine reports on a concocted “cancer research report” that met few if any standards of scientific research and was submitted to some 304 journals—and accepted by 157 of them! (Only 36 provided substantive comments to the author, and 16 of those eventually accepted the paper.) Although the initial reaction to this paper was limited to its discussion of open-access or contributory publications (as contrasted to those that require subscriptions, which the author, John Bohannon, has stated were beyond his resources), there is no intrinsic reason why this problem might not exist elsewhere.
Even though peer review may be better than no review, it is not a cure-all for badly conducted and badly written research. Catching such work is the editor’s job—and, thank goodness, many editors spend far more time and energy on helping authors improve their work than on what I think is the sideshow of ensuring high quality (and such other criteria as diversity) primarily by acting as a traffic cop for the journal’s peer review system. For hundreds of years, editors have had a responsibility to their readers to ensure the quality of what they read. The brouhaha over Bohannon’s study chastising the practice of peer review (see, in particular, Paul Basken’s essay, “Critics say sting on open-access journals misses larger point,” Chronicle of Higher Education, online edition, October 4, 2013) isn’t unique. An October 3rd blog posting by Michael Eisen briefly discussed similar past “stings” that put peer review to the test. For example, Eisen had used a deeply flawed paper about arsenic-based DNA to “expose flaws in peer-reviewed subscription-based journals,” including Science itself, and a look at the “Sokal affair” shows a similar pattern.
Almost any issue of the Chronicle of Higher Education, or an informal chat at the bar during a professional gathering, offers examples of someone in our field getting into trouble—with colleagues, deans, presidents, legislators, the general public—because of the publication of poorly done research, and there are literally dozens of examples each year of retracted “scientific” research findings in myriad disciplines. To some degree, this is part of the self-correcting process of the scientific method. A few of these retractions may be the result of dishonesty but, in my opinion, far more are due to inadequate training and insufficient care on the part of the researcher.
With all of this on the menu (if not all on the plate) of every editor, why should the editor also have to take the time and energy to enforce a complex (and confusing) federal regulation about research using human subjects? The scholarly journal editor’s primary responsibility is to the journal’s audience and to the editor’s own conscience, and only secondarily to the author of a research article or anyone else. To exercise this responsibility, editors need to do what they can to improve the literature of their field—for example, to think of importance and significance rather than providing gossip, scandal, “gee-whiz” pseudo-science, and similar content intended primarily for titillation and entertainment, or even slavishly following the rigid turns and quirks of style guides such as MLA and APA while cringing at how hard they make it for the reader to learn about relevant related literature. (Remember when MLA citations provided only the name of the city of publication and not the name of the publisher; note the similar reasoning in today’s APA citation style, which assumes that there are so few people with the same last name and given initial that there couldn’t possibly be confusion.)
In my opinion, when used to evaluate a faculty member (or even a student), publication is merely a way of determining that a candidate for promotion, tenure, and the like has had a potentially valuable idea or hypothesis and has tested it rigorously, as all good scholars should be able to do. All faculty members—even deans—should be able to conduct such evaluations, regardless of their familiarity with the particular jargon and characteristics of various disciplines. This is as true of historical and purely descriptive research in the humanities as it is of experimental research in the social or physical sciences—particularly since there obviously was disagreement in Elliott’s roomful of media studies professors as to the proper location of media studies.
So, my suggestion is to leave this work where it belongs: with the academic departments involved, and whatever mechanisms (promotion and tenure committees, administrators and administrative bodies, etc.) have been established in a given institution. (Even if an editor wants to get involved, such a check is merely redundant to departmental evaluations.) This returns the question of whether someone properly used IRBs for research leading to submitted-for-publication manuscripts to the departments that need this information in order to ensure that faculty members are accountable to their academic discipline, department, colleagues and students. As I said earlier, the editor’s responsibility is to publish ideas and support (in the form of data and logic) for these ideas for the benefit of the reader (or viewer, or listener). That’s a pretty big job in itself—I edited a scholarly journal for more than a dozen years, and managed graduate programs for twice that length of time, and—believe me—it is easier to deal with commentary and opinion (much of the content of Media Ethics magazine during the past quarter-century—including this piece) than it is to supervise all aspects of the quality of the research being submitted to the journals in the field.
Is this trip necessary?
But let me ask a more basic question. Couldn’t academic communication/journalism research and other publications, for the most part, automatically be placed under the Federal definition of “minimal risk”? This is defined in 45 CFR 46.102: the “probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.”
The Federal regulations exempt research designed to appear in the popular press from adhering to IRB rules. But isn’t all journalistic research tied in some manner to the popular press? Aren’t our students being trained and educated to produce the information, knowledge, and wisdom desired and needed by the public?
The very word “subject” has two professional meanings for journalists—whether practicing, studying, or teaching. Most fields have only one. The journalistic two: first, a subject is a person (or other entity) who is the focus or target of a story. Second, subjects are those sources who contribute to a story as participants, bystanders, or other witnesses or sources of reaction. The second meaning is closer to the idea of experimental studies using “human subjects” but, since in most instances these subjects are passive, are not manipulated to react to independent and dependent variables, and are unlikely to be harmed, they too usually are “special cases” when doing media-related research.
While reporters generalize from interviews to produce stories that constitute a form of generalized or generalizable knowledge, and commercial research firms probably survey thousands of people a week, producing hundreds of reports, does this automatically make all interviewees “human subjects” for the purposes expressed in the Code of Federal Regulations? I think not.
Could a reporter digging for a story about a person in the public eye (checking documents, interviewing individuals from the target’s past) cause harm to that subject by publishing the story? Almost certainly. And one doesn’t have to be a Jerry Sandusky to be so harmed. But, while attempting to avoid doing harm in the interests of benefiting the public as it makes decisions—or even of meeting a journalist’s desire for a good story that will advance her or his career, and the desire of readers, listeners, and viewers for additional knowledge about the leaders of their society—it isn’t unreasonable for the journalist to consider their client’s—the public’s—right to know as trumping the public subject’s right to privacy.
Clearly, the conditions are always ripe for making ethical decisions with respect to such a story, balancing whether the public really needs the information against the target subject’s desire to keep the information private…and the distinction between public and private subjects becomes ever more important.
Journalism is a special case in this complex situation, as is the judicial system—and Elliott points this out. Justice can be and should be served by both. The role of the journalist is to provide citizens in a democracy with the knowledge they need in order to make rational decisions in that society. There are many variants on public need for transparency, accuracy and fairness. For example, on a nitty-gritty level, in some countries, little or no information about arrests is published—although the disadvantages of such a course may affect the public adversely if they don’t know what is going on in their community or there are no checks or balances on the law enforcement function.
Over many years, in most U.S. media outlets, the names of certain classes of victim (particularly victims of sexual crimes) have been redacted—but the names of the accused tend to be fair game. Is this, itself, fair? Clearly it doesn’t give the public the whole story. (In most countries, it is felt that knowledge that someone has been charged with a crime gives the people confidence in law enforcement and related institutions, and may protect potential future victims. There is about as much firm data supporting this view as there is for the benefits of protecting entire classes of victims.)
Elliott’s “Solution”
But let’s go back to “research.” Elliott reminds us that horrible pain and other consequences, up to and including death, have been visited upon people in the name of “medical research.” This is why the protection of human research subjects became a matter of public concern in the first place. The Tuskegee studies of venereal disease are cousins to the Nazi experiments to determine how long it takes for someone to die of exposure to frigid water. And, no matter how useful some of the resulting data may be, it is right and proper to prevent or condemn such treatment of any human beings. Don’t forget: punishment meted out by the judicial system may discourage potential malefactors from future crime—as may information supplied by the press about the penalties imposed.
It is, I think, a good thing that the medical research model does not really fit the field of journalism. In fact, it might be argued that traditional medical/surgical/pharmaceutical research has a flaw that makes the researcher ignore the possibility of the sort of harm that IRBs are designed to prevent. This flaw would be hard to find in communications research. Here it is: the “gold standard” for medical research design typically requires the researcher to establish a “control group” which does not receive the treatment being tested. Even though that means that members of the control group do not receive what may be the optimal treatment (although medical ethics usually require that control subjects receive whatever the current standard treatment may be), this potential harm is accepted in the interests of proving the reliability and validity of the study—and, of course, of the treatment. Unfortunately, medical/pharmaceutical studies take time to come to fruition, and so it is entirely possible that the control group never receives the new treatment—and is thereby harmed.
To maintain that journalists or media scholars are not doing rigorous research is sophistry. The best journalists tend to be superb researchers as well as writers. To maintain credibility, they have to be. The research that goes into any study from which valid and reliable conclusions can be drawn is still research, whether it eventually appears in the National Enquirer or The New York Times or the Journal of Mass Media Ethics. And, for anyone engaged in something that could be called an educational venture (as student, professor, or professional), the training alone probably falls under the “research” rubric when dealing with journalism.
During the many years since I started studying and teaching mass communication, I have run into situations where individuals were harmed, even physically harmed. But none of the studies with which I’ve been associated (as student or researcher) led to lasting harm, so far as I can tell. And a study doesn’t have to risk physical harm: there also are studies that should be avoided or modified to prevent emotional or mental harm. Think of the studies of obedience to authority conducted by Stanley Milgram at Yale. He made some subjects believe that they had been ordered to administer increasingly severe electric shocks to another person whenever wrong answers were given. In point of fact, nobody actually received an electric shock (the supposed subject, in another room, was an actor)—but most of those assigned to push the button did so—sometimes under protest, but they did it just the same. (This study was conducted in the aftermath of the “I was only obeying orders” plea in the war crimes trial of the Nazi Adolf Eichmann.) That there was no electric shock was unimportant: for anyone to deny that the shocker might feel guilt, even lasting guilt, would be short-sighted. Harm would be done to the shocker, if not the shockee.
Here’s another example, one in which harm was avoided, in the days before IRBs were mandated. After volunteer subjects were shown a violent and bloody traffic safety film in a study to validate a paper-and-pencil test to measure anxiety levels, a test that could lead to many useful and important academic publications, they were also shown the equivalent of today’s Internet kitten photos to lower any anxiety the safety film might have created. While this study might have been fair game for an IRB—it could have led to safer driving (a good thing) or some sleepless nights (a bad thing)—the lesson to be learned here is that the researcher and his colleagues were concerned enough about the subjects to go to lengths (which used up valuable research time) to reduce the possibility of emotional harm.
Although there are myriad requirements that are considered to be part of journalistic or other mass communication ethics, their application is never simple. Arguments that one IRB may apply different criteria than another IRB at another institution, that journalism has its own function, one that the IRB doesn’t understand (as argued above), or that “nobody could possibly be harmed by this particular protocol” (for example, observation rather than promoting action, even action as innocuous as asking for an opinion) have some validity.
Although Deni Elliott and I may continue to disagree over the importance of IRB approval of human subject research in the fields of journalism and media studies, I believe that the current system is functionally inefficient—even if legally based and philosophically moral—for much, even most, of the scholarly study of journalism and communication. The obvious solution? Retain IRBs dealing with human subject research, enforce their use in the normal course of internal faculty governance and not on the editorial desk, but change some of the definitions so that they are applied more appropriately to our field and the deadlines and interviews that are an integral part of it.
Additionally, journalism is time-sensitive to an extent that few other socially significant institutions are. If something significant occurs (a politician speaks on a college campus, a doomed airship flies overhead, a terrible tragedy strikes a school, or Mother Nature goes on a rampage), it is time for “fire bell” research. Someone interested in the effects of communication can’t wait, if valid and reliable data are to be produced, and certainly not for the convenience of an IRB.
- The author of this comment, John Michael Kittross, has edited Media Ethics magazine for the past quarter-century, and was editor of the Journal of Broadcasting from 1959 to 1972. He taught at the University of Southern California, Temple University and at Emerson College. His favorite teaching assignment was “research methods.”