Paper presented at the WAPOR regional conference on "Elections, News Media and Public Opinion" in Pamplona, Spain, November 24-26, 2004.

US Election 1948: The First Great Controversy about Polls, Media, and Social Science

Hans L Zetterberg,
ValueScope, Stockholm

Abstract

The nation-wide polling of electorates is an invention by American journalists in the 1930s, not by social scientists. The 1948 failure of the U.S. pre-election polls was an occasion for social scientists to stake explicit claims on the opinion research published in mass media. The Social Science Research Council organized a "Committee on Polls and Election Forecasts" that concluded that the 1948 polls were not up to the standards of science. Their report was discussed at a conference at the University of Iowa from which a verbatim account is preserved (Meier and Saunders 1949). This paper reviews the main criticisms, and how pollsters, particularly George H. Gallup and his international network, later coped with them. We deal with issues raised at the Iowa conference such as the separation of the research question from the interview question, the reporting of winners rather than merely of candidates ahead prior to an election, question wording and the use of secret ballots, the extension of the interview period into proximity with Election Day, probability sampling, turnout measures, and the allocation of undecided but likely voters. We also mention some issues that did not get attention in the 1949 discussions: post-stratification by vote in the previous election, Election Day polling, and what happens to the voters when a pre-election poll is published. The methodological advances since 1948 have come mostly from researchers with access to nation-wide interviewing set-ups, a fact which explains why the advances often come from researchers in private organizations.

 

Polling: An Invention in Journalism

Opinion polling is a child of the newspaper world. Only later did the academic world of social science enter as a stern stepfather. This paper deals with the confrontation between professors serving the needs of science and pollsters serving the needs of journalism that took place in the wake of the failure of the polls in the 1948 US presidential election.

Nation-wide polling for newspapers had started in 1935. The previous half century had seen a tremendous increase in the number of newspapers and in their circulation and advertising revenues. The journalistic contents of the papers had changed from an exclusive dealing with news to an increasing involvement with features. Features in American newspapers had begun as comic strips, special pages for sport and entertainment, fashion, holiday travel, religious sermons, serialized novels, et cetera. European newspapermen long thought that the many features degraded journalism and diverted it from the task of delivering the news. But the readers wanted features; circulation increased, advertising space and revenue increased, and the increased income allowed the papers to do a more far-reaching and thorough journalistic job.

In this positive spiral, opinion polls became a late addition to the line of features in the media. Elmo Roper polled for one single publication, the business magazine Fortune. George H. ("Ted") Gallup did not deliver his stories directly to any paper. He had a partner in Chicago, Harold Anderson, who ran Publisher-Hall Syndicate, a business providing papers with editorial material. This included both features and columnists such as Sylvia Porter, who wrote about finance so that any American could understand. Gallup furnished Anderson with a new and unique product that no one else in his line of business had. Anderson loved the Gallup material and marketed it with enthusiasm. He offered it in the first place to the biggest paper in each city. This strategy was copied from the early success of the Associated Press, which had started by giving a sole franchise to one paper in each city. At its peak, over 200 papers subscribed to the Gallup releases. A market researcher, Archibald Crossley, thought that his firm could duplicate this success, and for a few years his firm had opinion polling as a sideline.

These three pollsters, Roper, Gallup, and Crossley, developed two research products for the media: the issue poll and the electoral standing poll.

A critical task in all public opinion polling on issues is the choice of topics for the questioning. It is important to ask questions revealing the public's concerns rather than the concerns of the pollsters and their editors. Gallup solved this problem in the late 1930s by regularly asking "What is the most important problem facing the country today?" He did not want to define all issues, or to have his editors take all the initiatives. Nor did he want to poll for interest groups. In the ideal issue poll, his respondents, the general public, should have the main say in defining the issues! Their views on the issues should be known to the public and to the leaders of the public through the media without any filters or restrictions. The full measure of Gallup’s contribution is not only a scientific application of sampling, interviewing, and quantification of responses, but a social innovation, i.e. polling of the public, on the issues defined by the public, for the benefit of the public.

Elmo Roper, who interviewed the general public in order to publish his results to the elites of the business community, the readers of Fortune, did not on the surface meet Gallup's ideals. But the Fortune polls became widely cited in more popular media, so, at least in the beginning, the difference tended to be ignored.

In this paper I will not deal with issue polls but with electoral standing polls. We shall follow how Gallup coped with the failure of his own poll and those of his colleagues Archibald Crossley and Elmo Roper in the presidential election in 1948.

It is essential to remember that public opinion research for the general public has its roots in journalism, not social science. A number of social scientists became involved in large-scale government survey research during World War II. Two research centers with nation-wide fieldwork for opinion research and experience in survey research for the war effort moved to universities in the 1940s. One of them, NORC at Chicago, had roots in the journalistic tradition of polling. The other, SRC at the University of Michigan, had its origin in the Department of Agriculture in Washington DC.

To sum up, on the American scene in the years after World War II we have four major institutions for national public opinion research, of which three have journalists or former journalists as their driving directors.

The 1948 Failure of the U.S. Polls of Electoral Standing

In the 1948 presidential election in the United States, the Gallup, Roper, and Crossley polls had created a widespread certainty of Truman's defeat. Some papers, including The Chicago Tribune, a subscriber to the Gallup poll, even announced Truman's loss in front-page headlines before the returns were in. Instead of facilitating a transition in government, the polls had misled the presidential candidates and all other politicians, the Washington bureaucrats, the media, and the public.

Table 1. Election Results and Polling Results, Presidential Election 1948

Candidate         Party            Electoral   Percent of     Final Gallup   Final Roper   Final Crossley
                                   Votes       Popular Vote   Estimate       Estimate      Estimate
Harry Truman      Democrat         303         49.6%          44.5%          38%           45%
Thomas Dewey      Republican       189         45.1%          49.5%          53%           50%
Henry Wallace     Progressive      0           –              –              –             –
Strom Thurmond    States' Rights   39          –              –              –             –

The failure did not affect foreign polling operations nearly as much, but even European pollsters were occasionally accused of engaging in an intellectual scam. More important, academic survey researchers in the United States became upset as their research method was publicly questioned. Rensis Likert (1948) rushed to the nearest issue of Scientific American and argued that no matter how wrong Gallup, Roper, and Crossley had been in the election, "it would be as foolish to abandon this field as it would be to give up any scientific inquiry which, because of faulty methods and analysis, produced inaccurate results." The shock caused an intellectual crisis for polling and a flurry of reflections. The failure of the polls in 1948 brought the academic and private survey researchers together.

The Social Science Research Council organized a "Committee on Polls and Election Forecasts." It was composed of academics from mathematical statistics (Frederick Mosteller of Harvard University, Chairman of the Committee), political science, social psychology, sociology, history, and other fields. The pollsters gave the committee all the source material from their pre-election studies and full access to their staffs. The committee engaged a technical team that worked on and re-tabulated the data for three weeks.

In their 400-page report (Mosteller et al. 1949), the committee concluded that the pollsters had overreached their ability to pick the winner in the 1948 election; the possibility of a close election was evident from their own data and from the level of measurement errors in past elections.

These errors were not new; the committee found previous instances in which they had been documented. In all, the conclusions from the committee were grim reading for Gallup, Roper, and Crossley.

After the publication of the conclusions from the Social Science Research Council, a conference on the topic was held in February 1949 at the University of Iowa under the title The Polls and Public Opinion (Meier and Saunders 1949; all page references in the present paper from conference contributions are from this work). At that time the full staff report was still not available to the participants, although many of them were aware of at least some of its findings. The chairmen of the sessions were mostly from the University of Iowa, for example, Robert Sears of the Psychology Department, Director of the famous Iowa Child Welfare Research Station. Some prominent outsiders were also invited to chair sessions; thus F. Stuart Chapin, the Head of the Sociology Department at the University of Minnesota, chaired the session on future trends of opinion sampling at the symposium.

There were many editors and several persons from schools of journalism at the conference. Gideon Seymour, Vice President and executive editor of the Minneapolis Star and Tribune, was the main spokesman for journalism. Somewhat surprisingly, Mr. Seymour proved to be a satisfied client of polls. True, his paper had misled its readers by publishing the polls about the presidential race. But the local poll had been correct! And, as all newspaper editors know, the local audience is bread and butter, and the national audience is a luxury. Mr. Seymour liked that he had presented a state poll in the news section of his paper showing that Minnesota's voters preferred the winning Hubert Humphrey (Democrat) for senator, while his editorial page had endorsed the re-election of Senator Joe Ball (Republican). Polls had solved a common dilemma for an editor-in-chief; editorial opinion must at times be allowed to differ from the readers' opinion, and, thanks to the polls, the paper had an easy way to publish both.

Francis H. Russell from the State Department and Morris H. Hansen of the Bureau of the Census represented the federal bureaucracy. Dr. Hansen was active in the discussions; here spoke a future top statistician. Although he represented the census with its total enumeration of the population, he spoke glowingly about samples. But he underlined the need to use probability sampling, which the pollsters had not used.

Well-known university survey researchers were invited. Bernard Berelson of the University of Chicago demonstrated the usefulness of looking at historical events when interpreting opinions, an implied criticism of both journalists and pollsters who focused on the situation of the day. Stuart C. Dodd, Director of the Public Opinion Laboratory at the University of Washington in Seattle, and some of his coworkers presented relevant suggestions from experiments in their statewide surveys. Hadley Cantril from Princeton University had a paper read. Paul F. Lazarsfeld from Columbia University participated with many spirited comments. He brought most of the laughter to the proceedings, for instance by telling of his experience from a recent trip to Europe:

I just came back from Europe and I spent election week in Norway and election week end in Sweden, and the way things are discussed there is so: "Do you have a ‘Gallup’ yourself?" "Has Crossley’s ‘Gallup’ been better than Roper’s ‘Gallup’?" (laughter). And I tried to understand what they meant by "Crossley’s ‘Gallup,’" and then I found out that their notion of the word "Gallup" is the American word for "polls." What I need is a "Lazarsfeld’s Gallup." I have definitely lost status by claiming that I know Mr. George Gallup, who does excellent polls, because no one really believed me. (p.194)

Lazarsfeld held that the SSRC report was not particularly constructive as a basis for future work.

The head of NORC, Clyde W. Hart, was active at the conference. He took a sympathetic stand to polling for media, and emphasized that academic researchers also had a responsibility to advance the opinion research used in journalism. Clyde Hart also said, "Our samples are always samples of discrete individuals in a population; they are not samples of a public" (p 28). This seemingly innocuous remark pointed to a serious gap between academic theory and research. Public opinion, as conceived by social and political philosophers, including the fathers of democracy, was different. As specified by early students of public opinion such as Lowell (1913) and Lippmann (1922), public opinions emerge in lasting and functioning networks, "publics." These authors assumed that a public had such a density that the participants could talk and argue about a common issue so that everyone's view became known and influenced by everyone else's view. They called the result of this process "public opinion," not necessarily unanimous, but one that emerged after all sides had been heard and considered.

Hart emphasized the need for better integration of theory and research in this field, and many academic interventions at the symposium called for a better theory of public opinion. This integration of theory and survey research may have been more relevant for polling on issues. Since the problem before the conference was the electoral standing poll, Hart's theoretical concerns tended to be bypassed without conclusions.

No one from the Survey Research Center in Michigan bothered to come to the Iowa conference. SRC leaked the message that they had had an unpublished survey with a probability sample in the field at the time of the 1948 election. It had shown Truman's lead.

Neither did Elmo Roper, a big promoter of polling, come to the conference; he had been further off the mark than the other pollsters in the 1948 election. J. Stevens Stock from the Opinion Research Corporation in Princeton attended from the polling industry, as, of course, did Archibald Crossley and George Gallup.

The pollsters at the conference graciously accepted most of the committee’s criticism but apparently smarted under its know-it-all tone. Dr. Gallup said:

Up to this point we in the field of public opinion research have had to carry the ball ourselves with comparatively little help, but with plenty of criticism, from the social scientists.

The Social Science Research Council has laid the groundwork for a healthy and continuing partnership. I for one am ready at all times to work with any social scientist or any group of social scientists on any problem of public opinion research (p. 223).

Professor Samuel A. Stouffer of Harvard University represented the Social Science Research Council Committee on Polls and Election Forecasts. He was one of the co-authors of the report; the main author, Frederick Mosteller, could not attend. Stouffer was a stellar figure in the field in those days, working on a summary, evaluation, and methodological lessons from over 500 survey studies of American soldiers in World War II (Stouffer et al. 1949).

Samuel Stouffer did not backtrack one iota from the committee’s conclusions that the 1948 polls were not up to the standards of science.

However, Stouffer added:

I may say that the more I worked on this report, the more I felt the debt that we owe to these men who have been willing to risk their own money in going out and trying to learn something about American political behavior. The pollsters have been ahead of the universities; the universities have been tagging along behind. This is no time for the universities to say, "Oh, well, this is something we don't want any part of." This is the time, it seems to me, for the universities to say that in this device, this new invention that we have, lies one of the great opportunities for developing an effective social science. And we have a responsibility, we in the universities, to do our best to help improve these techniques. With the improvement of these techniques I think we can have every confidence that we are going to improve social science, and I think we are going to help our country, because I believe in this work as an instrument of democracy. (p 214.)

We may summarize the evaluation of the polling experience in the 1948 presidential election by separating George H. Gallup’s scientific application of survey sampling and interviewing from his social invention of polling of the public, on the issues defined by the public, for the benefit of the public. The scientific application was on the right track but must be improved. The social invention was a brilliant and promising instrument for democracy.

Development Work After 1948

Let us now turn our attention to how George H. Gallup and other pollsters responded to the call from social science to improve polling techniques, and how, in real practice, they coped with the many issues that the 1948 polling failure had raised. Much of my information on this process comes from the yearly conferences of Gallup International Association.

Separation of the Research Question from the Interview Question

The questions to be answered by an opinion research project are very different from the questions in the interviews. The answers to the interview question "How are you going to vote?" do not answer the research question on who is ahead in the election. The latter type of question – which the Germans call Problemstellung and Hadley Cantril (p 303) in his paper for the Iowa meeting called "the problem of setting our problem" – requires much more. It calls for a conceptual analysis which, in turn, may suggest, among other things, indicators of turnout, of leanings among the undecided, and of the general climate of opinion about the coming election, its history so far, and its expected future course.

The 1948 debacle in the United States started a long debate on the concepts of pre-election polls and voting intentions, and also much experimentation with the research indicators necessary to do the calculations in a pre-election poll. All this work was based on a separation of the research question from the interview questions, a routine nowadays but a novelty among the pollsters of the 1940s.

"Who Will Win?" or "Who Is Ahead?"

The pre-election polls do not measure the outcome of an election. They measure who is ahead during an election campaign at the time of the interviewing. They do not answer the question "Who will win?" but the question "Who was ahead when we last looked?" This is one of the lasting insights from 1948 and a central element in the concept of a pre-election poll.

The claims in releases from pollsters after 1948 rather quickly adopted this change in writing and speech habits. Journalists' presentations of poll findings have adopted such a stance more slowly; a newsman does not readily resist the temptation to have tomorrow's news of the election outcome already in today's edition.

To know who is ahead helps both politicians in office and out of office in their planning. As we have noted, the transition of governments is one of the critical events in a democracy. It may cause crisis for individuals, upheavals for institutions, and strain on the whole democratic system. It is a great responsibility to publish a pre-election poll, and only the well-prepared should do it.

In countries with federal constitutions the step between poll findings and election outcome is particularly complicated. The SSRC Committee had pointed out that Dewey could have won the 1948 election by carrying Ohio, California, and Illinois, states which he lost by less than one percent. With this hindsight, Archibald Crossley (p 164) showed the Iowa symposium that slightly over 29,000 voters, not enough to fill one side of a Big Ten football stadium, could have accomplished Dewey’s victory. He pleaded guilty to straining the capability of his poll by presenting Truman as the loser.

The Main Interview Question in a Pre-election Poll: The Secret Ballot

George H. Gallup worked hard at the art of asking interview questions. Even before 1948 he had found it useful to approach any polling problem with five types of interview questions: (1) information questions that separate the informed from the uninformed so that their opinions can be classified separately, (2) open-end questions that draw the respondent out, (3) closed-end questions that present the specific alternatives, (4) reason-why questions which get at the causes as far as they are conscious and can be stated by the respondents, and (5) intensity questions that ask how strongly the respondents feel (Gallup 1947).

In the search for the most valid way of asking the main question in a pre-election poll, the Gallup Poll had the benefit of Lawrence Benson's work on the secret ballot (Benson 1941). In a large-scale survey, too, one could duplicate the accustomed voting procedure of a real election by giving the respondents printed ballots. The chosen ballot was put in an envelope and/or in a ballot box provided by the interviewer. The interviewer did not know for whom his or her respondent had voted. The interviewees appreciated this procedure and its seriousness. The solution was superior to the usual questions and answers in a questionnaire. The number who reported themselves as undecided about how to vote declined significantly. In 1948 only a sub-sample had been interviewed by Gallup with the secret ballot, and no differences had been found in the distribution of candidate support between the sub-sample and the full sample. Nevertheless, for the future, Gallup advocated the use of the secret ballot in all electoral standing polls.

The answers to the secret ballot were secret to the interviewers but not to the researchers in the home office in Princeton. Each ballot had the code number of the questionnaire written on its back. The information could be tabulated against other data in the questionnaire in the usual fashion.

In Gallup International Association some members soon found the secret ballot to be an indispensable tool in election research. In some countries a single party dominates the political scene. This was the case with the Congress Party in India during the decades immediately after India's independence. Eric da Costa of The Indian Institute of Public Opinion Private Ltd. found that, under these circumstances, an opposition voter feels more comfortable and willing to express his views in a secret ballot.

Sometimes a party or a candidate has policies so extreme that sympathizers are afraid to show them for fear of becoming socially isolated. Or they simply want to avoid hurting the feelings of mainstream people. For years after World War II in France, pollsters grossly underreported the strength of the Communists. Hélène Riffault and Jean Stoetzel of l'Institut Français d'Opinion Publique (IFOP) eventually got it right after Dr. Gallup had shown how to ask the voting question by secret ballot. They went further than the Americans. The face-to-face interviewing with a ballot box that they used meant that the interviewer would not know the respondent's political preference. However, they did not use any code number on the ballots. In addition, they asked no background questions and no other questions at all except the ones on voting in the coming and last elections. This privacy overcame the Communist voters' tradition of suspicion, inherited from the war years of clandestine cell work.

The procedure was expensive, but a great intellectual victory. Such extraordinary ingenuity in research and courage in business investment became parts of the early folklore in Gallup International Association.

Poll to the Last Day

Paul F. Lazarsfeld, Bernard Berelson, and Hazel Gaudet (1948), in their study of the 1940 presidential election in the local community of Erie County, Ohio, had repeated interviews with the same persons during and after the campaign. Almost none of the voters had changed their minds during the campaign. This finding has turned out to be the exception rather than the rule in election research. The simplest lesson from 1948 for the pollsters and their publishers was to cover the last weeks and days before the election with continuous polling.

Perhaps a handful in a million change their class position in the final week of an election campaign. If everyone voted entirely according to class, there would not be much change in party and candidate preference in the last few days before an election. Class is a major determinant of voting, albeit not the only one. Ethnicity, religion, regional traditions, et cetera, and also the election issues and the image that political candidates project, are related to voting. More important, "class voting" is a declining trend all over the world and "issue voting" is an increasing trend, and the United States has led this development. All this has made the final weeks of campaigning important for the outcome.

"Poll to the last day!" was the message that George H. Gallup brought to his foreign affiliates in Gallup International Association when they met in Paris in 1949.

Probability Sampling

All pollsters must give an answer to the most frequent question asked about polling: "How can you tell what the whole country thinks when you interview only one or two thousand persons?"
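Part of the answer lies in the arithmetic of probability sampling: for a random sample, the sampling error depends on the size of the sample, hardly at all on the size of the population. A minimal sketch of the standard calculation (the function name and figures are illustrative, not any polling house's routine):

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the ~95 percent confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling near 50 percent in a sample of 1,500 persons:
print(round(100 * margin_of_error(0.50, 1500), 1))   # about 2.5 points
# Note that the population size does not enter the formula at all.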

The debacle after the 1948 election led the American Institute of Public Opinion in Princeton to change its fieldwork and move away from quota sampling toward probability sampling. It also distributed a pamphlet explaining probability sampling to a wider audience. This decision to go from "good practice" to "best practice" in sampling was a direct result of criticism from mathematical statisticians at the universities and at the Bureau of the Census. At the time, these statisticians were not in the habit of making a distinction between good and best, and they were rather rigid in their views. They unanimously disparaged quota sampling. It acquired a bad image, except in some market research that did not need much accuracy to give enough guidance to the clients.

George H. Gallup's decision to abandon quota sampling was not primarily grounded in public relations. Given the seriousness with which Gallup took his polling, the change to probability sampling was a natural but slow decision. At the Iowa meeting he was still hesitant: "I confess I get irritated by some of our critics who assume that area sampling is the answer to all of our problems…" (p. 221). However, he also felt that only the best was good enough in the Gallup Poll. Its implementation of probability in all steps of the sampling could be very rigid, as was the case in Gallup's fieldwork for Samuel Stouffer's book on civil liberties (Stouffer 1955). But in the main it was pragmatic enough to allow for speedy research at competitive costs.

Humphrey Taylor, Head of Louis Harris and Associates, commented with the hindsight of 1998 on the controversy between quota and probability sampling after the 1948 debacle:

Virtually all public opinion surveys conducted in the United States since then – whether conducted face-to-face or by telephone – have used some modified version of probability (or random) sampling. Indeed, for American researchers quota sampling is almost a dirty phrase.

The situation in Europe has been quite different. The great majority of face-to-face opinion surveys, including election surveys, conducted in France, Germany, Italy, the United Kingdom and other European countries have used some form of quota sampling, with the interviewers given considerable latitude to find and select respondents who fit the quota cells (usually based on sex, age, one or two socio-economic factors and other variables). Giving the interviewers this freedom to select whom to survey is unacceptable in the United States, but the European quota method has worked reasonably well over many years and has been widely accepted, not only by practitioners and their clients but also by many European academic researchers – something which Americans find very puzzling. (Taylor 1998, p 978)

Taylor exaggerates the acceptance of quota sampling in Europe. What he says is true, of course, for his own company that at the time he wrote operated in 80 countries worldwide. It is also true that the English, French, and German members in Gallup International Association did not follow the lead from Princeton to move toward probability sampling. But others did, including the Dutch and the Scandinavian members. Outside of Europe, Roy Morgan in Australia developed a scheme for probability sampling that included interview workloads and administration that was much admired in the polling community.

The fact that Princeton's call for probability sampling was not followed unanimously in the Gallup International Association may be interpreted as a first and prime example of weakness in the voluntary structure of this association of research firms. However, there are also other considerations. The acceptance of probability sampling in polls outside the United States illustrates the awkwardness that may occur when methods developed for the social structure of the United States are transplanted to alien structures.

Probability sampling for omnibus research was implemented in three Scandinavian countries in the period 1955 to 1965. The Danes implemented a version of Likert's area probability sampling. One may ask why they did not make use of the national registers of population and dwellings that were available in Denmark, or of the election rolls prepared by the government. The American method is designed for a country without national registers and with a requirement that a voter must personally register to get on the election roll.

The Norwegians made use of their official registers in their area probability sample. In remote areas of the country they cut the density of the sample in half, and each interview from such areas was counted as two interviews. This is quite in line with the mathematical requirements of this kind of sampling; you need not use equal probabilities, but each person's probability of being included in the sample must be known. You restore the sample before tabulation by weighting the interviews. This procedure for coping with expensive-to-get respondents, or hard-to-get ones, was called "sub-sampling and up-weighting." With the advent of computers it became more accurate and much easier to do. It became part of a general routine called "post-stratification." Each interview in the tabulation phase of a survey gets a weight that may be higher or lower than 1.0 to achieve an ideal sample.
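The arithmetic of sub-sampling and up-weighting is simple: each interview is weighted by the inverse of its known inclusion probability. A minimal sketch, with illustrative figures rather than the actual Norwegian design:

# Interviews carry their known inclusion probability; strata sampled
# at half density get weight 2 in the tabulation.
interviews = [
    {"party": "A", "incl_prob": 1.0},   # ordinary area, full density
    {"party": "B", "incl_prob": 0.5},   # remote area, half density
    {"party": "A", "incl_prob": 0.5},
]

for iv in interviews:
    iv["weight"] = 1.0 / iv["incl_prob"]      # 0.5 -> weight 2.0

total = sum(iv["weight"] for iv in interviews)
share_a = sum(iv["weight"] for iv in interviews
              if iv["party"] == "A") / total
print(round(100 * share_a))   # 60 percent, against 67 unweighted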

When the Norwegian interviewers could not find a person who had been selected for an interview, they used a substitute from the same household. Here the Norwegians deviated from the "best practice" of probability theory to "good practice." They could make a reasonable case that the error introduced by using a substitute was smaller than the error introduced by having a non-response.

When The Swedish Institute of Public Opinion (Sifo) was founded in 1955, major advances in sampling theory and practice for the social sciences had taken place in the United States, documented in works by Cochran (1953), Deming (1950), and Hansen, Hurwitz, & Madow (1953). In Sweden a doctoral thesis on the same topic was under way by Dalenius (1957). He was engaged by Sifo to construct a master selection of localities in which the interviews for national surveys would take place. Between 1955 and 1967 the Swedes used a sample that was random in all three steps. Two sub-samples with lower densities and up-weighting reduced the statistically relevant non-response rate to less than 10 percent. As in neighboring Norway, one sub-sample comprised expensive-to-interview people in the sparsely populated areas of the country. The other comprised people hard to interview within the time frame of an omnibus, because of their absences from home, reluctance to see the interviewers, et cetera.

Probability Sampling with a Very Short Interview Period

George H. Gallup’s two lessons from the 1948 failure, "Poll to the last day" and "Use probability sampling," contained a dilemma. To achieve an adequate response rate, a probability sample, unlike a quota sample, requires a prolonged interviewing period – at best four to six weeks – with repeated efforts to find the hard-to-get respondents. While the final interviews in a typical probability sample may be done very close to Election Day, the bulk of them come from an earlier part of the campaign. And the hard-to-get interviewees reached in the final days are not representative of the electorate as a whole; they may have traveled more, they may have worked at irregular hours, they may have been in hospitals or in custody, or, they may have a different, less outgoing personality than those who were easy to get for an interview. A quota sample has no such concerns; it does not require a prolonged interviewing period.

In the late 1960s Karin Busch and I worked on this dilemma at Sifo. Our problem was to find a probability sample that allowed several weeks to secure interviews but only a few days to interview the chosen respondents. We have reported the solution at another WAPOR seminar and I shall not repeat it here. The statistical solution, learned from the literature and practiced earlier, separated the persons to be interviewed into two groups: (1) a quick-access stratum, and (2) a hard-to-contact stratum. We broke with the habit of interviewing departments that, ashamed of their non-respondents, threw away their names and addresses. Instead we treated them as a most valuable resource for future interviews in the hard-to-contact stratum that we constructed.
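The estimator behind such a two-stratum design is ordinary stratified sampling: each stratum is surveyed on its own schedule, and the strata are then combined by their known population shares. A sketch with illustrative figures:

# Stratified estimate from a quick-access and a hard-to-contact stratum.
strata = [
    {"pop_share": 0.85, "sample_mean": 0.47},  # quick-access stratum
    {"pop_share": 0.15, "sample_mean": 0.41},  # hard-to-contact stratum
]
estimate = sum(s["pop_share"] * s["sample_mean"] for s in strata)
print(round(100 * estimate, 1))   # 46.1 percent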

Post-stratification by Vote in Previous Election

In electoral forecasts, as in other public opinion polls, interviews can be weighted – i.e. retroactively stratified – according to figures on how the respondents voted in previous elections, to make the sample congruent with the voting outcome of that past election. George H. Gallup's celebrated pre-election poll in 1936 was based on a mail survey adjusted by information about the respondents' voting in the previous election. He admitted that the Literary Digest would probably have been right if it had used the method in 1936.

Post-stratification is routinely used to improve samples in terms of age, sex, and region, i.e. the same variables as are used for "pre-stratification." A serious concern is whether the interviewee's voting in the previous election could be used among the post-stratification variables. This was not discussed much in the aftermath of 1948, because strict post-stratification had not been generally used in the pre-election polls. It emerged as a methodological subject in the profession in the following decades. Mathematical statistics held out great prospects of gains in precision. And some pollsters believed that it could rescue bad samples and turn them into acceptable samples.

To permit post-stratification, an electoral forecast contains not only the question "What party/candidate do you intend to vote for?" but also "Which party/candidate did you vote for in the last election?" (The latter question, of course, is posed only to respondents who were entitled to vote in the previous election.) One advantage of post-stratification by vote cast in the last election is that it yields stable time series of changes in party sympathies, even with fairly small samples of interviewees. The high correlation that exists between individuals’ political preferences at various times is used to reduce sampling errors.
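In weighting terms, the adjustment gives each recall group a factor that makes the sample's recalled distribution reproduce the official result of the previous election. A minimal sketch, with invented figures:

# Post-stratification by recalled vote: weight respondents so that
# recalled shares match the official result of the last election.
official_last = {"Left": 0.48, "Right": 0.52}   # official outcome
recalled      = {"Left": 0.44, "Right": 0.56}   # recalled in the sample

weights = {p: official_last[p] / recalled[p] for p in official_last}
print({p: round(w, 2) for p, w in weights.items()})
# {'Left': 1.09, 'Right': 0.93} -- each respondent's current voting
# intention then enters the tabulation multiplied by this factor.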

However, incidences of memory errors concerning votes previously cast had made the first generation of pollsters skeptical about uncritical, mechanical and routine post-stratification by vote in the previous election. For example, Eric da Costa had found that in India practically everybody said that they had voted for the Congress Party in the last election.

In the U.K., Henry Durant rejected it entirely for election forecasts, on the basis of experience with the British elections of the 1950s and '60s. In Sweden, Karin Busch and Hans L Zetterberg found that such post-stratifications biased the results, so that the figures moved in the direction of the outcome of the previous election, not the current one. In his electoral forecasts in Norway, Björn Balstad developed a method for using information from votes in previous elections only if there were minor disparities between the previous election results and how the voters subsequently stated that they had voted. If this difference was large, Balstad relied on the unweighted figures. In France, Jean Stoetzel of IFOP, who was also a professor of Social Psychology at the Sorbonne, believed that, in the long run, the truth about the distribution of party sympathies lies somewhere between unweighted figures and figures post-stratified according to previous votes. The proportion depends on the mood of the country during the election campaign and must, in his view, be estimated separately by political experts. The experts then assess and check whether there is any reason to believe that a party's sympathizers are more ashamed of their previous electoral behavior than others, or are drawn so strongly to a new party that they wish to maintain that they voted for it previously.
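Balstad's rule can be stated as a simple decision procedure. A sketch, where the tolerance for a "minor disparity" is my illustration, not his published value:

def balstad_estimate(unweighted, weighted, recalled, official, tol=0.03):
    """Use past-vote-weighted figures only if recalled votes and the
    official result of the previous election agree within tol for
    every party; otherwise fall back on the unweighted figures."""
    minor_disparity = all(
        abs(recalled[p] - official[p]) < tol for p in official
    )
    return weighted if minor_disparity else unweighted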

The fact that the current political opinion climate biases the answers to questions about past voting has a parallel in answers to questions about registration as a Democrat or Republican in the United States. Such registration is an aid to the parties provided by many states in the Union to allow the parties to have primary elections to select candidates for the main election. If answers about registration are used in adjusting samples there is a risk of a bias in the direction of past election campaigns.

Turnout Measures

In the United States the public’s participation in elections is low. At the Iowa symposium in 1948, Samuel Stouffer said:

… with the best of probability samples, if you get 85 percent of the people telling you whom they prefer, and only 50 percent are going to vote, you have a margin of error there that is positively staggering when you think of the problem of forecasting from it (p 210).

And the pollsters agreed that it was more difficult to measure turnout in a pre-election poll than to record the political preferences of the respondents.

The magnitude of this problem is not the same in all countries. In Australia the turnout is very high because you are fined if you do not vote. In countries with proportional representation turnout is usually higher than in countries where the winner in each constituency takes all. (If constituencies, as in the United States, are routinely gerrymandered by politicians to create safe re-elections, the turnout becomes particularly low.) In most democratic countries you have automatic voter registration and do not personally have to register in advance as in the United States. Nevertheless, under all these circumstances it remains a difficult task everywhere to estimate turnout in a pre-election poll.

Since it is socially desirable to vote, more people tell the pollsters that they will vote than actually do vote. To calculate turnout the pollster cannot trust a straight question and needs other screening indicators. Gallup's Paul Perry was ready at the 1950 election with a scale measuring turnout. It had about a dozen questions, for example: Are you a registered voter (required in the US)? How interested are you in this election? Did you vote in the previous election? Do you know the location of your voting station?
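A Perry-style turnout screen can be sketched as a score over such items, with the sample cut at the expected turnout level. The items and the cutoff logic below are illustrative, not the actual Gallup scale:

# Score respondents on turnout indicators and keep the top fraction
# corresponding to the expected national turnout.
ITEMS = ["is_registered", "voted_last_time", "high_interest",
         "knows_polling_place"]

def turnout_score(respondent: dict) -> int:
    return sum(1 for item in ITEMS if respondent.get(item))

def likely_voters(sample: list, expected_turnout: float) -> list:
    ranked = sorted(sample, key=turnout_score, reverse=True)
    cutoff = int(len(ranked) * expected_turnout)
    return ranked[:cutoff]   # tabulate preferences among these only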

A probability sample comprises all sorts of respondents, such as the shy, the socially incompetent, the weak and sickly, the alcoholics, and other marginal elements. Many of these people do not go to the polls on Election Day. Unless the researcher with a fine probability sample has a very good turnout scale, he presents the party preferences of a large number of non-voters. In a quota sample, where the interviewers on the spot select whom to interview within assigned quotas, fewer such people become included. This built-in exclusion of many non-voters has contributed to the relatively good record of quota samples in election research.

This good record in election research should not be generalized to all research areas, as is often done by market research firms. A quota sample may be a biasing choice if you measure the incidence of outgoing social activities (which it may overestimate) or of mental deficiencies or consumption of alcoholic beverages (which it may underestimate).

Allocation of the Undecided

In the 1948 pre-election polls, the pollsters presented the results as if the undecided at the time of the interviews would vote like those who had already made up their minds. Dr. Gallup told the Iowa gathering: "I believe that these two errors in judgment – the failure to take a last-minute poll, and the decision to eliminate the 'undecided' voters in our sample – account for most of the error in our 1948 predictions." (p 181)

He suggested as a solution that one might assume that most of the undecided would vote the way they had voted in the previous election; the voters would, so to speak, "return home" by the end of the campaign. Other pollsters have studied recent shifts among the voters and allocated the undecided according to the relative strengths of such documented shifts. Jean Stoetzel in France asked, for each party, about the probability that his respondents would vote for it in the election. Still others asked the undecided about their leanings toward one or another candidate or party. There seems to be little consensus in the profession about the best method.
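Two of these allocation rules are easy to state side by side. A sketch with invented figures, proportional allocation versus Gallup's "return home" allocation by previous vote:

def allocate_proportionally(decided, undecided):
    """Split the undecided like those who have made up their minds."""
    total = sum(decided.values())
    return {p: v + undecided * v / total for p, v in decided.items()}

def allocate_by_previous_vote(decided, undecided, previous):
    """Assume the undecided return home to their last-election party."""
    return {p: decided[p] + undecided * previous[p] for p in decided}

decided  = {"Dem": 0.44, "Rep": 0.46}    # 10 points undecided
previous = {"Dem": 0.53, "Rep": 0.47}    # previous election shares
print(allocate_proportionally(decided, 0.10))
print(allocate_by_previous_vote(decided, 0.10, previous))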

A way of reducing the number who reported that they were undecided, and thereby reducing the problem, was to use secret ballots in all interviews.

What Happens to the Voter When a Pre-election Poll is Published?

The non-voters in 1948, George H. Gallup discovered in post-election polls, were more pro-Dewey than the voters. This circumstance was discussed only as a problem in measuring turnout. The alternative explanation is that these Republicans had learned from the published polls that Dewey would win and, therefore, did not bother to go to the polls. The thought that published polls could have such consequences was beyond the imagination of the analysts at the time.

Much early research instead focused on the possible existence of a "bandwagon effect," i.e. a general tendency to vote with the party or candidate who was ahead, or, the possibility of an "underdog effect", a tendency of some voters to give a helping hand in the form of a vote to those trailing in the polls. In spite of the fact that research evidence was inconclusive, the mere thought that such effects might be possible led some countries to ban the publishing of polls during election campaigns, at least in the final period of the campaign. This restriction in the freedom of information did not stop private polling by the parties or by the business elite. Sometimes such polls became published, not in domestic media – that was forbidden – but in foreign media.

A few decades after 1948, however, an unexpected development was noted. It was found that the knowledge of who is ahead in an election is usable information for any rational voter in his decision on Election Day. This phenomenon had long been known in legislative assemblies. There were no seats in the old Roman Senate. The senators walked around during deliberations and ended up standing close to the colleague whose position in a debate they agreed with. Every member could watch the formation of majorities and minorities around the physical positions of the senators who argued for them, and adjust his own arguments and vote accordingly. (This is the original meaning of "voting with your feet". To equate it with "exit" is a latter-day misunderstanding, now uncorrectable.) The roll-call vote in a modern senate may likewise allow a senator to change his vote prior to the announcement of the final count, as he sees how one after another of his colleagues cast their votes.

In making his final decision, a rational, ordinary voter in a modern democracy may also make use of knowledge from the polls of who is ahead in an election. So called "tactical voting" does occur.

In a system dominated by two parties, like the United States, the votes for candidates from "third parties" may vary with the size of the gap between the major candidates, as shown in the polls. If the gap is wide, more voters may be inclined to use their vote not primarily to elect a candidate, since that is seen as a foregone outcome. Instead they use their vote to "send a message" by casting it for a third-party candidate with a more pointed agenda than that of the main candidates. If the gap is narrow, fewer voters may want to "waste" their ballots on a third-party candidate.

In democracies with stricter proportional representation, no vote is wasted. In such systems a multitude of parties, some very small, usually emerges. Rational voters can use the information on which parties are ahead and behind in the election race to promote a coalition government that is most congenial to them. Several systems of proportional representation also have a threshold of, say, three to five percent, that bars parties receiving less from getting any seats at all in parliament. In such a case the rational sympathizer of a bigger party may cast his vote for a relatively congenial small party that the polls have placed balancing on the threshold of parliamentary representation. In this way the rational voter tries to make sure that the small party is available in parliament as a coalition partner for his regular party.

Election Day Polling

Another innovation since 1948 is exit polling on Election Day, i.e. interviewing as people leave selected polling places. Exit polls are used to give TV networks advance information on the election outcome while the votes are counted on election night. The first predictions reported to viewers rely on early voting returns and a large proportion of exit poll results; later in the election-night program the exit poll results in the predictions are gradually replaced with actual votes.

In theory exit polls should be a very accurate approximation of the real voting. In practice this is not the case. When humans measure humans for scientific purposes, organizational competence and stability are sine qua non for accuracy. The exit polls are ad hoc organizations set up for one day of work and then abandoned. Interviewers cannot be given practice in advance, since there is no election in advance. They are supposed to interview every n-th voter leaving the polling station. Particularly during hours when the station is very busy, they are tempted to select persons more accessible to them. Young interviewers may, for example, tend to pick young voters. Only a few interviewers work at each polling station. The interviewers are recruited to work for a whole day. Thus each interviewer makes many more interviews in an exit poll than in an omnibus survey. This gives freer play to the partisan biases that are heightened in election times. Exit polls do not use the standard way of controlling interviewer bias pioneered by Elisabeth Noelle-Neumann, that is, to have many interviewers each of whom does only a handful of interviews in each survey.
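The selection rule itself, "interview every n-th voter leaving the station," is a systematic sample, and the bias arises precisely when the rule is abandoned at busy hours. A minimal sketch of the rule (the interval is illustrative):

import random

def exit_poll_selection(exiting_voters, n=10):
    """Approach every n-th exiting voter, counting from a random
    start -- with no substitution of easier targets."""
    start = random.randrange(n)
    for i, voter in enumerate(exiting_voters):
        if i % n == start:
            yield voter   # interview exactly these voters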
Gallup Korea in Seoul, founded and headed by Park Moo-ik, has contributed an alternative to exit polling. He uses some 2,500 telephone interviews, conducted between 9 am and 2 pm on Election Day, with respondents who say that they have already gone out to vote. He can report his estimates at 6 pm when the polling stations close.

Table 2. Actual Election Figures and Gallup Korea Poll Predictions, Korean Presidential Elections 1987 and 1992 (percent of popular vote)

1987
Candidate                  Roh Tae-woo   Kim Young-sam   Kim Dae-jung   Kim Joung-phil   Others
Actual election figure     36.6          28.0            27.1           8.1              0.2
Gallup Korea prediction    34.4          28.7            28.0           8.4              0.5
Deviation                  2.2           0.7             0.9            0.3              0.3

1992
Candidate                  Kim Young-sam   Kim Dae-jung   Chung Ju-yung   Park Chan-jong   Others
Actual election figure     42.0            33.8           16.3            6.4              1.5
Gallup Korea prediction    39.5            31.1           15.7            12.4             1.2
Deviation                  2.5             2.7            0.6             6.0              0.3


The procedure is not only an initial help to those who report on the stream of election results from voting districts. It has the added advantage that it tests the whole standard sampling and interviewing set-up of the research organization. This is one of the internal rationales for doing election polling. Elections provide one of the few available external answers that test both the scientific and administrative designs of a survey organization. Note that the entire television audience is the jury judging the achieved degree of accuracy. To them it is a learning experience about the kind of approximations achieved by modern surveys.

As we see in Table 2, Gallup Korea predicted by this method Kim Young-sam's loss in 1987 and his win in 1992 very closely. Park Moo-ik came to be nicknamed "Mr. One Percent." This is a nice journalistic ploy to celebrate that he could accurately pick the winner in these elections. But a social scientist in the Mosteller tradition of measuring deviations between pre-election polls and election results would give much more weight than the journalists to the very large deviation for the minor candidate Park Chan-jong in 1992, noting that the poll showed nearly double the share of votes that the count recorded. Here you have in a nutshell the difference between journalistic relevance and social science precision.
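The contrast can be made explicit with a Mosteller-style error measure computed from Table 2. A sketch using the 1992 figures; the average absolute deviation is only one of several such measures discussed in the Mosteller report:

# Average absolute deviation, poll vs. count, Table 2 (1992).
actual = {"Kim YS": 42.0, "Kim DJ": 33.8, "Chung": 16.3,
          "Park CJ": 6.4, "Others": 1.5}
poll   = {"Kim YS": 39.5, "Kim DJ": 31.1, "Chung": 15.7,
          "Park CJ": 12.4, "Others": 1.2}

deviations = {c: abs(poll[c] - actual[c]) for c in actual}
print(deviations)                                  # Park CJ: 6.0 points
print(sum(deviations.values()) / len(deviations))  # about 2.4 points

The journalist sees a correctly picked winner; the Mosteller tradition sees an average error of 2.4 points and one estimate nearly double the actual vote.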

 

By Way of Conclusion

The 1949 report and conference on the failure of the polls in the United States in 1948 contained valid criticism by social scientists of the methods of the pollsters of the 1940s. All of the procedures so criticized have since been given appropriate solutions. Some problems took years of intelligence and research to solve. Many researchers have contributed to these solutions through initiatives, analyses, and experiments. Unfortunately, some 50 years later, we see new pollsters emerging, often in market research firms, who do not seem aware of these solutions. And many journalists who write up or present polling results are unaware of the problems defined in 1948 and of their solutions.

The research teams that made the major contributions to solving the 1948 problems would not necessarily be found among the academic social scientists. We would again have to mention George H. Gallup, his chief statistician Paul Perry, and also some members of the Gallup International Association. A parallel development took place in the INRA network. In Europe their advanced election methodology developed primarily through Elisabeth Noelle-Neumann and her chief statistician, Friedrich W. Tennstädt, at the Institut für Demoskopie in Allensbach.

Why have the pollsters more than the academics taken the lead in developing electoral standing measures and issue polls? One answer is that the pollsters, much more than the social scientists at the universities, have access to regular nation-wide interviewing set-ups that provide immediate opportunities for developing and testing answers to questions about polling raised by themselves, by academics, by politicians, and by journalists. To be sure, NORC at the University of Chicago and SRC at the University of Michigan have been very successful academic laboratories. However, the number of such academic facilities has not grown at nearly the pace of the polling organizations in the private sector. In addition, the latter have felt the pressure of commercial competition in developing their methodology. In recent years the competition between the in-house polls of the American TV networks and their newspaper partners has been fertile ground for advances in cost effectiveness, and also for polling methodology with new ways of selecting and contacting the interviewees and extracting results from their answers.

Albert Einstein came from a family of skilled instrument makers. Scientific progress thrives on closeness to instruments and other opportunities for observation.

 

References

Benson, Lawrence E., 1941. "Studies in Secret-Ballot Technique", Public Opinion Quarterly, vol 5, 79-88.

Cochran, William G., 1953. Sampling Techniques, Wiley, New York.

Dalenius, Tore 1957. Sampling in Sweden: Contributions to the method and theories of survey practice, Almquist & Wiksell, Stockholm.

Deming, W. Edwards, 1950. Some Theory of Sampling, Wiley, New York.

Gallup, George H., 1947. "The Quintamensional Plan of Question Design", Public Opinion Quarterly, vol 11, no. 3, 385-393.

Gallup, George H., & Saul F. Rae, 1940. The Pulse of Democracy: The Public-Opinion Poll and How It Works, Simon & Schuster, New York.

Hansen, Morris H., William N. Hurwitz, & William G. Madow, 1953. Sample Survey Methods and Theory, Wiley, New York.

Lazarsfeld, Paul F., Bernard Berelson, and Hazel Gaudet, 1948. The People's Choice: How the Voter Makes Up His Mind in a Presidential Campaign, 2nd ed., Columbia University Press, New York, NY.

Likert, Rensis, 1948. "Public Opinion Polls," Scientific American, vol. 179, pp. 7-11.

Lippmann, Walter 1922. Public Opinion, Harcourt, Brace and Company, New York, NY.

Lowell, A. Lawrence, 1913. Public Opinion and Popular Government, Longmans Green, New York, NY.

Meier, Norman C., & Harold W. Saunders (editors), 1949. The Polls and Public Opinion: The Iowa Conference on Attitude and Opinion Research Sponsored by the State University of Iowa, Holt, New York.

Mosteller, Frederick, with the collaboration of Leonard W. Doob and others, 1949. The Pre-election Polls of 1948: Report to the Committee on Analysis of Pre-election Polls and Forecasts, Social Science Research Council, New York.

Stouffer, Samuel A., 1955. Communism, Conformity, and Civil Liberties: A Cross-Section of the Nation Speaks Its Mind, Doubleday, Garden City, NY.

Stouffer, Samuel A., et al., 1949. The American Soldier, Volumes One and Two, Princeton University Press, Princeton, NJ.

Taylor, Humphrey, 1998. "Opinion Polling", in Colin McDonald and Phyllis Vangelder (editors), ESOMAR Handbook of Marketing and Opinion Research, 4th edition, Amsterdam, pp. 974-995.

Zetterberg, Hans L. 1967. "System Veckobuss", Sociologisk forskning, vol 4, pp 139-151.