Professor James E. Grunig: Basic Concepts for Research in Public Relations


Conceptualizing Quantitative Research in Public Relations. Part 3

In their book, Using Research in Public Relations, Broom and Dozier described five approaches to the use of research in public relations programs. Four of these approaches do not include research as an integral part of the ongoing management of such programs.

The first of these approaches is to use no research, which is all too common in public relations practice. A second is the use of informal research, such as talking to members of the public or the media, reading reports, or listening to unsolicited feedback from superiors or members of a public without systematically planning the research or analyzing the results. This is another common, but limited, approach.

A third is “media-event” research, in which organizations typically conduct a poll, for example, to determine how satisfied participants are with an organization or the extent to which they favor an organization’s policies. The organization then publicizes the results if they are favorable in order to marshal support for the organization or policy.

Organizations typically conduct the fourth type of research, evaluation-only research, to show managers or clients that programs have been effective. If the research shows that a program has not been effective, the organization typically downplays the results or is forced to discontinue a program. However, the research plays no role in planning or improving communication programs.

In contrast to these typical approaches to research in public relations, Broom and Dozier recommended what they called “the scientific management of public relations.” Just as research in science is used to develop, test, and revise theories, so research in scientifically managed public relations is used to develop, test, and revise communication programs. With the scientific management of public relations, research also is a part of the communication process itself. Scientific management of public relations includes research at different levels, or units, of analysis; and it is based on research conducted both before and after a program is conducted. It can also be done with both quantitative and qualitative methods, although this chapter concentrates on quantitative methods.

Levels of Analysis in Public Relations Research

Public relations practitioners and scholars have strived for many years to explain the value of communication programs. Until recently, they have focused most of their efforts on the evaluation of individual communication programs, such as media relations, community relations, or employee relations. In fact, the root of “evaluation” is “value.” Focusing only on the evaluation of individual programs is too narrow, however, although evaluation should be an ongoing part of the scientific management of all communication programs.

In the Excellence project (L. Grunig et al., 2002), my colleagues and I searched the literature on organizational effectiveness for ideas that could explain the value of public relations beyond the effects of individual communication programs. We believed it was necessary to understand first what it means for an organization to be effective before we could explain how public relations makes it more effective. We learned that effective organizations achieve their goals, but that there is much conflict within the organization and with outside constituencies about which goals are most important. Effective organizations are able to achieve their goals because they choose goals that are valued by their strategic constituencies both inside and outside the organization and also because they successfully manage programs designed to achieve those goals.

Effective organizations choose and achieve appropriate goals because they develop relationships with their constituencies, which we in public relations call “publics.” Ineffective organizations cannot achieve their goals, at least in part, because their publics do not support, and typically oppose, management efforts to achieve what publics consider illegitimate goals.

Public relations makes an organization more effective, therefore, when it identifies the most strategic publics as part of strategic management processes and conducts communication programs to develop effective long-term relationships with those publics. As a result, we should be able to determine the value of public relations by measuring the quality of relationships with strategic publics. Furthermore, we should be able to evaluate individual communication programs by measuring their effects on indicators of a good relationship.

Organizations must be effective at four increasingly higher units of analysis: (1) the program level, (2) the functional level, (3) the organizational level, and (4) the societal level. Effectiveness at a lower level contributes to effectiveness at higher levels, but organizations cannot be said to be truly effective unless they have value at the highest of these levels. Research in public relations can be conducted to systematically plan how to increase effectiveness at each level and to evaluate the extent to which a public relations program has contributed to organizational effectiveness.

The program level refers to individual communication programs such as media relations, community relations, or employee relations that are components of the overall public relations function of an organization. Communication programs generally are effective when they meet specific objectives such as affecting the cognitions, attitudes, and behaviors of both publics and members of the organization.

The functional level refers to the evaluation of the overall public relations function of an organization, which typically includes several communication programs for different publics. Even though individual communication programs successfully accomplish their objectives, the overall public relations function might not be effective unless it is integrated into the overall management processes of an organization and has chosen appropriate publics and objectives for individual programs. The public relations function as a whole can be audited by comparing its structure and processes with those of similar departments in other organizations or with theoretical principles derived from scholarly research—a process called benchmarking. These audits can be conducted through self-review or peer review.

The organizational level refers to the contribution that public relations makes to the overall effectiveness of the organization. Public relations contributes to organizational effectiveness when it helps integrate the organization’s goals and behavior with the expectations and needs of its strategic publics. This contribution adds value—sometimes monetary—to the organization. Public relations adds value by building good, long-term relationships with strategic publics; and research can be used to monitor and evaluate the quality of these strategic relationships.

Research at the societal level refers to evaluations of the contribution that organizations make to the overall welfare of a society. Organizations have an impact beyond their own boundaries. They also serve and affect individuals, publics, and other organizations in society. As a result, organizations cannot be said to be effective unless they are socially responsible; and public relations adds value to society by contributing to the ethical behavior and the social responsibility of organizations.

Formative and Evaluative Research

As Broom and Dozier pointed out, evaluation research alone is of limited value. Evaluation research measures dependent variables only and is conducted after programs have been implemented. In contrast, scientists conduct research both to formulate theories and, after theories are specified, to evaluate and improve those theories. The same procedures should be used in the scientific management of public relations programs. Both formative and evaluative research should be used at all four levels of analysis. Public relations departments often are asked to provide evidence of their value at the societal or organizational level. Too often, however, they respond by conducting evaluation-only research, such as media monitoring, at the program level.

Public relations departments, therefore, should conduct formative research to identify strategic publics, to determine how the organization can communicate best to develop quality relationships with those publics, to develop departmental structures that facilitate communication with strategic publics, and to determine how the organization can align its behavior with the needs of its publics. Public relations departments should conduct evaluative research both to pretest and to post-test those programs, structures, and organizational policies and behaviors.

Quantitative and Qualitative Research

Research in public relations too often has been confined to the extremes of quantitative and qualitative research: large-scale, highly quantified, expensive, and intrusive public-opinion surveys of the general population, or undisciplined informal research with poorly chosen research participants. Neither is very useful in formulating or evaluating programs for specific publics or in developing, maintaining, and evaluating relationships with the publics that need, or are affected by, an organization.

In contrast, public relations professionals should choose from a full menu of quantitative and qualitative methods, each of which might be appropriate in different situations and each of which is equally scientific. Quantitative methods include surveys of and experiments with members of scientifically segmented publics. Qualitative methods include focus groups; structured, semi-structured, or unstructured interviews with key participants; or observations of the behaviors of members of publics or of public relations professionals and other managers conducting their work.

Quantitative and qualitative methods do not work equally well at different levels of analysis or for both formative and evaluative research. For example, qualitative research (especially focus groups) is ideal for formative research at the program level, although it can also be used for evaluation at that level. Quantitative research can be especially valuable for segmenting publics and for evaluating outcomes at the program level. In many cases, both types of research can be used to provide complementary perspectives in both formative and evaluative research.

Process and Outcomes Evaluation

Public relations programs can be evaluated by measuring both the processes of communication programs and the outcomes of those programs. At the program level, measures of processes indicate how often and in what ways someone is communicating or his or her success in placing messages in a medium where it is possible but not assured that members of a public can attend to them. Often in the discussion of public relations metrics, process measures are termed measures of outputs. Program-level processes can be measured by counting whether messages are being sent, placed, or received, such as counts of press releases or publications issued or media placement and monitoring. At the functional level, auditors often measure processes by observing and counting what programs have been conducted, what personnel have been hired, and the amount of effort expended by program personnel.

It is important to point out that measures of communication processes must go beyond measures of products. Too often communication products (such as numbers of press releases or publications) are counted without understanding how those products fit into a strategic plan for communicating with a particular public. Sometimes, counting products might provide a good indicator that a process is being implemented. Too often, however, products are produced because the organization has always produced them and not because they are part of a consistent strategy.

Measuring process indicators can be very useful in evaluating public relations programs, but they must be preceded by research that demonstrates that the processes being measured or counted have had demonstrable and valuable outcomes, both in the short term and the long term. At the program level, we must demonstrate, first, that the processes have had short-term effects on the cognitions, attitudes, and behaviors of both publics and management, that is, what people think, feel, and do. In addition, we need to determine whether those short-term effects continue over a longer period, that is, whether they have any effect on the long-term cognitive, attitudinal, and behavioral relationships among organizations and publics. At the functional, organizational, and societal levels, broad measures of effects of a public relations department on the long-term quality of relationships between the organization and its publics are essential.

Short-term effects are not sufficient. These outcomes can be measured through quantitative survey methods or qualitative questions asked in interviews, focus groups, or similar methods.

A public relations department could validate process measures by conducting outcomes research itself (or contracting with an outside firm), by using secondary research conducted by the public relations department of a different organization, or by analyzing research published by academic researchers.

A public relations department itself could conduct pre-test or post-test research to demonstrate that particular processes regularly have desired effects. For example, an educational relations program could be tested by measuring how much students who participated in the program learned (a cognitive effect) in a pre-test of the program or post-test of an ongoing program. If the students consistently learned from the program, then we could infer that students who participate in the program in the future also will learn; and we could evaluate the program by counting how many programs are held and the number of students who attend.
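The pre-test/post-test logic described above can be sketched in code. The following is a minimal illustration only; the function name and the quiz scores are hypothetical assumptions, not data from the chapter:

```python
from statistics import mean, stdev

def learning_effect(pre_scores, post_scores):
    """Summarize the cognitive effect of a program as the mean
    pre-to-post change per participant (hypothetical quiz scores)."""
    diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return {
        "mean_change": mean(diffs),                          # average learning gain
        "sd_change": stdev(diffs) if len(diffs) > 1 else 0.0,  # spread of gains
        "n": len(diffs),                                     # number of participants
    }

# Hypothetical quiz scores (0-100) for ten students, before and after the program.
pre = [42, 55, 38, 60, 47, 51, 44, 58, 49, 53]
post = [61, 70, 55, 74, 63, 66, 60, 71, 64, 69]
print(learning_effect(pre, post))
```

If such a comparison consistently shows a positive mean change across program cycles, the validated process can then be tracked with simple counts, as the chapter suggests.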

In a community relations program, we might determine whether residents who attend an open house become less likely to call to complain about the organization or are more likely to say they support its objectives. If that outcome occurs, then we could measure the effect of the program by counting how many community residents attend open houses each year. This kind of research to confirm the effects of processes must be repeated from time to time, however—such as every three to five years—to demonstrate that the processes remain effective.
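As a sketch of this kind of outcome validation, one could compare stated support between open-house attendees and a comparison group of non-attendees. All names and survey data below are hypothetical, assumed purely for illustration:

```python
def support_rate(group):
    """Proportion of residents in a group who say they support the
    organization's objectives (1 = supports, 0 = does not)."""
    return sum(group) / len(group)

# Hypothetical yes/no (1/0) survey answers from open-house attendees
# and a comparison group of residents who did not attend.
attendees = [1, 1, 0, 1, 1, 1, 0, 1]
non_attendees = [0, 1, 0, 0, 1, 0, 0, 1]

# A positive difference suggests attendance is associated with support,
# which would justify counting attendance as a process measure later.
effect = support_rate(attendees) - support_rate(non_attendees)
print(effect)
```

A real validation study would of course need proper sampling and, as the chapter notes, periodic replication every few years to confirm the process remains effective.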

Organizations that conduct such validation research could help each other by sharing the results of their research or by collaborating in conducting the research. Perhaps most importantly, public relations departments often can find research to validate communication processes from research conducted for the profession in the academic literature, published in such journals as the Journal of Public Relations Research or Public Relations Review in the United States or the Journal of Communication Management in Europe.

I now will use these distinctions in types of research to provide a roadmap for how to conduct research to scientifically manage public relations programs. These suggestions are organized around the four levels of analysis.

In the next chapter: Public Relations Research at the Program Level


Grunig, J. E. (2008). Conceptualizing quantitative research in public relations. In B. Van Ruler, A. Tkalac Verčič, & D. Verčič (Eds.), Public relations metrics (pp. 88–119). New York and London: Routledge. Republished with the permission of the author.


References

Aldoory, L. (2001). Making health communications meaningful for women: Factors that influence involvement. Journal of Public Relations Research, 13, 163–185.

Berger, B. K. (2005). Power over, power with, and power to public relations: Critical reflections on public relations, the dominant coalition, and activism. Journal of Public Relations Research, 17, 5–28.

Bowen, S. A. (2000). A theory of ethical issues management: Contributions of Kantian deontology to public relations’ ethics and decision making. Unpublished doctoral dissertation, University of Maryland, College Park.

Bowen, S. A. (2004). Expansion of ethics as the tenth generic principle of public relations excellence: A Kantian theory and model for managing ethical issues. Journal of Public Relations Research, 16, 65–92.

Broom, G. M. (1977). Coorientational measurement of public issues. Public Relations Review, 3(4), 110–119.

Broom, G. M., & Dozier, D. M. (1990). Using research in public relations: Applications to program management. Englewood Cliffs, NJ: Prentice-Hall.

Chaffee, S. H. (1996). Thinking about theory. In M. B. Salwen & D. W. Stacks (Eds.), An integrated approach to communication theory & research (pp. 15–32). Mahwah, NJ: Lawrence Erlbaum Associates.

Chang, Y.-C. (2000). A normative exploration into environmental scanning in public relations. Unpublished M.A. thesis, University of Maryland, College Park.

Chen, Y.-R. (2005). Effective government affairs in an era of marketization: Strategic issues management, business lobbying, and relationship management by multinational corporations in China. Unpublished doctoral dissertation, University of Maryland, College Park.

Curtin, P. A., & Gaither, T. K. (2005). Privileging identity, difference, and power: The circuit of culture as a basis for public relations theory. Journal of Public Relations Research, 17, 91–116.

Durham, F. (2005). Public relations as structuration. Journal of Public Relations Research, 17, 29–48.

Ehling, W. P. (1992). Estimating the value of public relations and communication to an organization. In J. E. Grunig (Ed.), Excellence in public relations and communication management (pp. 617–638). Hillsdale, NJ: Lawrence Erlbaum Associates.

Fleisher, C. S. (1995). Public affairs benchmarking. Washington, DC: Public Affairs Council.

Fombrun, C. J. (1996). Reputation: Realizing value from the corporate image. Boston: Harvard Business School Press.

Fombrun, C. J., & Van Riel, C. B. M. (2004). Fame & fortune: How successful companies build winning reputations. Upper Saddle River, NJ: Financial Times/ Prentice-Hall.

Grunig, J. E. (1997). A situational theory of publics: Conceptual history, recent challenges and new research. In D. Moss, T. MacManus, & D. Verčič (Eds.), Public relations research: An international perspective (pp. 3–46). London: International Thomson Business Press.

Grunig, J. E. (2002). Qualitative methods for assessing relationships between organizations and publics. Gainesville, FL: The Institute for Public Relations, Commission on PR Measurement and Evaluation.

Grunig, J. E. (2005). Guia de pesquisa e medição para elaborar e avaliar uma função excelente de relações públicas (A roadmap for using research and measurement to design and evaluate an excellent public relations function). Organicom: Revista Brasileira de Comunicação Organizacional e Relações Públicas (Brazilian Journal of Organizational Communication and Public Relations), 2(2), 47–69.

Grunig, J. E. (2006). Furnishing the edifice: Ongoing research on public relations as a strategic management function. Journal of Public Relations Research, 18, 151–176.

Grunig, J. E., & Grunig, L. A. (1996). Implications of symmetry for a theory of ethics and social responsibility in public relations. Paper presented to the International Communication Association, Chicago (May).

Grunig, J. E., & Grunig, L. A. (2000a). Conceptualization: The missing ingredient of much PR practice and research. Jim and Lauri Grunig’s Research: A Supplement of PR Reporter, 10 (December), 1–4.

Grunig, J. E., & Grunig, L. A. (2000b). Research methods for environmental scanning. Jim and Lauri Grunig’s Research: A Supplement of PR Reporter, 7 (February), 1–4.

Grunig, J. E., & Grunig, L. A. (2001, March). Guidelines for formative and evaluative research in public affairs: A report for the Department of Energy Office of Science. Washington, DC: U.S. Department of Energy.

Grunig, J. E., & Huang, Y. H. (2000). From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes. In J. A. Ledingham & S. D. Bruning (Eds.), Public relations as relationship management: A relational approach to the study and practice of public relations (pp. 23–53). Mahwah, NJ: Lawrence Erlbaum Associates.

Grunig, J. E., & Hung, C. J. (2002). The effect of relationships on reputation and reputation on relationships: A cognitive, behavioral study. Paper presented to the International, Interdisciplinary Public Relations Research Conference, Miami, Florida (March).

Grunig, J. E., & Hunt, T. (1984). Managing public relations. New York: Holt, Rinehart & Winston.

Grunig, L. A., Grunig, J. E., & Dozier, D. M. (2002). Excellent public relations and effective organizations: A study of communication management in three countries. Mahwah, NJ: Lawrence Erlbaum Associates.

Grunig, L. A., Grunig, J. E., & Verčič, D. (1998). Are the IABC’s excellence principles generic? Comparing Slovenia and the United States, the United Kingdom and Canada. Journal of Communication Management, 2, 335–356.

Holtzhausen, D. R., & Voto, R. (2002). Resistance from the margins: The postmodern public relations practitioner as organizational activist. Journal of Public Relations Research, 14, 57–84.

Hon, L. C., & Grunig, J. E. (1999). Guidelines for measuring relationships in public relations. Gainesville, FL: The Institute for Public Relations, Commission on PR Measurement and Evaluation.

Hung, C.-J. (2002). The interplays of relationship types, relationship cultivation, and relationship outcomes: How multinational and Taiwanese companies practice public relations and organization-public relationship management in China. Unpublished doctoral dissertation, University of Maryland, College Park.

Hung, C.-J. (2004). Cultural influence on relationship cultivation strategies: Multinational companies in China. Journal of Communication Management, 8, 264–281.

Jeffries-Fox Associates (2000a). Toward a shared understanding of corporate reputation and related concepts: Phase I: Content analysis. Basking Ridge, NJ: Report prepared for the Council of Public Relations Firms (March 3).

Jeffries-Fox Associates (2000b). Toward a shared understanding of corporate reputation and related concepts: Phase III: Interviews with client advisory committee members. Basking Ridge, NJ: Report prepared for the Council of Public Relations Firms (June 16).