The Application of Best Practices in PR Measurement and Evaluation
Public relations measurement and evaluation are essential elements in the creation of successful communications programs. While there is wide acknowledgment that measurement and evaluation are important, there is little understanding among public relations practitioners of the best and most appropriate ways to design and implement an effective system for measurement and evaluation. This paper presents a series of processes, procedures and considerations that should be applied in the design of research programs. These recommended approaches are based on the concept of applying “best practices” to public relations research activities.
Background
Public relations research has grown exponentially over the past 25 years. Currently, approximately two dozen companies1 offer services that measure and evaluate public relations activities. During this period, the industry has seen growth that is best represented by the multitude of companies specializing in this area, as well as by a growing academic literature in the field. Yet, even with the increased attention paid to the discipline, significant variation persists across the range of approaches to public relations measurement and evaluation. This variation results in a lack of standard measures that can be used to gauge the success of a public relations program, as well as in an uneven overall quality of the research being conducted.
The focus of this paper is to present a set of best practices that can serve as a foundation for the public relations profession and that has the potential to evolve into a standard set of measures against which programmatic diagnoses can be made and the success or failure of a program can be judged.
Public Relations Journal Vol. 1, No. 1, Fall 2007 © 2007 Public Relations Society of America
David Michaelson, Ph.D., is President of Echo Research, USA, DavidM@EchoResearch.com. Sandra Macleod is Chief Executive Officer of Echo Research, UK, SandraM@EchoResearch.com.
1 This only includes companies headquartered in the United States and in the United Kingdom. Other companies specializing in this area operate in other regions of the world.
A Brief History of Public Relations Research
The formal origins of public relations research can be traced to the 1950s. During that period, a company called Group Attitudes Corporation was acquired by Hill & Knowlton.2 Group Attitudes Corporation operated as a standalone company that functioned as a captive research arm of the parent agency. Its work included research for the Tobacco Institute,3 as well as for other Hill & Knowlton clients. The primary focus of this research, judging from a review of several published reports, was to assess reaction to communications messages and vehicles using processes that appear similar to the research methods employed by the advertising industry during the same period. This model was followed over the next 25 years with the establishment of research arms at several other public relations agencies. During this period, the primary function of these agency-based research departments remained similar to the work initially conducted by Group Attitudes Corporation.
During the late 1970s, however, it became increasingly apparent that public relations differed considerably from advertising in its ability to be measured and evaluated. At the time, advertising testing was dominated by a variety of measurement and evaluation systems, of which the “day after recall” (DAR) method, popularized by Burke Marketing Research in its work with Procter & Gamble, was one of the most common. These advertising-focused methods assumed that the message was completely controlled by the communicator. Therefore, the ability to test message recall and message efficacy was highly controllable and, in theory, the results were projectable to what would occur if the advertising were actually placed.
As early as the 1930s, advertisers and their agencies were also developing methods that linked exposure and persuasion measures to actual store sales. In essence, testing, measurement and evaluation systems became an integral part of the advertising industry. These systems became so institutionalized by the middle of that decade that an academic journal,4 as well as an industry association,5 was established.
With the recognition that public relations needed a different set of measurements, the senior management at several major public relations agencies charged their research departments with the task of finding a way to measure the effectiveness of public relations activities. While a number of experiments were undertaken at that time, the primary benefit derived from this experimentation was a heightened awareness of the overall value of measuring public relations. This heightened awareness, along with advances in specific technologies, led to the founding of a number of research companies that specialize in measuring the effectiveness of public relations activities. For the most part, these companies specialize in the content analysis of media placements, with little emphasis placed on programmatic or diagnostic research or on research that evaluates the impact of public relations activities on the target audiences for these communications.
2 Group Attitudes Corporation was founded by John and Jane Mapes in 1950 and acquired by Hill & Knowlton in 1956; c.f. The New York Times, September 11, 1990.
3 C.f. Legacy Tobacco Document Library, University of California, San Francisco.
4 Journal of Advertising Research was founded in 1936.
5 Advertising Research Foundation was founded in 1936.
The legacy of these companies, as well as the services they provide to the public relations industry, is noteworthy, particularly when the history of the profession is taken into consideration. Nonetheless, these organizations have typically failed, in the fullest sense, to advance either the science or the art of public relations measurement and evaluation. The question arises as to why this is the case.
The Current State of Public Relations Measurement
Companies specializing in public relations measurement and evaluation have traditionally focused on evaluating only the direct outputs of public relations, most commonly the media or press coverage that results from media relations activities. The primary limitation of these companies is their narrow focus on an intermediary in the public relations process, the media, rather than on the target audience for these communications activities.
This paper will argue that relying strictly on evaluations of intermediaries in the communication process fails to create effective measurement and evaluation systems that provide a diagnostic appraisal of communications activities which, in turn, can lead to enhanced communications performance. The failure to include diagnostic measures ignores one of the fundamental “best practices” in communications research and is the key reason why public relations measurement and evaluation has failed to progress significantly over the past 25 years.
The Concept of Best Practices
The concept of “best practices” originated in the business literature at the dawn of the industrial era.6 The underlying idea is that, while there are multiple approaches that can be used to achieve a task or a goal, there is often a single technique, method or process that is more effective than the others in reaching an established goal. As noted in a commentary from PricewaterhouseCoopers, “Best practices are simply the best way to perform a business process. They are the means by which leading companies achieve top performance, and they serve as goals for other companies that are striving for excellence.”7
6 C.f. Taylor, Frederick, The Principles of Scientific Management. New York: Harper & Brothers Publishers, 1919.
7 Source: PricewaterhouseCoopers website on Global Best Practices, www.globalbestpractices.com/Home/Document.aspx?Link=Best+practices/FAQs&Idx=BestPracticesIdx
While the concept of best practices is often applied to the operations of a specific company, the logical extension of best practices is its application to an overall industry through the establishment of standards against which assessments can be made.
Best Practices in Public Relations Research
In public relations research, there are nine best practices that can serve as the foundation for establishing a standardized set of measures for public relations activities. These practices are divided between two broad areas: 1) the use of specific research methods and procedures and 2) the application of measures that examine both the quality and the substance of public relations activities.
I. RESEARCH METHODS & PROCEDURES
There are three research methods and procedures that are an essential part of best practices in public relations research. These methods and procedures include every key step in the research process from the inception of the project through the delivery of the research report itself. These three steps are:
1. Setting clear and well defined research objectives;
2. Applying a rigorous research design that meets the highest standards of research methods and ensures reliable research results; and
3. Providing detailed supporting documentation with full transparency.
1. Clear and Well Defined Research Objectives
Setting clear and well defined research objectives is the critical first step in the public relations research process. Unfortunately, it is the aspect of best research practices that is most often overlooked or not given the attention it requires to create an effective and reliable measurement and evaluation system. Clear and well defined objectives are particularly critical because they function as the foundation upon which the rest of the research program rests.8 The key to setting objectives that effectively contribute to a measurement and evaluation program meeting best standards is answering the following five questions.
• Is the information need clearly articulated?
– In order for any form of measurement and evaluation to be effective, it is essential that the information needs be specific and unambiguous. A generalized information need such as “How well did the program perform?” is unlikely to serve as an effective basis for research-based decisions. The more appropriate questions are: “What is the level of awareness of the product, issue or situation?” “How knowledgeable is the target audience about the material being communicated?” “Is the information relevant to the target audience?” “How has the attitude of the audience been affected by exposure to communications?” “Is the target audience willing to take any form of action as a result of exposure to the communications program?” These questions result in specific information objectives that can be reliably measured and that provide data that can be used to improve communications performance.
8 See: Stacks, Don W., Primer of Public Relations Research. New York: Guilford Press, Chapter 2.
• Are the target audiences for the communications program well defined?
– It is essential to understand as precisely as possible who the target audience is.9 This is important for several reasons, the foremost of which is practical: in order to conduct research that reliably measures and evaluates a communications program, those to whom the program is directed must serve as the source of information about the audience. A poorly defined audience is typically one that is so broad in scope that it includes people unlikely to express an interest or need. An example of an audience that may be too broad is “women aged 18 to 49 years old.” By contrast, a more narrowly defined audience is “mothers of children who are 12 years old or younger.” While the former group includes the latter, it is less precise and, depending on the product or service, less likely to yield the same information.
• Are business objectives being met through the information gathered from the research?
– The central reason for conducting any type of measurement and evaluation research is to address a business issue or concern. Consequently, as the objectives for the research are being established, it is critical that a detailed assessment of the business take place as a first step in the process. For example, if the issue is assessing the introduction of a new product category, then measuring awareness is a highly relevant and essential measure. However, if the business issue concerns a prominent national brand, then purchase intent may be a more relevant and important measure to include in the research program. The more closely research is tied to delivering business objectives, the more valuable and strategic it will be.
9 We can no longer get away with measuring publics; they are too heterogeneous in a global business environment that is so clearly interconnected via the Internet.
• Is there a plan for how the findings from the research will be used?
– Just as it is important to have a clear understanding of the research objectives, it is equally essential to understand the types of actions that can be taken as a direct result of the information gathered in the research process. The intent is to create research that functions as an aid in the decision-making process, rather than as an end in and of itself. For this reason, it is best to consider likely internal users or “customers” for the research findings at the outset (e.g., marketing, investor relations, new product development, human resources, market or business units). Human nature being what it is, it is also advisable to secure their involvement and buy-in first, so that the findings are welcomed and applied constructively rather than as an afterthought. Objective “listening” research and the insights derived from it are tremendously powerful in terms of internal education for management and appreciation for the strategic focus of communications.
• Is the organization prepared to take action based on research findings?
– Just as important as having a plan for applying the research is having an understanding of the actions the organization is willing to take based on the findings. If the senior decision makers are unwilling to undertake specific actions, then creating a research program that measures and evaluates that action will have little value to the organization and may actually be counter-productive to the organization’s long-term goals and objectives.
2. Rigorous Research Design
Once objectives have been established, it is important to design research that both supports the objectives and is rigorous enough to provide usable and actionable information. This rigor not only assures reliable research results, but also provides a foundation for measuring and evaluating communications performance over time. Again, a series of nine questions needs to be addressed in order to assure that rigorous research designs are applied.
• Is the sample well defined?
– The research sample, just like the target audience, needs to be defined precisely in order to make sure that it is the actual target audience for communications that is included in the research. The recommended approach is to screen potential research respondents for these defining characteristics before the start of the study. These defining characteristics can be demographic (e.g., age, gender, education, occupation, region), job title or function, attitudes, product use or any combination of these items. However, while it is important to define the sample precisely, caution must also be taken to make sure that key members of the target group are included in the sample. In some instances, samples require minimum quotas of specific types of respondents in order to assure that minimally analyzable segments are included in the study.
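The screening-and-quota approach described above can be sketched in code. The defining characteristics, quota cells and targets below are hypothetical illustrations, not recommendations from this paper:

```python
# Hypothetical sketch: screening respondents on defining characteristics
# and enforcing minimum quotas so key segments remain analyzable.

def screen(respondent, quota_counts):
    """Accept a respondent if they match the (hypothetical) target
    definition, and track the quota cell of each accepted respondent."""
    # Defining characteristics (invented): mothers aged 18-49.
    if not (18 <= respondent["age"] <= 49):
        return False
    if not respondent["has_child_under_12"]:
        return False
    # Record the respondent's quota cell (here, region).
    cell = respondent["region"]
    quota_counts[cell] = quota_counts.get(cell, 0) + 1
    return True

def quotas_met(quota_counts, quota_targets):
    """True once every cell reaches its minimum quota."""
    return all(quota_counts.get(c, 0) >= n for c, n in quota_targets.items())

# Usage with invented respondents:
targets = {"North": 1, "South": 1}
counts = {}
sample = [
    {"age": 34, "has_child_under_12": True, "region": "North"},
    {"age": 55, "has_child_under_12": False, "region": "South"},  # screened out
    {"age": 29, "has_child_under_12": True, "region": "South"},
]
accepted = [r for r in sample if screen(r, counts)]
print(len(accepted), quotas_met(counts, targets))  # 2 True
```

In practice the quota check would run during fieldwork so that recruitment continues until every minimally analyzable segment is filled.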
• Are respondents randomly selected?
– One of the most significant and immeasurable biases that can occur in a study is the exclusion of potential respondents who are difficult to reach and therefore are less likely to participate in the study. Special attention needs to be paid to ensure that these individuals have an equal opportunity to participate. This is typically accomplished through multiple contacts over an extended period with a random sample or replica of the group being studied. It is also essential to be sensitive to the audience being studied and appropriately adapt the ways that responses to questions are secured. Examples of these very specific groups of individuals that require increased sensitivity are young children or other groups where there are special laws and regulations guiding data collection, night shift workers, ethnic minorities, and disabled or disadvantaged groups).
• Are appropriate sample sizes used?
– Samples need to provide reliability in two distinct ways. The primary need is to make certain the overall sample is statistically reliable. The size of the sample can vary considerably, from a few hundred respondents to over 1,000 individuals. The decision to use one sample size over another is contingent on the size of the overall population represented by the sample, as well as on the number of subgroups that will be included in the analysis. For example, a national study of Americans typically requires a sample of 1,000 respondents. This assures geographic and demographic diversity, as well as adequately sized subgroups between which reliable comparisons can be made. By contrast, a survey of senior executives may require only 200 to 400 completed interviews in order to meet its objectives.
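The arithmetic behind these sample-size choices can be illustrated with the standard margin-of-error formula for a proportion at the 95 percent confidence level. This is a general statistical sketch, not a calculation taken from the paper:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion;
    p = 0.5 gives the most conservative (widest) margin."""
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of 1,000 yields roughly a +/-3.1-point margin,
# while 400 completed interviews yield roughly +/-4.9 points.
print(round(margin_of_error(1000) * 100, 1))  # 3.1
print(round(margin_of_error(400) * 100, 1))   # 4.9
```

The formula also shows why precision improves slowly: quadrupling the sample only halves the margin of error.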
• Are the appropriate statistical tests used?
– Survey research is subject to sampling error. This error is typically expressed as a range of accuracy. A number of different standards can be applied to determine this level of accuracy, as well as to serve as the basis for comparing findings between surveys. The most common standard is the 95 percent confidence level. This standard assures that in 19 out of 20 cases the findings will be reliable within a specific error range for both sampling and measurement. This error range varies with the size of the sample under consideration, a larger sample providing a correspondingly smaller range of error. With that standard in place, a number of different statistical tests can be applied. The key is to select the proper test for the situation being tested.
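As one concrete illustration of selecting a proper test, a two-sample z-test for proportions can determine whether a change between two survey waves exceeds sampling error at the 95 percent level. The awareness figures below are invented for illustration:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled proportion under the null hypothesis."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: awareness rose from 40% to 46% between two
# survey waves of n=1,000 each. |z| > 1.96 means the change is larger
# than sampling error at the 95 percent confidence level.
z = two_proportion_z(460, 1000, 400, 1000)
print(abs(z) > 1.96)  # True
```

A smaller observed change, or smaller samples, could leave the difference within the error range, in which case no real movement should be claimed.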
• Is the data collection instrument unbiased?
– A questionnaire can impact the results of a survey in much the same way as the sample selection procedures. The wording and sequence of questions can significantly influence results. Therefore, it is essential to make sure that wording is unbiased and the structuring of the questionnaire does not influence how a respondent answers a question. Paying attention to this concern increases the reliability of the findings and provides a better basis for decision making.
• Are the data tabulated correctly?
– Special care needs to be taken to make sure that the responses from each questionnaire are properly entered into an analytic system so that data from the entire study can be reliably tabulated. Data should preferably be entered into a database with each questionnaire functioning as an independent record. This allows for subsequent verification if errors are detected, provides the greatest analytic flexibility and significantly enhances accuracy. Spreadsheets do not provide the same analytic flexibility as specialized statistical packages (e.g., SAS or SPSS), and it is significantly harder to detect errors when using that type of data entry system.
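The one-record-per-questionnaire principle can be sketched with an ordinary relational database; the table layout and responses below are hypothetical, not a schema from the paper:

```python
import sqlite3
from collections import Counter

# Each completed questionnaire is stored as one independent record,
# which permits later verification of individual entries and flexible
# re-tabulation of the full data set.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE responses (respondent_id INTEGER PRIMARY KEY, awareness TEXT)"
)
conn.executemany(
    "INSERT INTO responses (awareness) VALUES (?)",
    [("aware",), ("aware",), ("unaware",)],
)

# Tabulate from the complete record set rather than a hand-built summary.
tally = Counter(row[0] for row in conn.execute("SELECT awareness FROM responses"))
print(tally["aware"], tally["unaware"])  # 2 1
```

Because every questionnaire remains addressable by its record ID, a suspect entry can be traced back and corrected without re-keying the study.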
• Are the data presented accurately?
– Assuming the data are tabulated properly, it is equally important that they be presented in a manner that accurately represents the findings. While data are often presented selectively, the omission of data should not be allowed if it produces misleading or inaccurate results. Consequently, the full data set needs to be available, even if the data are only selectively presented.
• Is qualitative research used appropriately?
– Well-executed qualitative research (focus groups, individual in-depth interviews, and participant observation) can provide unique insights that are not available from other sources. While these insights are invaluable, this form of research is not a substitute for survey data. Qualitative research is particularly useful in three applications: developing communications messages, testing and refining survey research tools, and providing insights into and deeper explanations of survey findings.
• Can the study findings be replicated through independent testing?
– If research is properly executed, reproducing the study should yield similar results. The only exception is when significant communications activity that impacts attitudes and opinions has occurred in the interim. Unless a study is constructed so that it can be replicated reliably, it will be difficult to produce studies that can be compared reliably and that demonstrate the actual impact of communications activities.

