
The Evaluation Revolution in Public Diplomacy

This article is based on dozens of personal interviews with Public Diplomacy leaders, practitioners, and United States Ambassadors in addition to academic research, prior experience as a business strategy consultant, and service with the United States Department of State from 2008 to the present in Yemen, Washington, Uruguay and Oman. The author will serve as the head of the Public Affairs Section in Doha, Qatar from 2016 to 2019.
 
Analytics. Metrics. Evaluation. Impact. These terms are routinely bandied about, increasingly so in Washington. Yet Public Diplomacy (PD) professionals in the field (the ones most directly affected by these concepts and trends in measurement) have historically been reluctant to discuss them, fearing that admitting such loaded, mathematical terms into the equation of their work would undermine the relational, long-term, nuanced public outreach they do day in and day out. 

Granted, the ambitious and elusive goals that Public Diplomacy officers are tasked with achieving, such as “promoting freedom of speech,” “increasing respect for women’s and minority rights,” and “building stable and lasting democracies,” are difficult to measure. We lack the clarity that concrete indicators provide, such as those our Consular colleagues collect and analyze daily (e.g., number of visas issued or average wait times). Nor do we enjoy the enviable marketing budgets that private sector firms deploy to segment their markets and promote their brands accordingly. But my experience as a Public Diplomacy officer in the field has taught me that understanding our audiences and measuring our effectiveness are not luxuries; they are prerequisites for performing our jobs with any degree of success.

In Need of a Culture Shock

What we have now as an organization is not an “evaluation culture,” by which I mean a culture that encourages and supports objective, rigorous, honest analysis of the impact (or lack thereof) of the public and press engagement we labor at every day. At best, what we have is a “reporting culture” of outputs: one that expects knee-jerk replies to ever-increasing and overlapping data calls for numbers of activities organized, hours logged, or dollars spent, with a heart-tugging anecdote or flashy photo tossed in for good effect. These numbers are not irrelevant; indeed, they form an essential part of any serious evaluation strategy. But they stop short of the ultimate goal: knowing whether those hours of toil and precious budget dollars actually left a lasting impact on our intended audience. 

Fortunately, the distinction between outputs and outcomes is beginning to trickle down through revamped training for outgoing Public Diplomacy officers, and by independent accounts the United States is still regarded as a global leader in practicing Public Diplomacy[1] as a discipline and in endeavoring to measure its impact.[2] However, as PD leaders in Washington readily acknowledge, we still have a long way to go. 

In my view, the most serious challenges facing Public Diplomacy evaluation today are that: 

  • current evaluation efforts are dispersed across various offices and bureaus;
  • there is still no consensus on what we ought to be measuring; and
  • there is a misleading conflation of operational evaluation of program logistics (Did everything run smoothly?) with impact evaluation (Did the program achieve the policy impact we intended?).

Why Should We Care?

“Public Diplomacy is badly in need of an evidence base if it is to move into the mainstream of international relations. A broad research agenda—with evaluation at its heart—can help provide that evidence base.”[3] Rigorous evaluation of our programs is crucial in order for interagency policymakers and Department leadership to see Public Diplomacy for what it actually is: an instrument of national power. 

Unfortunately, we are short-changing our evaluation efforts in the field and in Washington, which undermines both our work and our ability to determine whether what we’re doing is working at all. We would be wise to heed the common-sense adage, “Show me your pocketbook, and I’ll show you your priorities.” In September 2014, the United States Advisory Commission on Public Diplomacy found that the Department of State had been severely under-funding research and evaluation, falling well short of the industry standard for international organizations, foundations, and development agencies of five percent of an organization’s budget.[4] In fiscal year 2014, just $8.8 million of the State Department’s $726.5 million PD budget went toward research and evaluation: 1.2 percent. (Meeting the five percent benchmark would have meant devoting roughly $36 million. Within the Educational and Cultural Affairs Bureau, less than 0.25 percent of its budget was reserved for evaluation in fiscal year 2013.) And the trend is downward: the share falls to 0.7 percent in fiscal year 2015.[5]

As Public Diplomacy officers, we are continually told to “do more with less.” And we have become increasingly adept at doing so: co-sponsoring events with like-minded organizations and embassies, employing social media platforms to increase the number of virtual participants and cut down on travel costs, and re-purposing planned events to respond to the latest cable requiring engagement on a niche issue, thus “killing two birds with one stone.” However, this has led to the misconception that PD resources (be they budgets, human capital, or influence) are inexhaustible. 

The inconvenient truth is that time, money, and energy are not unlimited resources. In order to do our work most efficiently, Public Diplomacy officers need the breathing space and time to design outreach strategies based on solid research, the flexibility to choose not to engage on a certain issue in order to focus efforts on high-priority campaigns, and the increased resources needed to measure outcomes and analyze their impacts on issues of foreign policy concern.      

Encouraging Steps

Thankfully, PD leadership in Washington has realized the need for better analytics, and bureaus and offices throughout the “PD family” have started establishing their own evaluation units. 

  • Public Affairs (PA) Bureau has focused its evaluation unit on measuring the Department’s social media presence and effectiveness. In truth, posts and PD offices in Washington have traditionally been stronger at this aspect of evaluation, largely because the outreach occurs online and is thus easier to capture, quantify, and analyze.
  • International Information Programs (IIP) Bureau has also established an analytics unit, again primarily focused on social media, and is encouraging more posts to run their social media outreach through Hootsuite[6] as part of a Department-wide initiative to improve messaging coordination and big picture visibility of the scale and effectiveness of our online outreach.
  • Educational and Cultural Affairs (ECA) Bureau has been doing some form of program evaluation since the 1990s, making it a pioneer within the PD family. However, its evaluations have historically focused more on the logistical and operational side, with an eye toward justifying budgets, given the high level of Congressional interest in its earmark.[7] There has been less evaluation of the impact of ECA programs on foreign policy initiatives,[8] although its team of social scientists has found compelling evidence that study abroad in the United States positively shapes future attitudes toward the United States.[9] Current leadership is also working to consolidate the bureau’s 18 disparate databases into one, to automate country profile reports, and to derive more of its program final reports directly from the Mission Activity Tracker (MAT).
  • Speaking of MAT, the Under Secretary for Public Diplomacy and Public Affairs’ Office of Policy, Planning and Resources (R/PPR) will be rolling out a new version, which will hopefully reduce (though not eliminate) multiple reporting chains. This is a step in the right direction, but only time will tell whether it remains primarily an output-reporting tool or becomes a true impact-evaluation tool. Meanwhile, R/PPR is setting up its own evaluation unit to look at large-scale initiatives; in the process, it hopes to develop a toolbox of evaluation templates that posts can draw from to assess whether their programs were effective.
  • Office of Opinion Research has been conducting focus group research on top policy priorities. Its work on Europeans’ views of the Transatlantic Trade and Investment Partnership (TTIP) yielded some surprising results, which helped PD practitioners sidestep potential pitfalls and reframe their messaging and formats of engagement to better resonate with different segments of European society.[10] This is the kind of audience research and analysis we need to do BEFORE we start outreach, not just as an after-the-fact evaluation.[11]

Given that US PD leadership is in the vanguard of evaluating government-sponsored public outreach programs, some doubt as to the best way forward is natural. In many ways, State Department PD leaders are navigating uncharted waters. In all the above cases, PD bureaus and offices are looking to evaluate large-scale, multi-country or worldwide campaigns and programs, and they will depend on PD officers in the field to evaluate their efforts at the post or country level. Thus, the success of these Department-wide evaluation endeavors, particularly as they roll out to the field, will hinge upon the level of coordination between the various PD bureaus and offices and their willingness to integrate their data onto one unifying platform. The PD officer in the field cannot possibly know what success looks like, nor how to faithfully track and report it, while receiving dozens of mixed signals through a multitude of channels.  

Mental Shifts Wanted

For this revolution in PD evaluation to actually be successful, we must also revolutionize the way we think of Public Diplomacy, and be willing to question our assumptions and unspoken expectations when social science and practical experience suggest otherwise. Four of these required shifts in thinking are outlined below.

Recognize that concrete change takes time. Most private sector and academic literature on public relations and public diplomacy campaigns describes “intermediate outcomes” as those expected in five years, with “long-range outcomes” expected much further out. (“Short-term” outcomes don’t even exist, according to some research.)[12] Furthermore, researchers argue that fostering interest in an issue and achieving attitude shift is usually the most that public affairs activities can hope to produce on their own, and that actual behavioral shifts often require tremendous time and resources over decades of engagement. The graphic below illustrates an alternative, more scientifically grounded approach to achieving behavioral shift.

  [Graphic: Spectrum of PD objectives with realistic timescale]

Moreover, on the issue of polling as an indicator of PD success or failure, the Pew Research Center cautions, “More longitudinal polling data is critical. Individual polls only capture a moment in time. Public opinion is often fickle and contradictory. Only by viewing long-term trends can Public Diplomacy experts identify points for collaboration and leverage.”[13] 

Create a more risk-tolerant culture. The culture must encourage innovation and empower PD officers in the field to take calculated risks. It must not penalize them for a less-than-glowing outcome. (Otherwise, no reporting or evaluation will ever be honest or useful.) This has to come from leadership at the top; the Under Secretary for Public Diplomacy and Public Affairs Richard Stengel’s recent exhortation—“Get caught trying!”—is a step in the right direction. No big gains were ever made by playing it safe.

Reverse the trend of increasing centralization of PD decision making. Trust the experts on the ground to know more than a central office about the audience.

Stop over-strategizing and start meaningful evaluation. When planning documents become so detailed that they are unwieldy and inflexible, they cease to be useful. Why would we put so much effort into imagining and describing every sub-activity we plan to do in the next three years, and then spend next to no time evaluating whether the totality of outreach we conducted in the past year had any real impact on our policy objectives?

The Power of (Useful) Platforms

Given the many players in this evaluation revolution, limited time and resources, and the manifold challenges in capturing and evaluating the full spectrum of work we do as PD officers, what is needed is not one more form to fill out.[14] What we need is a useful platform upon which the work actually happens, and which collects useful data automatically as we use it to get the work done. This is why social media metrics and evaluation are so much more advanced than all other PD evaluation: the platforms that collect the data are the same ones we use to do the actual work of digital outreach. 

But this need not be limited to our outreach in the digital realm. There are lessons to be learned from other successful “platforms”: Why are Craigslist, Amazon, Uber, and Airbnb so successful the world over? Because they designed well-functioning platforms that others can use to get the job done in the real world. Users of these platforms have a concrete, on-the-ground end goal in mind, i.e. find a roommate, purchase a bike, get a ride, or visit a foreign country. Meanwhile, the developers of the platform are collecting valuable data from each click, post, sale, and customer review. The metrics collection happens automatically, and analysis can happen at a big-picture or detailed level at any time.

What if we could create a platform that actually helps PD officers get the job done, rather than just a website for filing post facto reports of activities completed? We could transform the current Public Diplomacy Resource Plan (PDRP) into more of a dashboard: a one-stop shop for PD officers in the field to research the local media and public opinion climate; set their goals, audiences, and benchmarks; view the totality of their human and physical resources and allocate them accordingly; and measure their progress over time. 

The beauty of such a platform is that it brings added value to the user (the PD officer in the field who uses it to learn, set goals, assign tasks, and measure progress) and to the provider (Washington PD stakeholders, who could all view data and run reports at any time). If the platform were well-designed, automated, intuitive, and flexible, it could eliminate the need for duplicative reporting, urgent data calls, and countless hours misspent on initiatives that simply don’t work. It would also eliminate the need to write up activities after the fact in emails, forms, MAT, and stove-piped SharePoint sites (a process that primarily serves central budgeting authorities and is of very limited value to officers in the field, which is why these existing vehicles never capture the totality of PD work being done). There are many examples that Department of State PD could employ directly or look to for inspiration, some of which are already mobile-device compatible: Sandglaz, Wunderlist, and NetVibes, to name a few.[15]

Now is the time to think seriously about such a platform, as we are already re-imagining how we get our work done in a fast-paced, interconnected world, and how we measure performance. David Steven writes: 

Are we fit for purpose as an organization…? In other words, are we organized to deliver concrete outcomes as cost effectively as possible? These questions cannot be answered in isolation. They require an understanding of all aspects of an organization’s environment, strategy and operations. A ‘systems approach’ is therefore vital—where evaluation is one element in an integrated approach to managing organizational performance.[16] 

Some Parting Thoughts

The winds of change are coming, whether we are prepared for them or not. However, there is no cause to fear or resist this change. Rather we, as Public Diplomacy professionals, owe it to ourselves, our profession as a whole, and to our nation to honestly ask ourselves whether what we’re doing is making a real impact. The global “marketplace of ideas” is an increasingly loud, complex, and volatile space. With violent non-state actors like ISIS projecting power both online and on the ground and recruiting global youth, and our age-old competitors China and Russia spending considerable resources both in their own countries and abroad in order to shape public opinion in their favor, our ability to analyze the effectiveness of our Public Diplomacy efforts will determine our success or failure in the global arena. 

This is an exciting time to be a Public Diplomacy officer, with revolutionary changes on the horizon that will undoubtedly transform not only the way we measure, but also how we conceptualize what we do and what our role should be as we promote our nation’s and fellow citizens’ interests abroad. It benefits all of us, particularly those in the field, to get involved, speak up, and share our ideas now on how we can do our work more effectively, what tools and support we need, and how we ought to measure Public Diplomacy success in the 21st century.*  

 


[1] Not to be confused with propaganda, misinformation, and unidirectional information campaigns, Public Diplomacy (PD) is a two-way form of communication that aims to promote dialogue and mutual understanding. In the case of the United States, PD encourages uncensored contact between foreign publics and US publics, in addition to US government spokespersons. For a more in-depth analysis of unidirectional versus bi-directional information campaigns, see Juyan Zhang, “Making Sense of the Changes in China’s Public Diplomacy: Direction of Information Flow and Messages,” Place Branding and Public Diplomacy, Volume 4, Issue 4 (November 2008), 303-316.

[2] Robert Banks, Ph.D., “Evaluating Public Diplomacy,” USC Center on Public Diplomacy, Annenberg School for Communication and Journalism, Summer Institute 2014. The Dutch and Swedish governments have also contributed significantly to this field, as have certain EU initiatives designed to measure the impact of outreach efforts on EU policy priorities.

[3] David Steven, “Evaluation and the New Public Diplomacy” (Presentation to The Future of Public Diplomacy 842nd Wilton Park Conference. West Sussex, United Kingdom, March 2, 2007), 4. Mr. Steven’s presentation drew on recent work for the United Kingdom’s Public Diplomacy Board, scoping a system for measuring public diplomacy performance for the Foreign and Commonwealth Office, British Council, and BBC World Service; however, it was not tied to those organizations or specific to the UK context. Instead, the presentation provided an overview of the role evaluation can play in developing a new approach to public diplomacy.

[4] United States Advisory Commission on Public Diplomacy, “Data-Driven Public Diplomacy: Progress Towards Measuring the Impact of Public Diplomacy and International Broadcasting Activities,” September 16, 2014, 19.

[5] Ibid., 19.

[6] The State Department’s Office of Policy, Planning, and Resources for Public Diplomacy and Public Affairs (R/PPR) is coordinating this project with IIP.

[7] Banks, “Evaluating Public Diplomacy.”

[8] United States Advisory Commission on Public Diplomacy, “Data-Driven Public Diplomacy,” 12.

[9] “ECA: New Paradigms for Exchanges” (Presentation to Public Affairs Officer course, Washington, DC, August 11, 2015).

[10] Office of Opinion Research, “Views of TTIP: France, Sweden, United Kingdom.” (Presentation based on focus groups conducted in March 2015).

[11] Joshua S. Fouts (ed.), “Public Diplomacy: Practitioners, Policy Makers, and Public Opinion” (A Report of the Public Diplomacy and World Public Opinion Forum held April 9-11, 2006 in Washington, DC, and sponsored by the Annenberg Foundation Trust at Sunnylands in partnership with the USC Annenberg School for Communication, the USC Center on Public Diplomacy, and the Pew Research Center), 18.

[12] Steven, “Evaluation and the New Public Diplomacy,” 9.

[13] Fouts, “Public Diplomacy: Practitioners, Policy Makers, and Public Opinion,” 17.

[14] In recent years required planning and reporting documents (such as the PDCC, PDIP, PDRAM, WebRABIT, ICS, MSRP, MRR, MSP, etc.) have proliferated or increased in complexity and length, while true impact evaluation has yet to become part of our mainstream experience. 

[15] See http://www.cpcstrategy.com/blog/2015/01/organization-tools-digital-marketing/ for a more comprehensive list of current online platforms that organizations, companies, and nongovernmental organizations are using to improve productivity, planning, and evaluation.

[16] Steven, “Evaluation and the New Public Diplomacy,” 2. (Emphasis my own.)

* Author’s Note: I would like to thank the Department of State and the Council of American Ambassadors for giving me the tremendous opportunity to serve as a Davis Fellow this past year and a special thank you goes to the dozens of current and former United States Ambassadors, high-level officials, policy leaders, academics, and current and former Public Diplomacy officers who all gave of their time to meet with me and share their experiences, advice, and thoughts on the practice of a career and a mission we all hold dear. As most of these conversations were conducted “on background,” in order to encourage candor, specific attribution is not given; but to all of you who contributed to this article, I thank you. The views expressed in this article are those of the author and not necessarily those of the US Government.

Author: Kathryn W. Davis Public Diplomacy Fellow, 2014-2015