Data Visualization Approaches for Program Evaluation (and Beyond)

Simone Parrish

CCP | Global Repository Director
Dataviz network map detail

Network map detail from slides from "DataViz! Tips, Tools, and How-tos for Visualizing Your Data" (#14ntcdataviz)

When you hear the phrase “program evaluation findings,” are you bored already? Most people—even within the evaluation field—perceive evaluation as dry. The major output of an evaluation is often a weighty report that gets read once (if at all) before it begins its long-term dust-collecting destiny. Adding some good data visualization to the mix can really wake people up.

From 2001 to 2011, I worked at Innovation Network, a small Washington, DC-based nonprofit that provides program evaluation consulting for other nonprofits and funders. Innovation Network is deeply invested in making evaluation more engaging and useful. At the 2014 Nonprofit Technology Conference in March, I reconnected with some colleagues from Innovation Network: Johanna Morariu and Ann Emery presented (with Andrew Means of Data Analysts for Social Good) a session entitled “DataViz! Tips, Tools, and How-tos for Visualizing Your Data” (handout | slides). In the last year or two of my tenure at Innovation Network (when I was the knowledge manager, webmaster, and primary editor), we had begun experimenting with data visualization as a regular part of our evaluation approaches. It was heartening to be reminded of our earlier work, and to see how far Johanna and Ann have taken their dataviz expertise.

Data Placemats: Talking Through the Data

One of Innovation Network’s first dataviz approaches was to hold participatory data analysis meetings. Innovation Network has long been a proponent of participatory evaluation, but participation was skewed heavily toward the beginning of an effort, during evaluation planning. No matter how good an evaluator is at data collection and analysis, it is the program staff who know a program best—and they often see a finding or a trend in a different light than the evaluators.

Data Placemats

From “Participatory Analysis: Expanding Stakeholder Involvement in Evaluation” (430KB .pdf), Innovation Network, Inc., 2011, used under a Creative Commons Attribution-Noncommercial-Sharealike license.

However, going through endless cross-tabs made stakeholders’ eyes cross. To provide a more engaging and comprehensible framework for these meetings, we created “data placemats” (see Fig. 1). Each stakeholder at the meeting had a set of these placemats in front of them. The evaluators walked through the data, guiding the discussion with questions like “What surprises you about the data?”, “What factors may explain some of the trends we are seeing?”, and “Does this lead you to new questions?”

Innovation Network’s whitepaper on participatory data analysis notes, “As a result of the participatory analysis meeting, the quality of evaluation findings and recommendations was strengthened, stakeholder buy-in for findings/recommendations was increased, and the likelihood of evaluation use improved.”

Seeing Themes in Qualitative Data: Wordle

We also experimented with using word clouds as preliminary analysis tools for qualitative data. Taking a clump of qualitative data—a set of interview transcripts, or the open-ended answers to a survey, for example—and pasting it as plain text into Wordle (http://www.wordle.net/create) produces a word cloud in which each word’s size reflects how often it appears. For example, this is a Wordle of a draft of this post.

Wordle: Evaluation Dataviz

Word cloud, created with Wordle from the text of a draft of this post. "Evaluation", "data", "participatory", and "analysis" are some themes that become clear through Wordle's frequency processing.

There’s definitely cleanup involved, and we certainly didn’t make any claims about statistical validity based on the Wordles, but it’s a very fast and satisfying way to see what themes pop out. For the evaluators, it can be a big help with the first stages of coding qualitative data for analysis. For the stakeholders, it can provide a powerful visual soon after data collection ends, so they don’t have to wait for the analysis to be complete.
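The tallying a word-cloud tool does before sizing its terms is easy to approximate yourself. Here is a minimal Python sketch of that frequency-counting step; the stop-word list and the sample "survey responses" are my own illustrations, not taken from Wordle or from our evaluation data:

```python
from collections import Counter
import re

def word_frequencies(text, stopwords=None):
    """Count word frequencies, ignoring case and common stop words,
    mimicking the tally a word-cloud tool performs before sizing terms."""
    stopwords = stopwords or {"the", "a", "an", "and", "of", "to", "in",
                              "it", "is", "we", "our"}
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

# Illustrative stand-in for open-ended survey answers pasted as plain text
responses = """
We need more evaluation capacity. Evaluation data is hard to analyze.
Participatory analysis meetings helped our staff trust the data.
"""
freqs = word_frequencies(responses)
print(freqs.most_common(3))
```

The same cleanup the post mentions applies here: tuning the stop-word list and merging variants ("analyze"/"analysis") is what turns a raw tally into something readable.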

The Nonprofit Evaluation Field, Visualized

State of Evaluation 2012 detail

Detail from Innovation Network's "State of Evaluation 2012" (2.6MB pdf), © 2012 Innovation Network, Inc., used with permission

Another Innovation Network initiative is the “State of Evaluation” project. In 2010 and 2012, Innovation Network surveyed hundreds of U.S. nonprofits about their evaluation practices, capacities, and needs. Rather than putting the findings in a text-heavy report, Innovation Network went the extra mile to think carefully about what kinds of data visualizations would best reveal the nuances of the data. The result is an insightful and engaging look at a field that could otherwise have been as dry as a mouthful of sawdust.

Using the Tools You Have

One of the things I appreciated most about the “DataViz!” session was that a portion of it was spent demonstrating how to make Excel charts look good. It’s refreshing to have a conference session in which the presenters recognize the very real constraints faced by most nonprofits—not assuming, for example, that a $20,000 software-as-a-service subscription is within reach. We’ve all seen—and most of us have used—the default charts produced by Excel. They present data accurately, but generally speaking they can’t be described as “elegant” or “engaging.” Ann Emery’s quick demonstration (and her free video tutorials) of simple changes—like removing tick marks, adding emphasis with color, and labeling data directly—gave me hope for a brighter dataviz future.
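Those decluttering moves aren't Excel-specific; they're general chart-design principles. As an illustration, here is roughly how the same cleanup (muting the frame, dropping tick marks, using one emphasis color, and labeling bars directly) looks in Python's matplotlib. The data and styling choices below are mine, invented for the sketch, not taken from the session:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Illustrative (made-up) data: share of nonprofits reporting each practice
categories = ["Logic models", "Surveys", "Interviews", "Dashboards"]
values = [62, 78, 45, 21]

fig, ax = plt.subplots()

# Gray by default; a single color draws the eye to the bar the story is about
colors = ["#bbbbbb"] * len(values)
colors[3] = "#d95f02"
bars = ax.bar(categories, values, color=colors)

# Declutter: drop the box and tick marks that default charts add
for side in ("top", "right", "left"):
    ax.spines[side].set_visible(False)
ax.tick_params(length=0)
ax.set_yticks([])  # direct labels will replace the y-axis entirely

# Label each bar directly instead of making readers trace gridlines
for bar, value in zip(bars, values):
    ax.text(bar.get_x() + bar.get_width() / 2, value + 1,
            f"{value}%", ha="center")

fig.savefig("practices.png")
```

The principle is the same one Ann demonstrated in Excel: remove everything the reader doesn't need, then add back only what carries the message.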