October 20, 2014

Who’s Behind the Evaluation Curtain

Photo caption: Grant money is precious, says the Robert Wood Johnson Foundation’s David Colby, “and we want to use it in the most effective ways possible.”

The Duke Endowment gives tens of millions of dollars a year to prevent child abuse, expand health care, strengthen rural churches, and improve higher education. It’s William Bacon’s job to figure out how well that money is spent.

As the foundation’s director of evaluation, Mr. Bacon helps program officers shape studies of the work they support, tries to find new ways to solicit grantee feedback, and serves as a liaison to consultants who research what’s working and what’s not.

During a recent week, he shuttled between meetings and conference calls focused on many topics: assessing the advocacy work of the nonprofit Nurse-Family Partnership, a Duke grantee; clarifying the way the foundation describes its work on its Web site; and encouraging universities that get money from the Duke Endowment to share lessons on reducing their campus energy costs.

It’s a “wonky” job description, says Mr. Bacon, that doesn’t roll off the tongue at cocktail parties. But he summarizes it like this: “I work across different program areas to help staff and management understand the effectiveness of our work.”

Evolving Jobs

With pressure building for philanthropists to prove their money helps society, jobs like Mr. Bacon’s have become more common. Many big foundations now employ one person who oversees all the evaluation work, although approaches to managing evaluation go in and out of fashion.

Last year the Ford Foundation eliminated its central evaluation position in favor of a structure that puts program and regional office managers in charge.

But the John D. and Catherine T. MacArthur Foundation recently created a full-time chief evaluator position, and so too have newer funds like the Margaret A. Cargill Philanthropies and the MasterCard Foundation.

The jobs have evolved in recent years: Instead of focusing mostly on individual grants, evaluation work now concentrates on helping to shape and assess programs as well as the fund’s overarching strategy.

Ideally, evaluators and the work they do can help foundations rejigger programs to make them more effective, improve communication with the foundation’s grantees, and even influence what other grant makers and governments support.

“Evaluation gives us the data and information to help us improve—to improve programs, to improve the outcomes of grantees, to improve the quality of staff and their ability to think quantitatively,” says Risa Lavizzo-Mourey, president of the Robert Wood Johnson Foundation. “What we’ve learned from evaluation has made us a better institution.”

Evaluation’s Purpose

Done poorly, evaluations can burden charities with busy work, only to produce research that ends up collecting dust.

“Many of us in the nonprofit community worry, To what end are we doing evaluations?” says Mark Loranger, president of Chrysalis, a Los Angeles nonprofit that helps find jobs for low-income and homeless people, including those with criminal backgrounds. “We have to face the reality that this does divert money from a very limited set of resources, and at a certain point you have to wonder if it’s really achieving the mission of the agency.”

Or whether it’s achieving the mission of the foundation. Maurice Miller, a veteran of antipoverty groups and a trustee of the California Endowment and the Hitachi Foundation, says that grant makers too often conduct evaluations simply to confirm what they already believe. Consultants, who are paid by the foundations, can be too eager to please to deliver bad news.

“Evaluations are rarely used to make decisions,” says Mr. Miller, who now heads the Family Independence Initiative, a charity that encourages poor people to support one another in the climb to the middle class.

Making Adjustments

To ensure that evaluations collect information that helps grant makers and nonprofits shape their work, evaluators, foundation leaders, and charity officials say, the studies must be seen not as ominous final exams but as checkups that allow nonprofit staff members to make midcourse corrections when they’re needed.

“We’re here to help program folks get the information they need so they can see if they’re on the right track,” says Mr. Bacon. “It’s not about extracting information from them so we can judge if they’re performing well; it’s about working with them so they can clearly define what they want to achieve.”

The Duke Endowment-backed evaluation of a seven-year project to improve the health of overworked clergy members is a good example, says Rae Jean Proeschold-Bell, an assistant professor at Duke University who serves as research director for the project.

More than 1,000 religious leaders across North Carolina signed up, but only a third were enrolled at a time so the evaluators could apply each group’s lessons to later groups.

When researchers surveyed the first group last year, they found an encouraging drop in participants’ weight and blood pressure. But participants weren’t showing healthy levels of HDL, a “good” type of cholesterol that’s associated with exercise.

So the “wellness advocates” who advise the clergy members on their health spent more time encouraging participants to go on walks or put some of their $500 stipends from the program toward gym memberships. It’s Ms. Proeschold-Bell’s hope that if evaluations continue to record progress, the program could have widespread applications.

Eighty-five percent of people in the general population who lose a lot of weight gain it back within a year, she says. If participants in the clergy program keep their weight off, perhaps other grant makers and governments would be interested in spreading the program more broadly.

“If the numbers show we aren’t making a difference, that’s very important, so others don’t waste time and money on this,” says Ms. Proeschold-Bell. “But our preliminary results do look good. Perhaps this is something we could take nationally.”

That’s the holy grail: Finance a program, rigorously evaluate it, and then watch other donors help more people benefit from it.

If not done right, however, the process can do more harm than good, say nonprofit leaders.

Often foundations ask nonprofits to conduct expensive and time-consuming data collection without providing enough money to do so. They have unrealistic ideas about how long it takes to collect information, charity officials say. Sometimes they aren’t asking relevant questions.

Nonprofit leaders say they sometimes feel like they’re being put through their paces just for the sake of appearance rather than to make constructive changes. Critics of evaluation also worry that it can dissuade foundations from taking risks and from backing projects that are hard to measure.

Chrysalis’s experience helping to shape a new evaluation approach illustrates how complex the process can be—and the importance of clear communication between grant maker and grantee.

The nonprofit won $600,000 from the Roberts Enterprise Development Fund, or REDF, a venture-philanthropy group in California, to prepare people for the work force by finding them jobs in businesses with social goals. The money came as part of the White House Office of Social Innovation and Civic Participation’s grant-making program, which requires rigorous assessment.

At first, REDF considered testing the program through a randomized controlled trial. Such trials, which are widely used in medical research, have gained popularity among nonprofit evaluators because they allow researchers to compare people who receive services with those who don’t.
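At its core, the mechanics of such a trial are simple: a coin flip, usually delegated to software, decides who is offered services and who serves as the comparison group. The sketch below is a purely hypothetical Python illustration of that step, not REDF’s or Chrysalis’s actual system; the client identifiers and the 50/50 split are invented for the example.

```python
import random

def randomly_assign(client_ids, seed=2014):
    """Randomly split walk-in clients into treatment and control groups.

    Hypothetical illustration only: a real trial would log assignments,
    audit them, and clear the protocol with an ethics review board.
    """
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    groups = {}
    for client_id in client_ids:
        # Each person has an equal chance of being offered services.
        groups[client_id] = "treatment" if rng.random() < 0.5 else "control"
    return groups

# Invented identifiers, for illustration.
print(randomly_assign(["client-001", "client-002", "client-003"]))
```

The coin flip itself is trivial to program; the hard questions, as it turned out, were ethical and logistical rather than computational.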

But while the nonprofit’s leaders welcomed the chance to measure the group’s effectiveness, they had concerns about the approach.

The experiment would have relied on a computer program to decide at random which people who walked through Chrysalis’s doors were placed in jobs at social enterprises and which were not. Was that ethical? Chrysalis officials wondered.

Besides, some of the information that REDF sought was almost—but not quite—the same as data Chrysalis already collected.

Was it possible to avoid collecting basically the same information twice?

Charity officials were also worried about securing clients’ informed consent: Many of the Chrysalis clients are homeless or have just been released from prison, and asking them to sign a complicated form agreeing to participate in a randomized trial would require a lot of explanation.

REDF and Chrysalis ended up forgoing a randomized trial in favor of a design that allows the charity to maintain control over who receives its help. The evaluation will still be a lot of work: Mr. Loranger, the Chrysalis president, says he might need to hire another person to assist.

But he’s grateful that in Anna Martin, who served as REDF’s director of evaluation until February, he had someone who was sensitive to the charity’s concerns. “It would have been easier for REDF to say, ‘Sorry, but this is how it’s going to be.’”

New Ideas

As they wade deeper into evaluation, foundations are trying to find new and creative ways of understanding what is and isn’t working.

Ms. Martin, of REDF, says she likes to combine quantitative analysis with more qualitative ways of gathering feedback from people served by REDF grantees. She would visit job programs to interview homeless people receiving support.

David Colby, vice president for research and evaluation at the Robert Wood Johnson Foundation, which is perhaps the country’s most sophisticated grant maker when it comes to evaluation, is exploring a way to use “information markets,” such as surveys of people with expertise in a specific cause, to predict how successful a program might be before it’s adopted.
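To make the idea concrete, here is a minimal, purely hypothetical sketch of the survey-style version of an information market: pooling experts’ probability estimates that a program will succeed. The panel names and numbers are invented, and this is not the Robert Wood Johnson Foundation’s actual method.

```python
def pooled_forecast(estimates):
    """Average a panel's probability estimates that a program will succeed.

    `estimates` maps each expert to a probability between 0 and 1.
    A plain average is the simplest pooling rule; real information
    markets go further, rewarding experts who prove accurate over time.
    """
    if not estimates:
        raise ValueError("need at least one estimate")
    return sum(estimates.values()) / len(estimates)

# Invented panel and numbers, for illustration only.
panel = {"expert_a": 0.70, "expert_b": 0.55, "expert_c": 0.80}
print(f"Pooled estimate of success: {pooled_forecast(panel):.0%}")
```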

He’s also working to identify ways to analyze the costs and benefits of certain types of work. For example, a rise in home prices following a town’s adoption of clean-air laws might provide a jumping-off point to assess the return that a donor got from supporting advocacy work for that legislation.
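A deliberately simplified sketch of that kind of back-of-the-envelope calculation appears below. Every figure is invented, and the attribution share, the fraction of the home-price gain credited to the grant-funded advocacy, is exactly the number a real analysis would struggle to defend.

```python
def advocacy_return(grant_cost, homes_affected, avg_price_gain, attribution_share):
    """Rough benefit of an advocacy grant, per dollar granted.

    attribution_share is the fraction of the observed price gain
    credited to the advocacy itself (the hardest input to justify).
    """
    total_benefit = homes_affected * avg_price_gain * attribution_share
    return total_benefit / grant_cost

# All inputs invented, for illustration.
ratio = advocacy_return(grant_cost=500_000, homes_affected=10_000,
                        avg_price_gain=1_200, attribution_share=0.05)
print(f"Estimated benefit per grant dollar: ${ratio:.2f}")
```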

He says no single approach is best when it comes to evaluation, but it’s important for foundations to keep pushing for better ways to learn.

“We have a precious resource, a very scarce resource, and we want to use it in the most effective ways possible,” he says.
