What is the Right Way to Evaluate a Nonprofit?
Some controversy and emotion arose in the nonprofit blogosphere last week. The Direct Marketing Association’s Nonprofit Federation released a study concluding that watchdog groups use evaluation systems that are confusing and simplistic. The reaction from the watchdog groups was a bit emotional. The groups on which the study focused were the American Institute of Philanthropy, the Better Business Bureau’s Wise Giving Alliance, and Charity Navigator.
The issue is a familiar one: organizations manage to what is measured. When the measures are short-sighted, subpar actions can result. The dialogue is not helped by watchdog charges of bias in the study or by their refusal to consider the adverse consequences of rating systems.
Consider this. The popular US News ranking of colleges uses the percentage of applicants accepted as one measure of a good college: the lower the percentage, the more competitive the college. So what do colleges do? They expand marketing to increase the applicant pool for an unchanged number of slots, so that the percentage accepted declines. Colleges spend more money on processing applications. Students now apply to ten colleges instead of three because acceptance rates are down. Is anyone better off?
I devoted two chapters in my recent book, More Than Just Money, to the distortions that rating systems can introduce if they are too simplistic. The chapter “Disclosure: the Good and the Bad” cited research showing that individual donors don’t pay attention to the watchdogs anyway. And it noted that with each watchdog creating its own rating system, we have created a Tower of Babel for evaluating nonprofits, in which one rating can be good and another average for the same nonprofit. The chapter “The Sarbanes-Oxley Act” noted that diverse reporting systems amplify nonprofits’ administration costs for preparing the reports at the same time that many of the systems penalize a nonprofit for having higher administration costs.
For organizations like Charity Navigator to assert that there can be no negative consequences from their rating systems is sad. While accountability is important and necessary, it is harmful to create an adversarial relationship between donors and nonprofits. Trust between the two has eroded: donors have shifted from supporting nonprofits to supporting specific programs, and to holding grant competitions over which nonprofit gets to provide those programs.
The problem, as I see it, is that we want to oversimplify what constitutes a good nonprofit. And there seems to be a competition among some of the watchdog organizations to be the go-to site for nonprofit rankings. I would argue that the competition is not helpful. More rating systems mean more reports for nonprofits to file. For what?
We all should agree that there are bad nonprofits that should be weeded out, just as there are bad for-profits. The conflict comes in the middle ground. On a scale from A to F, we know that A is good to have and F is bad. But what about a B, C, or D? Are they bad nonprofits? Are they bad if they have 25% overhead costs but great if they have 5%? Are they good if they have a web site but bad if they don’t? And can financial ratios by themselves really separate good management from bad management? Are high reserves good or do they indicate hoarding? Is a large endowment an indication of good management or more a reflection of the financial means of the nonprofit’s constituency? Is a high percentage of spending devoted to programs an indication of efficiency or is it an indication of inadequate controls, limited oversight, and outdated technology?
The reality is that supporting a nonprofit is an investment in a mission. Financial advisors tell us to invest our money in the stocks of businesses we know and understand. That seems good advice for investing in nonprofits as well. Volunteer, attend events, read their publications, and review their IRS Form 990. That is the way to get to know a nonprofit.