Ongoing evaluation of the effectiveness of public policy

One of the real challenges in arguing for various social policies is getting reliable data about the effectiveness of government programs. This is particularly the case with welfare spending, where a given program’s effectiveness is often very difficult to measure. But measurement is an essential task, as Jennifer Marshall writes:

The measure of our compassion for the poor should not be how much we spend on federal antipoverty programs. Compassion must be effective.

We ought to define success by how many escape dependence on welfare to pursue their full potential as human beings. To measure our commitment to the poor by the number of dollars spent on antipoverty programs is to diminish human dignity.

Researchers in the UK have written a report arguing for an approach to public policy that integrates “randomized controlled trials” (RCTs) into attempts to measure the impacts, intended and otherwise, of government programs. In “Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials,” (HT: Hacker News) the authors argue that RCTs are used widely in the private sector, but at least in the UK they “are not routinely used to test the effectiveness of public policy interventions.”

They go on to explain why RCTs are particularly helpful in determining the effectiveness of a particular program:

What makes RCTs different from other types of evaluation is the introduction of a randomly assigned control group, which enables you to compare the effectiveness of a new intervention against what would have happened if you had changed nothing.

The introduction of a control group eliminates a whole host of biases that normally complicate the evaluation process – for example, if you introduce a new “back to work” scheme, how will you know whether those receiving the extra support might not have found a job anyway?
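To make that point concrete, here is a minimal simulation sketch of the quoted “back to work” example. The scenario and the employment rates are hypothetical, invented for illustration rather than taken from the report; the point is only to show how random assignment lets the control group answer the “would they have found a job anyway?” question:

```python
import random

random.seed(42)

# Hypothetical numbers (not from the report): assume 30% of job seekers
# find work within six months on their own, and the scheme raises that
# to 40% for participants.
BASELINE_RATE = 0.30  # assumed rate with no intervention
TREATED_RATE = 0.40   # assumed rate with the extra support

participants = list(range(2000))
random.shuffle(participants)          # random assignment is the key step
treatment = set(participants[:1000])  # half receive the scheme
# the remaining half form the control group

def found_job(person):
    rate = TREATED_RATE if person in treatment else BASELINE_RATE
    return random.random() < rate

outcomes = {p: found_job(p) for p in participants}

treated_rate = sum(outcomes[p] for p in treatment) / 1000
control_rate = sum(outcomes[p] for p in participants
                   if p not in treatment) / 1000

# Because assignment was random, the difference between the two groups
# estimates the scheme's actual effect, net of those who would have
# found work anyway.
print(f"treatment group employment rate: {treated_rate:.1%}")
print(f"control group employment rate:   {control_rate:.1%}")
print(f"estimated effect of the scheme:  {treated_rate - control_rate:+.1%}")
```

Without the randomly assigned control group, an evaluator could only compare participants to their past selves or to non-participants who may differ in motivation, and the scheme would get credit for jobs people would have found regardless.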

Check out the whole report, which provides details on the nine suggested steps for implementation (PDF).

The Office of Management and Budget’s (OMB) 2012 draft report to Congress on costs and benefits of federal regulations states that “agencies should carefully consider how best to obtain good data about the likely effects of regulation; experimentation, including randomized controlled trials, can complement and inform prospective analysis, and perhaps reduce the need for retrospective analysis.”

This last point is somewhat dubious, for as the title of the UK report indicates, the process of evaluating the effectiveness of public policy interventions is ongoing: Test, learn, adapt, repeat! The ninth step, in fact, is to “return to Step 1 to continually improve your understanding of what works.” But in any case, it may well be that RCTs become one tool increasingly relied upon to provide helpful insight into what works and what doesn’t.

Jordan J. Ballor

Jordan J. Ballor (Dr. theol., University of Zurich; Ph.D., Calvin Theological Seminary) is director of research at the Center for Religion, Culture & Democracy, an initiative of the First Liberty Institute. He has previously held research positions at the Acton Institute and Vrije Universiteit Amsterdam, and has authored multiple books, including a forthcoming introduction to the public theology of Abraham Kuyper. Working with Lexham Press, he served as a general editor for the 12-volume Abraham Kuyper Collected Works in Public Theology series, and his research can be found in publications including Journal of Markets & Morality, Journal of Religion, Scottish Journal of Theology, Reformation & Renaissance Review, Journal of the History of Economic Thought, Faith & Economics, and Calvin Theological Journal. He is also associate director of the Junius Institute for Digital Reformation Research at Calvin Theological Seminary and the Henry Institute for the Study of Christianity & Politics at Calvin University.