
Causality in Research Design

Empirical research in law often deals with causal questions. Causal questions arise, for instance, in research investigating the effects of policies (law as an explanatory variable) or in research examining how law or policies come about (law as an outcome). Yet estimating causal effects in empirical legal studies requires very careful choices in research design.

Why is causal inference so complicated? Why does the relationship between, say, the size of the police force on the one hand and the crime rate on the other tell us nothing about the causality between the two? It has to do with two major problems in observational studies (studies in which the independent variable is not manipulated by the researcher): reverse causality and omitted variable bias (confounding).

Reverse causality means that by looking at the relationship between two variables (say, X and Y), one cannot discern the direction of the causal channel: is it X → Y, Y → X, or perhaps both? Coming back to the example above, if a researcher observes a positive relationship between the size of the police force and the crime rate, should she conclude that police causes crime? Of course not: it may equally be the case that more crime leads to more police deployment.

Omitted variable bias, on the other hand, stems from the fact that the observed relationship between X and Y might be induced by a third variable Z that causes both X and Y. Suppose you found a negative relationship between the size of the police force and the crime rate. This relationship may be driven by a third variable such as affluence, approximated by the average income of residents in a given area. Wealthy people are less likely to commit (street) crimes, but more affluent areas can also afford larger police forces. If that holds true and one does not control for affluence, the relationship between the size of the police force and the crime rate is confounded.
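
A minimal simulation can make this concrete. The sketch below (Python/NumPy) uses purely hypothetical variable names and effect sizes: affluence lowers crime and raises police numbers, while police is given no causal effect on crime at all, yet the two still end up correlated.

```python
# Hypothetical illustration of confounding: police has zero causal effect
# on crime, but both depend on affluence, so they are correlated anyway.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

affluence = rng.normal(size=n)                 # the confounder Z
police = 0.8 * affluence + rng.normal(size=n)  # affluent areas afford more police
crime = -1.0 * affluence + rng.normal(size=n)  # note: police does not appear here

# A sizeable negative correlation, despite zero causal effect:
print(f"corr(police, crime) = {np.corrcoef(police, crime)[0, 1]:.2f}")  # ~ -0.44
```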

Designing causal research

So how should you design your research if the goal is to make a causal statement? The gold standard is to perform an experiment, in which the explanatory variable of interest is assigned randomly to the subjects under investigation. The power of experiments, and of randomization in particular, lies in the fact that it renders confounders irrelevant: because treatment is assigned independently of all subject characteristics, treated and untreated groups are on average comparable in every respect except the treatment itself.
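
A toy sketch, again with invented numbers, illustrates why. If police levels are assigned at random, they are independent of affluence and of every other confounder, so a simple regression recovers the true effect, which is set to zero in this hypothetical setup.

```python
# Hypothetical illustration: random assignment breaks the link between
# the treatment (police) and the confounder (affluence).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

affluence = rng.normal(size=n)                 # would-be confounder
police = rng.normal(size=n)                    # randomly assigned: independent of affluence
crime = -1.0 * affluence + rng.normal(size=n)  # true effect of police: zero

# OLS slope of crime on police (intercept plus slope):
X = np.column_stack([np.ones(n), police])
slope = np.linalg.lstsq(X, crime, rcond=None)[0][1]
print(f"estimated effect under randomization: {slope:.2f}")  # ~ 0.00
```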

Incentivized experiments typically involve monetary compensation. They come in two main forms.

  • Field experiments tend to have higher external validity, as they are performed in more naturalistic, generalizable settings. The downside is that the environment is often less controllable, which may impair the internal validity of the experiment.
  • Lab experiments score well in terms of internal validity, as researchers typically have a high degree of control over the experimental treatments. Yet the type of respondents participating in lab experiments (usually students) precludes generalization to other groups in the population.

Survey experiments, often population-based, are well suited to research investigating how audiences perceive law or, for instance, how audiences would hypothetically react to the introduction or breach of legal rules.

Performing experiments, however, is not always feasible. Consider again our example of researching the effect of police on crime: randomly assigning numbers of police officers to municipalities would be complicated for many reasons. In such cases, one can exploit experiments that have occurred by chance.

These so-called natural experiments are setups in which random assignment is not carried out by a researcher but is triggered by natural forces (such as a natural disaster) or by someone's arbitrary decision (such as the drawing of a discretionary border). Studies that employ natural experiment designs provide some of the most robust results in research on the effects of police on crime. Natural experiments are predominantly used by economists and political scientists, but are receiving increasing attention from empirical legal researchers.
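
One common way to analyse such a setting is a difference-in-differences comparison. The sketch below is purely illustrative, with invented numbers: an arbitrary administrative decision suddenly increases police presence in some municipalities ("treated") but not in others ("control"), and the treated municipalities' change in crime is compared with the control group's change, which captures the common trend.

```python
# Hypothetical difference-in-differences around a natural experiment.
# All crime figures are invented for illustration.

# Mean crime rates before and after the externally imposed increase in police:
treated_before, treated_after = 50.0, 44.0
control_before, control_after = 48.0, 47.0

# The treated group's change, net of the trend observed in the control group:
did = (treated_after - treated_before) - (control_after - control_before)
print(f"estimated effect of extra police on crime: {did:.1f}")  # -5.0
```

The credibility of this design rests on the assumption that, absent the intervention, treated and control municipalities would have followed parallel trends in crime.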

In cases where neither lab, nor field, nor natural experiments are applicable, there is still a plethora of other options.

If researchers have access to a large number of observations, they can perform multiple regression analyses with the goal of controlling for as many confounders as possible. If, however, the number of observations is small, comparative case study methods, such as the most similar systems design, are more suitable. Although regression analysis and comparative case studies differ in how the analysis is performed, the general idea of controlling for confounders is the same. Note that these designs support weaker causal inferences, since it is never possible to control for all confounders: some cannot be identified a priori or measured at all.
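
Returning to the hypothetical police/crime/affluence simulation above, the sketch below shows the logic of controlling for a confounder in a multiple regression: once affluence enters as a control, the spurious estimate disappears (all effect sizes invented; the true effect of police is again set to zero).

```python
# Hypothetical illustration of controlling for a confounder by regression.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

affluence = rng.normal(size=n)
police = 0.8 * affluence + rng.normal(size=n)
crime = -1.0 * affluence + rng.normal(size=n)   # true effect of police: zero

# Bivariate regression: the confounder is omitted, so the slope is biased.
naive = np.linalg.lstsq(
    np.column_stack([np.ones(n), police]), crime, rcond=None)[0][1]

# Multiple regression: affluence is included as a control variable.
controlled = np.linalg.lstsq(
    np.column_stack([np.ones(n), police, affluence]), crime, rcond=None)[0][1]

print(f"without control: {naive:+.2f}")       # ~ -0.49, spurious
print(f"with control:    {controlled:+.2f}")  # ~ +0.00, the true effect
```

This is exactly the step that observational designs can only approximate: affluence is measurable here by construction, whereas real confounders may be unknown or unmeasured.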

Resources

  • Angrist, J. D., & Pischke, J.-S. (2014). Mastering Metrics: The Path from Cause to Effect. Princeton University Press.

    An introduction to the design-based approach to causality in empirical research. Important for researchers trying to understand and use natural experiments. Written in a very accessible way.

  • Angrist, J. D., & Pischke, J.-S. (2009). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press.

    This book is very similar to the one above (“Mastering Metrics”) but presents the material in a more technical way.

  • Blatter, J., & Haverland, M. (2012). Designing Case Studies: Explanatory Approaches in Small-N Research. Palgrave Macmillan.

    Accessible and thorough overview of various case study methodologies. It skilfully demonstrates how to combine diverse research approaches.

  • Engel, C. (2013). Legal Experiments - Mission Impossible? Erasmus Law Lectures. Eleven International Publishing.

    This reference gives a good introduction to the logic of experiments as applied to legal questions. It presents a number of examples and raises issues with experimental designs.

  • Van den Bos, K., & Hulst, L. (2016). On Experiments in Empirical Legal Research. Law and Method, March 2016. DOI: 10.5553/REM/.000014

    Like Engel (2013), this article introduces the logic of experiments as applied to legal questions, with examples and a discussion of issues in experimental design.

  • Mutz, D. C. (2011). Population-Based Survey Experiments. Princeton University Press.

    A good starting point for researchers and students planning a survey experiment, as it gives an overview of the various types of survey experiments and guides the reader through the stages of experimental design and analysis. Diana Mutz is one of the pioneers of applying survey experiments in political science, particularly in research on foreign trade. For legal scholarship applying survey experiments, see, for instance, the work of Jeffrey Rachlinski.

  • Schneider, C. Q., & Rohlfing, I. (2013). Combining QCA and Process Tracing in Set-Theoretic Multi-Method Research. Sociological Methods & Research, 42(4), 559-597.

    This paper gives a comprehensive overview of qualitative comparative analysis (QCA) and demonstrates how to combine it with process tracing methodology.

  • Schneider, C. Q., & Wagemann, C. (2010). Standards of Good Practice in Qualitative Comparative Analysis (QCA) and Fuzzy-Sets. Comparative Sociology, 9, 397-418.

    This paper sets out standards of good practice for conducting and reporting qualitative comparative analysis (QCA) and fuzzy-set analyses.