A Review of Causal Inference Tools Using Education Policies

Abstract

Policymakers must be able to identify the most effective strategies for ensuring that a policy will reap substantial benefits while minimizing costs or adverse effects. To that end, they must use data-backed, empirical evidence rather than intuition to shape their policies. However, when analyzing data to assess the efficacy of a current policy or to estimate the potential impact of a new policy, policymakers may misinterpret correlations as causal relationships and in turn draw incorrect conclusions, leading to inaccurate assessments of policy effects. It is therefore critical for both policymakers and voters to have a clear understanding of causality and of the methods used to assess the impact of new interventions. This paper provides a readable, easily digestible explanation of what causality is and why causal relationships are important to demonstrate, and reviews key causal inference tools used to estimate causal relationships. Four common causal inference methodologies are discussed: randomized controlled trials, regression discontinuity, difference-in-differences, and instrumental variables. These methods are defined and described with specific focus on their application to real-world problems in education policy, such as the promise of free tuition for low-income students at the University of Michigan, virtual learning in the United States during the COVID-19 pandemic, merit-based scholarships at the University of Oregon, and smaller class sizes in elementary schools. In summary, this literature review serves as a helpful primer for a reader seeking to build a better understanding of key causal inference tools and to explore their application to a variety of real-world policies.

Introduction: How is Causality Defined and Why is Causality Important?

“With the average earnings of college graduates at a level that is twice as high as that of workers with only a high school diploma, higher education is now the clearest pathway into the middle class.” – President Obama1

Policymakers, such as President Obama, have praised education's financial returns with seemingly concrete evidence to support their claims. On the surface, the fact that college graduates earn "twice as much" may appear to be a compelling argument that costly investments in education, in terms of both time and money, are worthwhile. However, an econometrician sees a statement such as the one above and concludes that, while the hypothesized relationship is likely true, more thorough analysis is needed. This is because the relationship described depicts a correlation rather than a sound causal argument. A correlation represents an association between two variables, meaning they tend to move together. Causality, on the other hand, indicates that one variable is the result of another variable (i.e., there is a "cause and effect" relationship between the two variables). Although the data show that wages and education have a positive correlation (as schooling increases, wages increase), this correlation does not necessarily mean that education causes higher future wages, as wages could have increased due to factors other than additional education. Causal inference techniques provide tools to determine whether a correlation captures a true causal effect. This application of econometrics is especially useful in settings such as the one above, where government officials and policymakers may misinterpret correlations as causality.

Closely tied to the discussion of the returns to education is the issue of signaling: are college students really learning skills, or is higher education just serving as a signal for something else? That is, are students in college truly increasing the knowledge and skills that will help them in their jobs later, or does their attendance merely signal that they can take a test (without necessarily having a concrete grasp of the material) and earn a degree? Estimating a causal effect between higher education and wages would help settle this signaling discussion2. Even if education is purely signaling, obtaining higher education would still help with getting higher wages if employers interpret higher education as a valid signal. However, if many people were to obtain higher education, education might no longer be highly correlated with "ability." Once this correlation fails to hold, employers would stop paying higher wages to those with higher education, undermining education's role as a signal.

This paper provides a highly accessible, easily digestible introduction to key causal inference tools and explains their application to real-world problems using education policy. It is not a comprehensive or exhaustive review of all the statistical issues related to causal inference, which is an ongoing area of research; rather, it seeks to serve as a brief and practical primer on the most commonly used causal inference tools, with a focus on their application to education policy.

The next section, Section 2, explains why causality is not as easy to determine as it seems, as well as the key factors that limit the use of correlation as indicative of causality. Section 3 introduces the “gold standard” in causal inference methods: randomized controlled trials. In Section 4, natural experiment methods used to estimate causality are discussed, and, in Section 5, these methods are compared with each other in the context of various education policies. Section 6 highlights how causal inference can be used to inform public policy, and Section 7 concludes.

Challenge of Making Causal Statements

Through higher education, students are able to gain knowledge, receive a degree, and secure a high-paying job. As explained above, intuition may suggest that education increases wages, but this does not necessarily mean that a causal relationship exists between education and wages. Instead, this is a correlation, and we cannot be certain that it indicates causality. Two factors explain why: omitted variable bias and reverse causality.

Omitted variable bias (OVB) occurs when outside factors influence both variables involved in an experiment. For example, a student's ability and the resources provided by a student's family may cause OVB that prevents interpreting the correlation between education and wages as causal. Ability and resources could directly impact both educational and wage outcomes, meaning that the observed association between education and wages includes the effects of both education and these omitted variables. It is therefore impossible to infer from the correlation alone how much of the association is due to education directly versus a combination of education and omitted variables. Although intuitively unlikely, concerns about OVB mean that, statistically, the correlation between education and wages could be present even if it only reflects ability and resources (i.e., education has no causal effect on wages). The bias from ability creates problems when estimating the returns to higher education because students who pursue more years of schooling could have better innate ability than other individuals, making them more successful when it comes to careers and wages. This means that the higher wage of a more educated individual may not have been caused by education, but rather by ability.
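
To make the mechanics of OVB concrete, the following minimal simulation is a hypothetical sketch (all numbers invented for illustration, not drawn from any study cited here). Ability raises both schooling and wages, so a naive regression of wages on education overstates the true return.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: ability raises both schooling and wages.
ability = rng.normal(size=n)
education = 12 + 2 * ability + rng.normal(size=n)                # years of schooling
wages = 20 + 1.0 * education + 5 * ability + rng.normal(size=n)  # true return = 1.0

# Naive regression of wages on education (least-squares slope).
naive_slope = np.polyfit(education, wages, deg=1)[0]
print(f"true causal return: 1.0, naive estimate: {naive_slope:.2f}")
# The naive estimate is biased upward because ability is omitted.
```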

However, not all omitted variables cause bias. An omitted variable must be relevant to the causal relationship being studied, or else it will not cause bias. This means that the omitted variable must have a meaningful effect on both the treatment variable and the outcome variable. The treatment variable is a program, policy, medication, or any other intervention that is imposed on or affects a population or part of a population. It is the independent variable in an experiment, purposely manipulated by the econometrician to test its effect on the dependent, outcome variable. In the case of the returns to higher education, education would be the treatment variable while wages would be the outcome variable. Because ability and resources affect both a student's education and future wages, they are relevant omitted variables and could bias the study.

Another reason why correlation does not always indicate causality is reverse causality. This happens when we intend to estimate a causal relationship between the treatment and outcome variables, but there is actually a causal relationship acting in the opposite "direction" (i.e., the outcome causes the treatment). When researching the returns to higher education, one typically believes that education causes higher wages; however, it could be the case that individuals with higher income are able to pursue further education and more advanced degrees.

Randomized Controlled Trials (RCT): The Gold Standard

The randomized controlled trial methodology starts by randomly assigning the population into distinct treatment and control groups. Random assignment means that individuals are placed into different groups by the "flip of a coin" or another random method that does not predict or influence which group a given individual will be assigned to. In RCTs and other studies using econometric techniques, the treatment group is the group that receives or is affected by the treatment variable implemented by the researchers; the control group does not receive the treatment. Once the random assignment of both groups is finalized, the treatment is implemented, and the outcome variable of the treatment group is compared with that of the control group. If we were to use an RCT to estimate the returns to higher education, we would randomly assign the population to treatment and control groups, then implement the treatment variable, higher education, on the treatment group and not on the control group. Once the treatment is "implemented" (i.e., only those in the treatment group receive higher education), we can calculate the difference in the outcome variable, students' future wages, between the two groups to yield a treatment effect estimate.
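
As a sketch of this logic, the following hypothetical simulation (invented numbers, for illustration only) assigns treatment at random and recovers the true effect from a simple difference in group means, even though an unobserved ability term also drives the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical population: ability varies, but treatment is assigned by "coin flip".
ability = rng.normal(size=n)
treated = rng.integers(0, 2, size=n).astype(bool)  # random assignment

# Outcome: wages depend on ability plus a true treatment effect of 3.0.
wages = 20 + 5 * ability + 3.0 * treated + rng.normal(size=n)

# Difference in mean outcomes between treatment and control groups.
ate_estimate = wages[treated].mean() - wages[~treated].mean()
print(f"estimated ATE: {ate_estimate:.2f} (true effect: 3.0)")
# Randomization balances ability across groups, so the difference
# in means recovers the causal effect without omitted variable bias.
```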

When individuals are randomly assigned to education, ability and resources are no longer correlated with education, as these variables are equally distributed between the treatment and control groups by the process of randomization. Therefore, differences in outcomes between the groups cannot be attributed to omitted variables. To be clear, ability and resources can still influence educational and wage outcomes in an RCT, but not differentially between treatment and control groups. This allows the researcher to estimate a sound causal relationship, free of OVB. Similarly, RCTs remove concerns regarding reverse causality by shifting only one variable of interest, like higher education, and observing its effect on the other variable, like wages.

Natural Experiment Methods

Although RCTs provide the most credible causal estimates, they are sometimes unethical to conduct or unrealistic and difficult to implement. For example, it would be both unethical and difficult to randomly assign children to treatment and control groups if a researcher wanted to determine the returns to education using an RCT. Preventing a randomly assigned group of kids from going to school for the sake of an experiment would clearly be unethical, and many parents would likely find a way to send their children to school even if they were assigned to the control group, ruining the random assignment of the RCT. For these reasons, natural experiments, or real-world settings used when randomization is not possible, require a different set of methods in order to assess causality.

It is important to note, however, that there are many related statistical methods and issues in the field of causal inference, most of which are outside the scope of this paper. As stated above, this paper does not offer an exhaustive discussion of causal methods, but rather focuses on their assumptions and applications to real-world problems. The interested reader may consult the following resources for more information regarding the related statistical methods: Causal Inference: The Mixtape3; Econometric Analysis of Cross Section and Panel Data4; Handbook of Field Experiments5; Mostly Harmless Econometrics6.

The following sections discuss how to estimate a valid causal effect when an RCT is not possible or ethical, what assumptions are necessary for causal interpretation when using natural experiment methods, and how to interpret causal estimates from these methods. Although there are other causal inference tools for natural experiments, I discuss three of the most commonly used methods: regression discontinuity, difference-in-differences, and instrumental variables.

Regression Discontinuity (RD)

Instead of comparing randomly assigned treatment and control groups to estimate a causal impact, the method of regression discontinuity (RD) compares individuals around a set "cutoff" to estimate the causal effect of a policy. For example, a study in Kenya by Owen Ozier used a secondary school entrance exam to set up an RD design to test for causal effects of secondary school education on human capital, occupational choice, and fertility7. Taken in 8th grade by all Kenyan students, this test has a set cutoff score that students need to achieve in order to be accepted into secondary schools. RD compares individuals scoring just below the cutoff, who therefore do not receive the treatment of secondary school education (the control group), with individuals just above the cutoff, who do receive the treatment (the treatment group). If there is a discontinuity in the outcome variable (that is, a distinct jump in wages just above the cutoff score compared to just below it), one can infer a causal relationship, which can be estimated by measuring the difference in the outcome variable on each side of the cutoff.
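
A minimal sketch of this estimation logic, assuming simulated data rather than the Kenyan records used by Ozier: fit a line on each side of the cutoff within a bandwidth, and take the gap between the two fitted lines at the cutoff as the RD estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical running variable: exam score, with an admission cutoff.
score = rng.uniform(0, 100, size=n)
cutoff = 50
attended = score >= cutoff  # treatment assigned by the cutoff

# Outcome: wages rise smoothly with score, plus a true jump of 4.0 at the cutoff.
wages = 10 + 0.1 * score + 4.0 * attended + rng.normal(size=n)

# Local linear RD: fit a line on each side within a bandwidth around the cutoff.
h = 10  # bandwidth
left = (score >= cutoff - h) & (score < cutoff)
right = (score >= cutoff) & (score <= cutoff + h)
left_fit = np.polyfit(score[left], wages[left], deg=1)
right_fit = np.polyfit(score[right], wages[right], deg=1)

# The RD estimate is the gap between the two fitted lines at the cutoff.
rd_estimate = np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)
print(f"RD estimate of the local effect: {rd_estimate:.2f} (true jump: 4.0)")
```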

An important distinction for RD is that, in contrast to an RCT, which estimates an average treatment effect (ATE) across the entire population being studied, an RD estimation represents a local average treatment effect (LATE). Because the estimated causal effect compares "control" and "treatment" individuals just below and just above the cutoff, RD estimates a treatment effect based only on those "local" individuals right around the cutoff.

The estimate from an RD approach is causal under two main assumptions. First, the assignment of individuals just below and just above the cutoff must be "as-if-random." For example, students who barely passed or barely failed the secondary school entrance exam should be very similar in terms of academic ability, so whether they land on one side of the cutoff or the other is essentially random. RD, in essence, sets up a "mini RCT" around the cutoff, indicating causality as a result. If this "as-if-random" assumption is violated, then students on one side of the cutoff are systematically different from students on the other side, such that differences in outcomes between the two groups could reflect differences in the groups themselves rather than the causal effect of the policy. In a setting similar to Ozier's study in Kenya, for example, some students or parents may take additional measures (maybe even cheating) to ensure their scores are right above the cutoff. To mitigate this limitation, researchers can more closely monitor individuals around the cutoff. In the case of the Ozier study, this could mean running checks on students' past tests and looking through the data for suspicious trends in test scores over time. Another reason some individuals may be able to push themselves above a cutoff is higher family income and resources (such as the ability to obtain additional tutoring or other test preparation). In that case, comparing the future wages of these wealthier students with those of less-resourced students just below the cutoff would not reflect a causal relationship: higher wages above the cutoff would not simply be a result of secondary school attendance; rather, these wealthier students who cleared the cutoff would benefit from an intergenerational transmission of wealth or perhaps a more extensive family network, resulting in higher wages.

Figure 1: Relationship between father's education and student KCPE (Kenya Certificate of Primary Education) score in the Ozier study7.

However, as long as additional data describing the backgrounds of the subjects are available, the researcher can provide strong suggestive evidence that such concerns are unlikely to apply in their study setting. Specifically, we can check whether these background variables themselves have a discontinuity at the cutoff. If a variable such as family income or parental education (or any variable that was determined pre-treatment or could not change due to treatment) "jumps" at the cutoff, this suggests that the policy is not the only thing affecting the outcome variable, or may not be affecting it at all. Figure 1 shows that there is no such "jump" at the cutoff score when looking at father's education.
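
The same local fitting logic can be reused as a rough smoothness check. The sketch below uses hypothetical data, and `jump_at_cutoff` is an illustrative helper rather than a library function: it estimates the jump in a pre-treatment covariate at the cutoff, where values near zero support the as-if-random assumption.

```python
import numpy as np

def jump_at_cutoff(running, covariate, cutoff, h):
    """Estimate the discontinuity in a pre-treatment covariate at the cutoff
    using local linear fits on each side within bandwidth h."""
    left = (running >= cutoff - h) & (running < cutoff)
    right = (running >= cutoff) & (running <= cutoff + h)
    left_fit = np.polyfit(running[left], covariate[left], deg=1)
    right_fit = np.polyfit(running[right], covariate[right], deg=1)
    return np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)

# Hypothetical data: father's education is smooth by construction,
# so the estimated jump at the cutoff should be close to zero.
rng = np.random.default_rng(3)
score = rng.uniform(0, 100, size=20_000)
fathers_educ = 8 + 0.05 * score + rng.normal(size=score.size)
print(f"estimated jump: {jump_at_cutoff(score, fathers_educ, 50, 10):.3f}")
```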

The second key assumption in RD is that the research setting has no other policy that also responds to the threshold. If this were the case, RD would estimate the combined effect of the policies, and it would not be possible to isolate the causal effect of any one policy. The researcher therefore needs to look for and be aware of all the current policies being enforced in the setting and whether any of them could respond to the same cutoff as the policy being studied.

Difference-in-Differences (DiD)

Difference-in-differences (DiD) is another method used when an RCT is not possible or practical. It estimates the average treatment effect on the treated (ATT). This causal inference methodology begins with the assignment of cohorts, separating individuals into groups based on factors such as time, age, race, gender, and geographic location. A study by Esther Duflo used DiD to assess whether a new school construction program had an impact on years of schooling in Indonesia, that is, whether the development of new schools really meant that kids would go to school for longer8. In the Duflo study, students from regions with little to no school construction belonged to the low intensity cohort, while those who lived in areas where many schools were constructed were assigned to the high intensity cohort. The treatment variable is school construction, so the low intensity cohort serves as the control group (because few or no schools were built in its regions) and the high intensity cohort serves as the treatment group. Another distinction made within these cohorts was age: a "young" cohort of children aged 2-6, and an "old" cohort of students aged 12-17. Once cohorts are assigned, one can compare the differences in the outcome variable (in Duflo's case, years of education) across cohorts to produce a DiD estimate. Essentially, two differences in outcome variables are calculated, and the difference between these differences yields the causal estimate (hence the name difference-in-differences). For example, Duflo first took the difference between low intensity and high intensity individuals in the young cohort, and then took the same difference (between low and high intensity students) in the old cohort. Once these two differences are calculated, the difference between them yields an estimate of the causal effect of the school construction policy on years of education.
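
The arithmetic of this 2x2 comparison can be written out directly. The following sketch uses invented cohort means that loosely mirror the structure of Duflo's design but are not her data: the fixed schooling gap between high and low intensity regions is netted out, leaving only the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical mean years of schooling for each cohort (numbers are
# illustrative only; the true effect is built in as 0.30 extra years).
def mean_schooling(base, effect, n=5_000):
    return (base + effect + rng.normal(scale=2, size=n)).mean()

young_high = mean_schooling(base=9.0, effect=0.3)  # young cohort, many schools built
young_low = mean_schooling(base=8.5, effect=0.0)   # young cohort, few schools built
old_high = mean_schooling(base=8.8, effect=0.0)    # old cohort, too old to benefit
old_low = mean_schooling(base=8.3, effect=0.0)

# First differences: high vs. low intensity within each age cohort.
diff_young = young_high - young_low
diff_old = old_high - old_low

# Difference-in-differences: nets out the fixed regional gap (0.5 years here).
did_estimate = diff_young - diff_old
print(f"DiD estimate: {did_estimate:.2f} (true effect: 0.30)")
```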

The use of two differences rather than a single difference allows the econometrician to make OVB concerns less plausible. Individuals across a population experience different macroeconomic trends, and using only one difference would not account for the changes in outcome variables across the population caused by these trends. For example, in the Duflo study, some regions had access to more education and thus higher wages, so utilizing a second difference across regions in addition to age allows us to account for this potential source of bias and minimize it.

Similar to RD and other natural experiment methods, DiD requires a key assumption to hold in order to support a causal relationship. The parallel trends assumption states that the outcome variables of the treatment and control groups would have continued to evolve in a parallel fashion over time had the treatment not been implemented. In the Indonesia study, the parallel trends assumption means that years of education (the outcome variable) would have increased at the same rate for individuals in both the low intensity and high intensity cohorts had the new school construction program not been implemented. Because DiD takes the difference in outcome variables across cohorts, the parallel trends assumption can still hold if the cohorts start at different levels of the outcome variable; as long as this relative difference would have remained the same during the time of the study, the assumption holds. One way to check whether the parallel trends assumption is plausible in a DiD framework is to look for pre-trends in the research setting. Pre-trends are any divergences in the trends of the outcome variables between the treatment and control groups before the treatment was implemented. Checking for pre-trends asks one main question: before the treatment was implemented, were there any significant changes in the setting that could have impacted the population and thus explain the difference between the treatment and control outcome variables? Although checking for pre-trends does not prove the parallel trends assumption, it can help support or undermine this crucial assumption of the DiD method. If the treatment and control groups were on diverging trends before treatment, it is likely that they would not have been on parallel trends after treatment, violating the key assumption, because a constant difference between the two groups throughout the study period would then be unlikely. Conversely, if pre-treatment trends were similar, it is more likely, though not certain, that the parallel trends assumption holds.
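
A rough pre-trends check can be as simple as comparing pre-period slopes. The sketch below uses hypothetical group averages, and `pre_trend_gap` is an illustrative helper: a slope gap near zero is consistent with, but does not prove, parallel trends.

```python
import numpy as np

def pre_trend_gap(years, treat_means, control_means):
    """Difference in pre-treatment slopes of the outcome between treatment
    and control groups; a gap near zero is consistent with parallel trends."""
    treat_slope = np.polyfit(years, treat_means, deg=1)[0]
    control_slope = np.polyfit(years, control_means, deg=1)[0]
    return treat_slope - control_slope

# Hypothetical pre-period averages of the outcome for each group.
years = np.array([2015, 2016, 2017, 2018])
treat = np.array([8.1, 8.3, 8.5, 8.7])    # rises ~0.2 per year
control = np.array([7.5, 7.7, 7.9, 8.1])  # also ~0.2 per year: parallel pre-trends
print(f"pre-trend slope gap: {pre_trend_gap(years, treat, control):.3f}")
```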

Instrumental Variables (IV)

As another commonly used natural experiment method, instrumental variables (IV) leverages an exogenous variable that is correlated with the treatment but affects the outcome only through the treatment. In economics, exogenous variables are variables that shift treatment and only impact outcome variables indirectly through their effect on treatment. To use instrumental variables to estimate a causal effect, the econometrician must first identify an exogenous variable to serve as an instrument. A study by David Card used an individual's proximity to a college as an instrument to assess the returns to education9. The method of IV isolates the variation in the outcome due to the treatment and minimizes potential bias from omitted variables. As shown by Card, the likelihood of a student pursuing higher education is certainly associated with their distance from a school (the instrument is correlated with the treatment), but geographic location does not directly cause an individual to have higher or lower future earnings. Furthermore, the instrument of geographic proximity has no effect on the omitted variable of ability, which shows how IV isolates the variation due to treatment and rules out variation due to omitted variables. Card's use of geographical differences in distance to college as an instrument is therefore quite effective: the instrument is related to the outcome variable only through the treatment, so the resulting estimate represents a causal relationship. Figure 2 shows a conceptual map of the relationships between instrument, treatment, outcome, and omitted variables (observed and unobserved confounders).
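
With a single binary instrument, the IV estimate reduces to the Wald ratio: the instrument's effect on the outcome divided by its effect on the treatment. The following simulation is a hypothetical sketch (invented coefficients, not Card's data) showing that this ratio recovers the true return while a naive regression does not.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Hypothetical setup: ability is an omitted variable; proximity (the instrument)
# shifts education but affects wages only through education.
ability = rng.normal(size=n)
proximity = rng.integers(0, 2, size=n)  # 1 = lives near a college
education = 12 + 1.5 * proximity + 2 * ability + rng.normal(size=n)
wages = 20 + 1.0 * education + 5 * ability + rng.normal(size=n)  # true return = 1.0

# IV (Wald) estimator: ratio of the instrument's effect on the outcome
# to its effect on the treatment.
iv_estimate = np.cov(proximity, wages)[0, 1] / np.cov(proximity, education)[0, 1]
naive = np.polyfit(education, wages, deg=1)[0]
print(f"naive OLS: {naive:.2f}, IV estimate: {iv_estimate:.2f} (true: 1.0)")
```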

Figure 2: Conceptual map of the relationship between different variables in the natural experiment tool Instrumental Variables10.

However, before we can use an instrument to estimate causality, we must ensure that the instrument is strong: estimates based on a weak instrument are unreliable. To test whether an instrument is strong, econometricians use the F-test on the first-stage relationship between the instrument and the treatment. A common rule of thumb is that the F-statistic, the value obtained from the F-test, should be greater than 10, although this threshold is still an area of ongoing research and debate. A strong instrument is also relevant to the experiment, meaning that it has a significant correlation with the treatment variable.
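
For a single instrument, the first-stage F-statistic equals the squared t-statistic of the instrument in a regression of treatment on instrument. The sketch below computes it from scratch on hypothetical data; `first_stage_f` is an illustrative helper, not a library routine.

```python
import numpy as np

def first_stage_f(instrument, treatment):
    """F-statistic for the first-stage regression of treatment on a single
    instrument; with one instrument, F equals the squared t-statistic."""
    z = instrument - instrument.mean()
    x = treatment - treatment.mean()
    beta = (z @ x) / (z @ z)                                 # first-stage slope
    resid = x - beta * z
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (z @ z))   # slope standard error
    return (beta / se) ** 2

# Hypothetical example: the instrument clearly shifts the treatment,
# so the F-statistic lands far above the rule-of-thumb threshold of 10.
rng = np.random.default_rng(6)
proximity = rng.integers(0, 2, size=5_000).astype(float)
education = 12 + 1.5 * proximity + rng.normal(scale=2, size=5_000)
print(f"first-stage F: {first_stage_f(proximity, education):.1f} (rule of thumb: > 10)")
```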

Causal Inference for Evaluating Education Policies and Comparison of Methods

Policy | Causal Inference Technique | Explanation
Free Tuition Promise | RCT | Looking for an ATE: we want to see whether low-income students at large are more likely to apply to a school with the program in place, rather than a select group of students around a cutoff.
Merit-based Scholarships | RD | We want to estimate the LATE of merit-based scholarships on first-year GPA: use students with GPAs just around the cutoff to set up an RD design.
Remote Instruction | DiD | We can exploit the difference of time (because the pandemic occurred at a specific point in time) and a second difference (race, region, etc.). Also, students' academic progression over time would likely adhere to parallel trends.
Class Size | IV | In a setting where many different factors could influence learning and academic progress, we can use an exogenous instrument to isolate the causal effect of smaller class sizes on academic achievement.

Table 1: Examples of different causal inference tools applied to various education policies. 

RCT = Randomized Controlled Trial, DiD = Difference-in-Differences, RD = Regression Discontinuity, IV = Instrumental Variables.

ATE = Average Treatment Effect, LATE = Local Average Treatment Effect, GPA = Grade Point Average.

Table 1 depicts the application of the causal inference tools reviewed here to various education policies, with an explanation of the rationale behind the use of each method. The "Free Tuition Promise" example represents an intervention where low-income students are encouraged to apply to college with the guarantee that free tuition will be provided should they be accepted. Because the researchers could send flyers to a randomly selected group of low-income high school seniors across the United States, they were able to conduct an RCT without raising the main ethical concerns associated with RCTs. In the "Merit-based Scholarships" example, RD is used to estimate the effect of a merit-based scholarship on first-year academic performance as measured by GPA (grade point average). Similar to Ozier's study discussed in Section 4.1, a cutoff GPA can be used to estimate a local average treatment effect with an RD design. The "Remote Instruction" example describes the mandate of virtual learning during the COVID-19 pandemic and its causal effect on student achievement, estimated using DiD. Because the two differences, time and demographics, were readily applicable to the nature of the COVID-19 pandemic, DiD could be used to estimate the causal effect of virtual learning. Finally, the "Class Size" example shows how the instrumental variables method can be used to estimate the effect of smaller class sizes on student development. Similar to Card's study described in Section 4.3, the context of class sizes and academic achievement allows an exogenous instrument to be used.

When deciding on a method for a given policy, the first question to ask is this: is randomization possible? As discussed in previous sections, the randomized controlled trial is the "gold standard" of causal inference. Randomization enables the researcher to directly obtain a causal link between two variables; however, there are real and unavoidable limitations that often make an RCT impossible, unethical, or impractical. In reference to the education policy examples in Table 1, randomizing remote instruction may not be acceptable, but randomizing who is notified of a promise of free tuition (if applicable) when applying to a college may be considered acceptable. Hence, if the question regarding randomization cannot be answered "yes," one then turns to the natural experiment methods. Choosing among these methods depends not only on the type of policy being tested, but also on the availability of data in a particular setting. For example, because the DiD method involves two differences and sometimes needs data over a long period of time, a successful DiD framework requires greater data availability than RD or IV. RD only requires data on individuals just around the set cutoff, and IV requires data on the instrument, treatment, and outcome variables.

Additional Empirical Example of Causal Inference for Policy Evaluation

Banned by the Supreme Court on June 29, 2023, affirmative action has sparked fierce controversy. The original executive order was signed in 1965 by President Lyndon B. Johnson as a measure to combat discrimination and increase opportunities for under-represented groups, such as Black and Hispanic Americans, and as a compensatory gesture to "right" past wrongs that African Americans experienced through slavery. After a lawsuit filed by Students for Fair Admissions against Harvard University and the University of North Carolina, race-based affirmative action was officially banned in college admissions11. This meant that race could no longer be considered as a factor in the college admissions process.

However, affirmative action was already banned at the state level in some states. For example, the University of Michigan (UMich) and the University of California (UC) removed race-based admissions policies in 2003 and 1996, respectively, following legal decisions in Michigan and a referendum in California. Because affirmative action was recently banned nationwide by the Supreme Court, we can use a DiD framework to estimate the causal effect of banning affirmative action on a number of policy-relevant outcome variables, such as student diversity across a range of measures and educational outcomes, especially those where student diversity is considered an important input in the learning process. With one difference being time and the other being state, we can see whether the nationwide Supreme Court ban had any additional impact in states whose schools had already banned affirmative action. By first assigning individuals to Michigan and California cohorts, we can then utilize a second difference of time to set up a DiD estimation. The use of time as a cohort factor assigns individuals based on when they applied to college, pre- or post-treatment (where the treatment is the nationwide ban). Students who applied before the Supreme Court's affirmative action ban belong to the pre-treatment cohort, and students who applied after the ban belong to the post-treatment cohort.

Conclusion

Causal inference can be used to estimate the impact of a policy, allowing policymakers to determine whether it is worthwhile to allocate resources to a new policy. However, policymakers may misinterpret associative relationships as causal, which can lead to misinformed policy decisions. This paper reviewed the definition of causality and the importance of establishing a causal relationship when assessing the impact of an intervention, including the main "roadblocks" to establishing a causal effect: omitted variable bias and reverse causality, as well as the ethical and practical barriers to using RCTs. It also reviewed three commonly used natural experiment methods, RD, DiD, and IV, along with the critical assumptions behind each method and the interpretation of their results. The application of these causal inference methodologies was demonstrated with various education policies.

Although causal inference tools are an effective way for policymakers to estimate the effect of a policy, they have limitations. When a causal inference methodology is employed, it estimates the causal effect of a treatment in that setting, at that point in time. This is the problem of external validity: simply because a causal relationship holds in a certain experiment does not mean it will hold in a different setting and time, even if the exact same methodology is used. Another limitation lies in the implementation of a treatment. When the effect of a policy is evaluated with a given causal inference tool, we estimate the effect of the policy as a whole; however, a policy may have multiple components whose individual effects cannot be isolated and "teased out" with causal inference alone. Policymakers must therefore take external validity and implementation issues into account when assessing estimates from causal inference methodologies.

These limitations certainly affect the credibility of causal inference tools, but policymakers must weigh them against the alternative. Causal inference is a much more robust approach to assessing policy than relying on association and correlation alone.

Knowledge of causal inference methods allows one to question a causal statement and avoid making inappropriate statistical inferences from data showing mere correlation. Taking a critical approach to an established convention or a causal claim made by a policymaker may lead to deeper understanding and discovery of facts. Causal inference tools can also prove one's intuition and assumptions wrong, making them powerful and humbling tools.

Acknowledgements

I would like to acknowledge my mentor, Mr. Russell Morton, for providing guidance and feedback on this manuscript.

  1. B. Obama. Higher education. https://obamawhitehouse.archives.gov/issues/education/higher-education (2016).
  2. B. Caplan. What students know that experts don't: school is all about signaling, not skill-building. https://www.latimes.com/opinion/op-ed/la-oe-caplan-education-credentials-20180211-story.html (2018).
  3. S. Cunningham. Causal Inference: The Mixtape (2021).
  4. J. M. Wooldridge. Econometric Analysis of Cross Section and Panel Data (2010).
  5. A. Banerjee, E. Duflo (eds.). Handbook of Field Experiments (2017).
  6. J. Angrist, J.-S. Pischke. Mostly Harmless Econometrics (2008).
  7. O. Ozier. The impact of secondary schooling in Kenya: a regression discontinuity analysis. https://documents1.worldbank.org/curated/en/700151467997577920/pdf/WPS7384.pdf (2015).
  8. E. Duflo. Schooling and labor market consequences of school construction in Indonesia: evidence from an unusual policy experiment. American Economic Review. 91, 795-813 (2001).
  9. D. Card. Using geographic variation in college proximity to estimate the return to schooling. https://davidcard.berkeley.edu/papers/geo_var_schooling.pdf (1993).
  10. M. U. Awan, Y. Liu, M. Morucci, S. Roy, C. Rudin, A. Volfovsky. Interpretable almost-matching-exactly with instrumental variables. Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence (2019).
  11. Students for Fair Admissions, Inc. v. President and Fellows of Harvard College. https://www.supremecourt.gov/opinions/22pdf/20-1199_hgdj.pdf (2023).
