Policy Design Analysis and Evaluation
A.Y. 2024/2025
Learning objectives
This course trains researchers in political analysis and public policy. It thus contributes to COM/DAPS&CO's learning outcomes in policy analysis and evaluation, which will serve students in a variety of contexts, including firms, public agencies, private and public interest associations, and research institutes.
Expected learning outcomes
This course equips students to shape and test causal claims about policy and institutional design. By the end of the course, students will have acquired the key theoretical knowledge and skills to perform small- and large-scale analyses that establish the nature of the connection between relevant institutional features, behavior, and policy performance.
Lesson period: First trimester
Assessment methods: Exam
Assessment result: grade recorded out of thirty
Single course
This course can be attended as a single course.
Course syllabus and organization
Single session
Lesson period
First trimester
The course will rely on Ariel and other online tools, regardless of the teaching rules in force at the time. Prospective students are warmly encouraged to request access to the course channel at their earliest convenience.
Course syllabus
NB. The course's content may undergo some refinements. The consolidated version will be available on the Ariel website.
Class 00 - Map
The first class will be for introducing ourselves and recapping (a) the course's topics and goals; (b) available resources; (c) the learning path and assessment strategies.
Backing:
A Schneider, H Ingram, Systematically pinching ideas, 10.1017/S0143814X00006851
A Schneider, H Ingram, Behavioral assumptions of policy tools, 10.2307/2131904
___Module A
We understand public policies as solutions X to collective problems Y. As such, they build on two intertwined hypotheses.
The first maintains that problem Y arises as some actors make a particular decision in situations Φ. The second considers that intervention X on the situation can trigger some mechanism M that changes actors' behaviors to solve or alleviate the problem.
The module offers analytic tools to develop a whole policy theory, then focuses on using information as an instrument of intervention.
At the end of this module, you will be able to:
- Render effectiveness as a Y through proper gauges,
- Conceive some micro-foundations M of policy effectiveness,
- Conceptualize policy interventions X as tools geared toward changing the gauge's values by triggering M,
- Consider the setting Φ of policy interventions and how they can twist the tools' rationale.
___Module B
In Module A, we learned essential frameworks and practices to define XΦMY and render the core of a policy theory.
But how can we know that such theory 'holds'?
A long-honored answer maintains that a theory holds when its shape allows for valid reasoning about the world.
The module introduces the criteria for moving from conversational theories to structured arguments that are valid and amenable to empirical probation.
At the end of this module, you will be able to:
- identify Reasons and Conclusions in written texts,
- pinpoint those elements that remain implicit,
- organize Reasons and Conclusions into inferential structures,
- recognize basic logical fallacies in those structures, if any,
- consider which evidence could prove the structure sound.
___Module C
Qualitative Comparative Analysis (QCA) allows for establishing the tenability of complex causal hypotheses as Boolean models.
The module introduces the special assumptions and strategy of the explanatory use of QCA with the aid of actual examples; a minimal R sketch follows the list of learning outcomes below.
At the end of the module, you'll be able to:
- Conceive of deterministic causal models,
- Establish classification rules for the units of a population,
- Identify underspecified models,
- Get rid of irrelevant components in over-specified explanatory models.
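As a taste of the lab work, here is a minimal sketch of an explanatory QCA run in R. It assumes the QCA package (as in Dusa 2019) is installed; the crisp-set data and the condition names A, B, C and outcome Y are invented for illustration only.

```r
# Minimal explanatory QCA sketch (illustrative data, assuming the 'QCA' package).
library(QCA)

# Hypothetical crisp-set data: three conditions (A, B, C) and one outcome (Y),
# one row per case in the population under study.
toy <- data.frame(
  A = c(1, 1, 0, 0, 1, 0),
  B = c(1, 0, 1, 0, 1, 1),
  C = c(0, 1, 1, 0, 1, 0),
  Y = c(1, 1, 1, 0, 1, 0)
)

# The truth table classifies cases into configurations of conditions and
# flags inconsistent rows (rows below a consistency threshold can be
# excluded via the incl.cut argument).
tt <- truthTable(toy, outcome = "Y", conditions = c("A", "B", "C"))
tt

# Boolean minimization strips redundant components from the explanatory
# model, returning the conservative solution.
sol <- minimize(tt, details = TRUE)
sol
```

The workflow mirrors the learning outcomes above: classification rules build the truth table, and minimization removes irrelevant components from an over-specified model.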
___Module D
Either explicitly or implicitly, most empirical research aims to interpret the co-occurrence of interesting phenomena causally. Addressing causality, however, is notoriously difficult without the luxury of experimental data. This module introduces methods that support convincing causal claims even when experimental data are unavailable. We will look at four such designs:
1. Instrumental Variables
2. Matching
3. Difference-in-Differences estimation
4. Regression Discontinuity Design
For each method, an in-depth theoretical explanation will be followed by a short review of the mathematical intuition behind the model and a hands-on lab session (a minimal sketch of one such design follows the list of learning outcomes below).
By the end of the module, students will be in a position to:
- Critically read and evaluate statements about causal relationships based on some data analysis.
- Apply a variety of design-based easy-to-implement methods that will help them draw causal inferences in their own research.
- More broadly, think about policy proposals and their outputs or outcomes under the logic of causal inference.
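As an example of what the lab sessions involve, below is a minimal difference-in-differences sketch in base R on simulated data. The variable names (treated, post, y) and the simulated effect size are assumptions for illustration, not part of the course materials.

```r
# Minimal difference-in-differences sketch on simulated two-period data.
set.seed(1)

n  <- 200
df <- data.frame(
  id      = rep(1:(n / 2), each = 2),
  treated = rep(rbinom(n / 2, 1, 0.5), each = 2),  # treatment-group indicator
  post    = rep(c(0, 1), times = n / 2)            # pre/post period indicator
)

# Simulate an outcome with a true treatment effect of 2 in the post period.
df$y <- 1 + 0.5 * df$treated + 1 * df$post +
  2 * df$treated * df$post + rnorm(n)

# Under the parallel-trends assumption, the coefficient on treated:post
# estimates the average treatment effect on the treated.
did <- lm(y ~ treated * post, data = df)
summary(did)
```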
Prerequisites for admission
The course assumes no special previous knowledge. Nevertheless, students will find Module D easier after taking Multivariate Analysis for Social Scientists. They are also warmly encouraged to bring their laptops to class and, from Module C onward, to have up-to-date versions of R and RStudio installed.
Teaching methods
The course develops students' knowledge and competencies through lectures, data lab sessions, instant polls, debates, group work, and presentations.
Teaching Resources
___Module A
- Sabatier, P. A. 2007. Theories of the Policy Process. Boulder, CO: Westview Press. Ch. 2, 3, 7.
- Schneider, A., & Ingram, H. 1988. Systematically pinching ideas: A comparative approach to policy design. Journal of Public Policy, 61-80.
- Schneider, A., & Ingram, H. 1990. Behavioral assumptions of policy tools. The Journal of Politics, 52(2), 510-529.
- Ostrom, E. 2011. Reflections on "Some unsettled problems of irrigation." American Economic Review, 101(1), 49-63.
- Aukes, E., Lulofs, K., & Bressers, H. 2018. Framing mechanisms: The interpretive policy entrepreneur's toolbox. Critical Policy Studies, 12(4), 406-427.
- Kelley, J. G., & Simmons, B. A. 2015. Politics by number: Indicators as social pressure in international relations. American Journal of Political Science, 59(1), 55-70.
- Schueth, S. 2015. Winning the rankings game: The Republic of Georgia, USAID, and the Doing Business project. In A. Cooley & J. Snyder (Eds.), Ranking the World: Grading States as a Tool of Global Governance, 151-177.
- Bini, L., & Bellucci, M. 2020. Accounting for Sustainability. In Integrated Sustainability Reporting, 9-51. Cham: Springer.
- Zicari, A., & Aldama, L. P. 2017. Value-Added Statements as a Communication Tool for Stakeholders: The Case of Industrias Peñoles in Mexico. In R. Freeman, J. Kujala, & S. Sachs (Eds.), Stakeholder Engagement: Clinical Research Cases (Issues in Business Ethics, vol. 46), 193-214. Cham: Springer.
___Module B
- Phelan, P. and Reynolds, P. 1996. Argument and Evidence: Critical Analysis for the Social Sciences. London: Routledge.
___Module C
- Rihoux, B., & Ragin, C. C. 2008. Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Sage Publications. Ch. 2, 3, 5.
- Dusa, A. 2019. QCA with R. Cham: Springer.
- Damonte, A. 2021. Modeling configurational explanations. Italian Political Science Review/Rivista Italiana di Scienza Politica, 1-16.
___Module D
- Angrist, J., & Pischke, J.-S. 2009. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton: Princeton University Press.
- Dunning, T. 2012. Natural Experiments in the Social Sciences: A Design-Based Approach. Cambridge: Cambridge University Press.
- Card, D. 1993. Using geographic variation in college proximity to estimate the return to schooling. NBER Working Paper No. 4483.
- Card, D., & Krueger, A. B. 1993. Minimum wages and employment: A case study of the fast food industry in New Jersey and Pennsylvania. NBER Working Paper No. 4509.
- Costalli, S., & Negri, F. 2021. Looking for twins: how to build better counterfactuals with matching. Italian Political Science Review/Rivista Italiana di Scienza Politica, 1-16.
- Lee, D.S. 2008. Randomized experiments from non-random selection in US House elections. Journal of Econometrics, 142(2), 675-697.
Please note that additions or substitutions to the reading list may be made as needed.
Assessment methods and Criteria
The course equips students with the knowledge and techniques to establish policy designs' logical, causal, and empirical tenability as causal arguments.
Students will be evaluated on their:
1. active participation.
Lectures can include instant polling, lab sessions, discussions, and presentations.
Participation in these activities accounts for up to 2 additional points per module (max 8 points).
2. capacity to identify a suitable gauge.
In the last part of module A, students are given a policy problem and guided to identify problem mechanisms, solution mechanisms, and policy tools that can activate them.
Effective work earns up to 6 points.
3. capacity to formalize a policy argument and identify a suitable research design for testing its tenability.
At the end of module B, students are asked to translate a policy theory from natural language to logical models and identify the most suitable strategy for testing it.
Effective translation accounts for up to 6 points.
4. competence in critically replicating a configurational analysis.
At the beginning of module D, a replicable published QCA is assigned to each student. Under guidance, students produce a script that answers the following questions (a minimal calibration sketch follows this list):
i) the model: is it configurational?
ii) case and raw variable selection: does it afford proper probation?
iii) calibration: is it replicable?
iv) directional expectations: are they empirically supported?
v) truth table: is there any inconsistent primitive?
vi) solutions: are the deserving ones discussed?
Proper completion accounts for up to 6 points.
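For step (iii), the following is a minimal sketch of a replicable fuzzy-set calibration in R, again assuming the QCA package; the raw scores and the anchor values 2, 5, and 8 are purely illustrative.

```r
# Minimal calibration sketch (illustrative values, assuming the 'QCA' package).
library(QCA)

raw <- c(1.2, 3.4, 4.8, 6.1, 7.9, 9.3)  # hypothetical raw scores for one condition

# Direct calibration into fuzzy-set membership: full exclusion at 2,
# crossover at 5, full inclusion at 8. Reporting these anchors explicitly
# is what makes the calibration replicable.
fuzzy <- calibrate(raw, type = "fuzzy", thresholds = c(2, 5, 8))
round(fuzzy, 2)
```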
5. competence in critically replicating a quasi-experimental analysis.
To receive their final evaluation on Module D, students are asked to critically replicate the quasi-experimental analysis of a published peer-reviewed paper.
Students will provide (a minimal replication-code sketch follows this list):
1) a summary of the study;
2) a short description of the study in terms of the potential outcomes framework;
3) the actual study replication (tables and figures);
4) commented replication code;
5) a short discussion of the strengths and weaknesses of the replicated study.
Completion of the replication analysis accounts for up to 6 points.
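As an illustration of deliverable 4), here is a minimal sketch of commented replication-style code for a sharp regression discontinuity design in base R; the simulated data, the variable names, and the 0.5 bandwidth are assumptions for illustration only.

```r
# Minimal sharp regression discontinuity sketch on simulated data.
set.seed(2)

n <- 1000
x <- runif(n, -1, 1)        # running variable, cutoff at 0
d <- as.numeric(x >= 0)     # treatment assigned deterministically at the cutoff
y <- 0.5 + 1.5 * d + 0.8 * x + rnorm(n, sd = 0.5)  # true jump at the cutoff = 1.5
df <- data.frame(y, x, d)

# Local linear fit on both sides of the cutoff within a narrow bandwidth;
# the coefficient on d estimates the effect at the threshold.
bw  <- 0.5
rdd <- lm(y ~ d * x, data = subset(df, abs(x) < bw))
summary(rdd)
```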
The individual final colloquium discloses the points earned during the course and offers the opportunity to review past exercises, discuss shortcomings, and add insights.
The colloquium determines the final mark, with a possible adjustment of up to 1 point.
NB. Assessment does not differ between attending and non-attending students. Non-attending students are encouraged to reach out to the instructors for further clarification.
INF/01 - INFORMATICS - University credits: 6
SPS/04 - POLITICAL SCIENCE - University credits: 6
Lessons: 80 hours
Professors:
Damonte Alessia, De Angelis Andrea
Professor(s)
Reception:
Friday, 13:30-14:30 (students); 14:30-16:30 (thesis students and PhD candidates)
internal building, 2nd floor, room 12 | VirtualOffice channel in Teams