Model Based Policy Analysis (Subject) / Manski (Lesson)
incredible certitude
- What types of practices lead to incredible certitude?
1. Conventional certitude: a prediction that is generally accepted as true but is not necessarily true. Example: the CBO (Congressional Budget Office) reports the middle estimate of the distribution. Solution: provide interval forecasts (e.g. the 0.10 and 0.90 quantiles) or graphical fan charts (see the sketch after this card).
2. Dueling certitudes: contradictory predictions made with alternative assumptions. Example: the deterrent effect of the death penalty. Researchers studying the question have used much of the same data but different assumptions, such as (a) before-and-after estimates, (b) assuming that persons living in treated and untreated areas have the same propensity to commit murder in the absence of the death penalty, or (c) difference-in-differences, and these assumptions lead to contradictory results.
3. Conflating science and advocacy: specifying assumptions to generate a predetermined conclusion. Analysts choose assumptions in order to arrive at a favored conclusion, i.e. the approach is reversed: conclusion + data → assumptions.
4. Wishful extrapolation: using untenable assumptions to extrapolate. Researchers assume that a future or hypothetical situation would be identical to an observed one, e.g. extrapolating from a trial to obtain long-term results, although the "real" situation often differs from the trial conditions. Example: drug testing on human treatment groups (test persons must meet certain criteria, so it is not certain that the drug has the same effect in the real world). Time dimension: studies often cover only a limited period but their results are extrapolated, so long-term effects remain unknown. Data dimension: what is measured matters, e.g. number of heart attacks vs. cholesterol levels.
5. Illogical certitude: drawing an unfounded conclusion based on logical errors. Common error: treating non-rejection of the null hypothesis as proof that the hypothesis is right. Non-rejection ≠ proof!
6. Media overreach: premature or exaggerated public reporting of policy analysis. Journalists influence the public's perception. When writing about science, journalists should be alarmed when there is no sign of uncertainty, keep in mind that peer review helps but is not perfect, and seek out several opinions (not only the author's).
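Illustrative only: a minimal Python sketch (with invented numbers, not from the card) of the interval-forecast fix named under conventional certitude: report the 0.10 and 0.90 quantiles of a simulated outcome distribution instead of only the middle estimate.

```python
import random
import statistics

random.seed(0)

# Hypothetical stand-in for 1,000 model runs of a budget outcome (in billions)
# under many plausible parameter draws.
simulated_outcomes = [random.gauss(500, 120) for _ in range(1000)]

# "Conventional certitude": publish only the middle of the distribution.
point_estimate = statistics.median(simulated_outcomes)

# Alternative suggested above: publish an interval forecast.
deciles = statistics.quantiles(simulated_outcomes, n=10)  # cut points at 10%, 20%, ..., 90%
q10, q90 = deciles[0], deciles[-1]

print(f"Point estimate (middle of distribution): {point_estimate:.0f}")
print(f"Interval forecast (0.10-0.90 quantiles): [{q10:.0f}, {q90:.0f}]")
```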
- What is incredible certitude and why is it a problem?
Problem: policy analyses are reported with incredible certitude: exact predictions of policy outcomes are made and communicated while uncertainty is neglected. For example, federal statistical agencies provide point estimates of GDP even though sampling errors and non-sampling errors cause uncertainty.
Policy analysis combines assumptions with (limited) data to reach conclusions. The problem: stronger assumptions may yield stronger conclusions, but credibility decreases with the strength of the assumptions.
Law of Decreasing Credibility: "The credibility of inference decreases with the strength of the assumptions maintained." (Compare CGE model assumptions: ceteris paribus, optimization, constant returns to scale.)
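A minimal sketch of this assumptions-versus-credibility trade-off, in the spirit of Manski's partial-identification work (the survey numbers are invented and the example is not from the card): with nonresponse, making no assumption about nonrespondents only bounds the quantity of interest, while the strong "nonrespondents are like respondents" assumption delivers a sharper but less credible point estimate.

```python
# Hypothetical survey: 1,000 people asked whether they are employed.
n_total = 1000
n_respond = 800         # people who answered
n_yes = 480             # respondents who said "employed"

p_respond = n_respond / n_total
p_yes_given_respond = n_yes / n_respond

# Strong assumption (missing at random): nonrespondents behave like respondents
# -> a single point estimate, strong conclusion but less credible.
point = p_yes_given_respond

# No assumption about nonrespondents -> worst-case bounds:
# they could all be "no" (lower bound) or all be "yes" (upper bound).
lower = p_yes_given_respond * p_respond
upper = p_yes_given_respond * p_respond + (1 - p_respond)

print(f"Point estimate under strong assumption: {point:.3f}")
print(f"Bounds under no assumption:             [{lower:.3f}, {upper:.3f}]")
```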
- What are possible reasons why federal entities report with incredible certitude? Mostly political:
- Entities are mutually dependent on each other: when one entity criticizes another, that may hamper their future communication and information flow.
- Pressure to impress the public: whoever appears uncertain is not taken seriously.
- Conflict avoidance: admitting you have made a "mistake" makes you look guilty.
- What are consequences of ignoring uncertainty?
1. Problems with recognizing new research.
2. It prevents learning about uncertainty and developing strategies for coping with it.
- What kinds of uncertainties does Manski distinguish?
1. Transitory uncertainty: arises because data collection takes time, so agencies may use incomplete information for a quick estimate; as further data are collected, the preliminary estimates (and the assumptions behind them) may turn out to be wrong and are revised. Users of data mainly prefer point estimates to intervals, which is why agencies do not publish error measures or interval estimates. Example: GDP estimation.
2. Permanent uncertainty: arises from incompleteness or inadequacy of data collection and is not resolved over time. Problems: finite sample size (sampling errors), nonresponse and misreporting (non-sampling errors). Sampling errors are usually considered, but non-sampling errors are at most mentioned, not integrated quantitatively, so a lack of knowledge about how close imputed values are to actual values is common. Example: unemployment statistics, where respondents may not answer truthfully, may skip or forget questions, or may lack the knowledge to answer exactly (a sketch of a sampling-error interval follows this card).
3. Conceptual uncertainty: arises from incomplete understanding of the information that official statistics provide, or from a lack of clarity in the concepts themselves; it mainly concerns the interpretation of statistics. Example: seasonal adjustment, e.g. adjusting unemployment figures for seasonal fluctuation. Problem: it is not clear how seasonal adjustment should be performed, and "there presently exists no clearly appropriate way to measure the uncertainty associated with seasonal adjustment." One option is to skip seasonal adjustment: for year-to-year rather than month-to-month comparisons, it is arguably more reasonable to use unadjusted data.
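Minimal sketch (made-up numbers, not from the card) of the sampling-error part of permanent uncertainty: a standard-error interval for an estimated unemployment rate. The interval reflects only the finite sample size; non-sampling errors such as nonresponse and misreporting are not captured at all, which is exactly the gap the card describes.

```python
import math

n = 60_000              # hypothetical labour-force survey sample size
p_hat = 0.052           # estimated unemployment rate from the sample

se = math.sqrt(p_hat * (1 - p_hat) / n)               # standard error of a proportion
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se   # approx. 95% sampling interval

print(f"Point estimate: {p_hat:.3%}")
print(f"95% sampling interval: [{lower:.3%}, {upper:.3%}]  (ignores non-sampling error)")
```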