Speaker: Dr. Bert Baumgaertner, University of Idaho
Some common explanations of issue polarization and echo chambers rely on social or cognitive mechanisms of exclusion. Accordingly, suggested interventions like "be more open-minded" target these mechanisms: avoid epistemic bubbles and don't discount contrary information. Contrary to such explanations, we show how a much weaker mechanism - the preference for belief - can produce issue polarization in epistemic communities with few or no mechanisms of exclusion. We present a network model (with an empirically validated structure) that demonstrates how a dynamic interaction between the preference for belief and common structures of epistemic communities can turn very small unequal distributions of initial beliefs into full-blown polarization. This points to a different class of explanations, one that emphasizes the importance of the initial spread of information. We also show how our model complements extant explanations by including a version of biased assimilation and motivated reasoning - cognitive mechanisms of exclusion. We find that mechanisms of exclusion can exacerbate issue polarization, but may not be its ultimate root. Hence, the interventions recommended by the extant literature are expected to be of limited effect, and the problem of issue polarization even more intractable.
Location: P327
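
A minimal, hypothetical sketch of the kind of dynamic described in the abstract above (not Dr. Baumgaertner's actual model): a Watts-Strogatz small-world graph stands in for the empirically validated network structure, and the "preference for belief" is reduced to undecided agents settling on whichever position is most common among their decided neighbors, with no discounting of contrary reports. The network size, seeding fractions, and update rule are illustrative assumptions.

import random
from collections import Counter

import networkx as nx

random.seed(0)

# Hypothetical parameters: 1,000 agents on a small-world graph used here as a
# stand-in for an empirically validated community structure.
N = 1000
G = nx.watts_strogatz_graph(N, k=8, p=0.05, seed=0)

# Initial beliefs: almost everyone undecided, with a small and slightly unequal
# seeding of the two positions (2% hold "A", 1% hold "B").
beliefs = {i: None for i in G.nodes}
nodes = list(G.nodes)
seed_a = set(random.sample(nodes, 20))
seed_b = set(random.sample([i for i in nodes if i not in seed_a], 10))
for i in seed_a:
    beliefs[i] = "A"
for i in seed_b:
    beliefs[i] = "B"

def step(beliefs):
    """Undecided agents settle on the belief most common among their decided
    neighbors. No exclusion: every neighbor's report counts equally."""
    new = dict(beliefs)
    for i in G.nodes:
        if beliefs[i] is not None:
            continue  # preference for belief: once settled, agents keep their view
        reports = [beliefs[j] for j in G.neighbors(i) if beliefs[j] is not None]
        if reports:
            counts = Counter(reports)
            best = max(counts.values())
            new[i] = random.choice([b for b, c in counts.items() if c == best])
    return new

for _ in range(60):
    beliefs = step(beliefs)

# The small 2:1 asymmetry in the seed can grow into a much larger, spatially
# clustered split between the two camps, even though no agent ever excludes
# or discounts contrary reports.
print(Counter(beliefs.values()))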
Speaker: Dr. Matthew Silk, University of Waterloo
Today, algorithms do everything from deciding who gets a loan or a job interview and what internet content you see to determining whether you receive a longer criminal sentence or even whether you have cancer. However, deep learning relies on hidden layers of artificial neurons that can render the algorithm a black box, making it difficult to explain the decisions it makes. Is there something ethically wrong with accepting the conclusions of an artificial intelligence if you don’t know how it reached them? Cases of inductive risk ask us to consider whether the evidence for a conclusion is sufficient, given both the chance that we are wrong and the ethically significant consequences of being so. If we are unaware of how an AI reaches a conclusion, how can we manage inductive risks? Drawing on arguments from the ethics of belief, I will consider what our ethical responsibilities are in the face of AI opacity.
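
A minimal sketch of the decision-theoretic core of inductive risk as it bears on the cancer example in the abstract above (the cost figures and model score are invented for illustration, and this is not a reconstruction of Dr. Silk's argument): how strong an opaque model's output must be before it counts as sufficient evidence to act depends on the relative costs of the two ways of being wrong.

def acceptance_threshold(cost_false_positive: float, cost_false_negative: float) -> float:
    """Probability above which acting on a positive prediction minimizes expected cost."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Hypothetical costs: missing a cancer (false negative) is judged 50x worse than
# ordering a needless follow-up test (false positive), so much weaker evidence
# suffices for acting on the prediction.
threshold = acceptance_threshold(cost_false_positive=1.0, cost_false_negative=50.0)
print(f"Act on the prediction when P(cancer) > {threshold:.3f}")  # ~0.020

model_score = 0.12  # hypothetical output of an opaque model: P(cancer) for one patient
print("Sufficient evidence to act?", model_score > threshold)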