    Philosophy Speaker Series

    Friday, March 10, 2023

    Time: 3:00 - 4:30 p.m.

    Zoom: contact kdyck@wlu.ca for the link

    Speaker: Dr. Bert Baumgaertner, University of Idaho

    The preference for belief, issue polarization, and echo chambers

Some common explanations of issue polarization and echo chambers rely on social or cognitive mechanisms of exclusion. Accordingly, suggested interventions like "be more open-minded" target these mechanisms: avoid epistemic bubbles and don't discount contrary information. Contrary to such explanations, we show how a much weaker mechanism, the preference for belief, can produce issue polarization in epistemic communities with few or no mechanisms of exclusion. We present a network model (with an empirically validated structure) that demonstrates how a dynamic interaction between the preference for belief and common structures of epistemic communities can turn very small unequal distributions of initial beliefs into full-blown polarization. This points to a different class of explanations, one that emphasizes the importance of the initial spread of information. We also show how our model complements extant explanations by including a version of biased assimilation and motivated reasoning, both cognitive mechanisms of exclusion. We find that mechanisms of exclusion can exacerbate issue polarization but may not be its ultimate root. Hence, the interventions recommended by the extant literature are expected to be of limited effect, and the problem of issue polarization to be even more intractable.
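The dynamic the abstract describes is straightforward to sketch in code. The following Python toy is a minimal illustration only, not the authors' model: a random graph stands in for their empirically validated network structure, and every parameter is invented for the example. The "preference for belief" appears as an asymmetry: undecided agents adopt a neighbour's belief far more readily than believers give one up, and no agent excludes or discounts contrary information.

```python
import random

# Minimal agent-based sketch of the "preference for belief" dynamic.
# Hypothetical throughout: the talk's model uses an empirically
# validated network structure; here a random graph stands in for it.

N = 500                        # number of agents
K = 6                          # average degree of the random network
P_ADOPT = 0.5                  # chance an undecided agent adopts a heard belief
P_SWITCH = 0.05                # chance a believer switches to a contrary belief
STEPS = 20000                  # number of pairwise interactions
SEED_FOR, SEED_AGAINST = 6, 4  # small, slightly unequal initial seeding

random.seed(1)

# Build an undirected random graph as adjacency lists.
neighbors = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < K / N:
            neighbors[i].append(j)
            neighbors[j].append(i)

# Beliefs: +1 (for), -1 (against), 0 (undecided).
belief = [0] * N
seeded = random.sample(range(N), SEED_FOR + SEED_AGAINST)
for idx in seeded[:SEED_FOR]:
    belief[idx] = +1
for idx in seeded[SEED_FOR:]:
    belief[idx] = -1

for _ in range(STEPS):
    i = random.randrange(N)
    if not neighbors[i]:
        continue
    j = random.choice(neighbors[i])
    if belief[j] == 0:
        continue                       # the neighbour has nothing to transmit
    if belief[i] == 0:
        if random.random() < P_ADOPT:  # undecided agents adopt readily...
            belief[i] = belief[j]
    elif belief[i] != belief[j]:
        if random.random() < P_SWITCH:  # ...but believers rarely let go:
            belief[i] = belief[j]       # the preference for belief

n_for = sum(1 for b in belief if b == +1)
n_against = sum(1 for b in belief if b == -1)
print(f"for: {n_for}  against: {n_against}  undecided: {N - n_for - n_against}")
```

Run repeatedly, a sketch like this typically shows the slightly larger initial camp capturing a disproportionate share of the community, mirroring the abstract's claim that small unequal initial seedings, rather than exclusion, can drive polarization.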


    Friday, March 24, 2023

    Time: 3:00 - 4:30 p.m.

    Location: P327

    Speaker: Dr. Matthew Silk, University of Waterloo

    Ethical Gaps Relating to Inductive Risk and Opacity in Artificial Intelligence

Today algorithms do everything from deciding who gets a loan, who gets a job interview, what internet content you see, and whether you receive a longer criminal sentence, to determining whether you have cancer. However, deep learning relies on hidden layers of artificial neurons that can render an algorithm a black box, making it difficult to explain the decisions the algorithm makes. Is there something ethically wrong with accepting the conclusion of an artificial intelligence if you don’t know how it reached that conclusion? Cases of inductive risk ask us to consider whether the evidence for a conclusion is sufficient, given the chance that we could be wrong and that being wrong can carry ethically significant consequences. If we are unaware of how an AI reaches a conclusion, how can we manage inductive risks? Using arguments from the ethics of belief, I will consider what our ethical responsibilities are in the face of AI opacity.
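To make the black-box worry concrete, here is a small Python sketch (entirely illustrative, not from the talk; random weights stand in for trained parameters). The network issues a confident-looking verdict, yet the only thing available for inspection is a grid of numbers, which is the sense in which the decision resists explanation.

```python
import math
import random

# Toy illustration of deep-network opacity; nothing here is from the talk.
# Random weights stand in for trained parameters, which a person would
# find no more interpretable.

random.seed(0)
N_IN, N_HIDDEN = 4, 8

w_hidden = [[random.uniform(-1, 1) for _ in range(N_IN)]
            for _ in range(N_HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Score an input by passing it through one hidden layer."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

case = [0.7, 0.2, 0.9, 0.4]   # hypothetical feature vector for one decision
score = predict(case)
print(f"verdict: {'positive' if score > 0.5 else 'negative'} (score={score:.3f})")

# The only "why" on offer is the raw parameters, e.g. the first hidden unit:
print(w_hidden[0])
# Nothing in these numbers maps onto reasons a person could weigh, which is
# the opacity that the inductive-risk argument starts from.
```

In a real deep model the hidden layers number in the dozens and the parameters in the millions, so the gap between the verdict and any human-readable reason is correspondingly larger.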
