Simon Roth

GSDS

Simon Roth was a PhD student at the Graduate School of Decision Sciences (GSDS) with Susumu Shikano as his dissertation supervisor. His research interests included political extremism, bibliometrics, and machine learning. In addition, he worked as a junior data scientist at Paraboost and Lovai. His most recent projects built on methods from natural language processing, machine learning, and web development.

Publication list

  • Roth, Simon. 2022. "Biased Machines in the Realm of Politics." http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1pnwchwzwnqav4.

    Biased Machines in the Realm of Politics


    This dissertation addresses one of the most serious risks associated with automated decision-making: bias. Bias is not a new phenomenon, and decisions have always been biased, but automated decision-making multiplies the risks in many ways. The main challenges are: How can we detect biases? Who should be held accountable for biased predictions? And how can biases be mitigated or corrected? The three studies within this dissertation help answer these questions by emphasizing the importance of monitoring our own machine learning (ML) pipelines, auditing third-party prediction systems, and exposing the potential abuse of predictive algorithms when given sensitive data.



    The first paper (section 2) addresses the question of how to direct ML users to high-performing, robust, and fair models. ML systems have been shown to harm human lives via discrimination, distortion, exploitation, or misjudgment. Although bias is often associated with malicious behavior, this is not always the case. Inductive biases, such as knowledge about parameter ranges or priors, can, for example, help to stabilize a model's optimization process. Furthermore, decomposing prediction error into statistical bias and variance allows for model selection with minimum expected future risk. Since "all models are wrong, but some are useful", we should analyze as many biases in ML as feasible before putting faith in our predictions.
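
    The bias-variance decomposition can be made concrete with a small experiment. The following is a minimal sketch, not taken from the dissertation: it fits a regressor on bootstrap resamples of a synthetic dataset and estimates the squared bias and variance of its predictions on a fixed test grid; the data, signal, and model choices are invented for illustration.

        # Minimal sketch (illustrative, not the dissertation's code): estimate the
        # bias-variance decomposition of a regressor via bootstrap resampling.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)

        def f(x):                                  # hypothetical true signal
            return np.sin(3 * x)

        X_pool = rng.uniform(-1, 1, size=(2000, 1))            # synthetic training pool
        y_pool = f(X_pool.ravel()) + rng.normal(0, 0.3, 2000)  # noisy observations
        X_test = np.linspace(-1, 1, 200).reshape(-1, 1)        # fixed evaluation grid

        preds = []
        for _ in range(200):                       # bootstrap replicates of the training set
            idx = rng.integers(0, len(X_pool), size=200)
            model = DecisionTreeRegressor(max_depth=4).fit(X_pool[idx], y_pool[idx])
            preds.append(model.predict(X_test))

        preds = np.array(preds)
        bias_sq = (preds.mean(axis=0) - f(X_test.ravel())) ** 2   # squared bias per test point
        variance = preds.var(axis=0)                              # variance per test point
        print(f"mean bias^2: {bias_sq.mean():.3f}, mean variance: {variance.mean():.3f}")

    Varying max_depth in this sketch illustrates the trade-off: shallower trees raise the bias term, deeper trees raise the variance term, and model selection aims to minimize their sum.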



    The second paper (section 3) addresses the question of how to audit recommender bias on social media. The goal of this experiment is to quantify the causes of algorithmic filter bubbles by analyzing amplification bias in Twitter's recommender system. By simulating human behavior with bots, we show that 'filter bubbles' exist and that they add an additional layer of bias on top of 'echo chambers'. More precisely, the algorithm responded far more strongly to bots that actively engage with content than to bots that merely follow human accounts. This demonstrates that the Twitter algorithm relies heavily on human interactions to adapt to its users' preferences. This has serious consequences, since users may be unaware of the large personalization bias that occurs when they like or share content.
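
    The core of such an audit can be summarized as a comparison between two groups of automated accounts. The following minimal sketch is not the dissertation's code; it assumes a hypothetical log of how many recommended items each bot was shown per day and compares engaging bots with passive ones on simulated counts.

        # Minimal sketch (hypothetical, simulated data): compare recommender exposure
        # between bots that actively engage (like/share) and bots that only follow.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Simulated daily counts of recommended items per bot (placeholder values)
        passive_bots = rng.poisson(lam=4.0, size=50)    # follow only
        engaging_bots = rng.poisson(lam=9.0, size=50)   # follow + like/share

        # Amplification factor: relative exposure of engaging vs. passive accounts
        amplification = engaging_bots.mean() / passive_bots.mean()
        t_stat, p_val = stats.ttest_ind(engaging_bots, passive_bots, equal_var=False)

        print(f"amplification factor: {amplification:.2f}")
        print(f"Welch t-test: t={t_stat:.2f}, p={p_val:.4f}")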



    The third paper (section 4) addresses the question of whether online communication is predictive of offline political behavior. Using a unique dataset of thousands of ordinary citizens' Twitter statuses, linked to public US voter registration files, we can predict a person's party affiliation and turnout likelihood with fair accuracy. Our results show that social media communication is sufficiently biased to provide information about the attitudes and political behavior of an average person in the real world. We demonstrate how, in addition to us, political, commercial, or bad-faith actors may acquire this sensitive data to build prediction models, for example to influence a customer's retail journey or, perhaps worse, to discourage them from voting at scale.
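
    As a rough illustration of how such predictions can be built from text alone, the sketch below trains a toy classifier on invented example posts; it is not the paper's pipeline, and the texts, labels, and features are placeholders.

        # Minimal sketch (toy data, not the paper's pipeline): predict a coarse party
        # label from short posts using TF-IDF features and logistic regression.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = [
            "cut taxes and shrink government spending",
            "secure the border and support our troops",
            "expand healthcare access for working families",
            "act on climate change and protect voting rights",
        ]
        labels = ["R", "R", "D", "D"]   # hypothetical party labels

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        clf.fit(texts, labels)

        print(clf.predict(["invest in renewable energy jobs"]))        # predicted label
        print(clf.predict_proba(["lower taxes for small business"]))   # class probabilities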



    Biases can limit the potential of ML for business and society by cultivating distrust and delivering distorted or discriminating results. However, if our societies can (1) implement effective data privacy regulations, (2) require internal debiasing steps and encourage external independent auditing, (3) educate the broader public about biases and ways to report them, and (4) invest in training interdisciplinary computational scientists, we may be better prepared for the negative consequences of the next industrial revolution.
