AI language models are increasingly integral to decision-making, communication, and public discourse, yet their inherent biases pose significant risks to impartiality and fairness. This talk presents the findings of the paper "Biased AI Can Influence Political Decision-Making," which demonstrates how partisan bias in AI can sway political decisions, even overriding individuals’ pre-existing partisanship. Through interactive experiments, the study highlights the subtle yet profound ways biased AI can alter opinions and amplify societal polarization. Beyond identifying these risks, the research also underscores the importance of mitigation strategies that reduce the influence of biased AI. One promising finding is that prior knowledge about AI and its potential biases can significantly decrease susceptibility, suggesting a critical role for public education in empowering users to critically assess AI-generated content. These strategies, combined with advances in AI alignment and bias detection, offer a path toward language models that serve as tools for fairness and inclusivity rather than instruments of division.
@incollection{qualcomm, author = {Fisher, Jillian and Hallinan, Skyler}, title = {Small but Mighty: Empowering Small Language Models to Outperform Their Larger Counterparts}, publisher = {Qualcomm}, year = {2024}, month = nov, keywords = {invited}, file = {Qualcomm_Presentation.pdf} }
As AI becomes more pervasive, small language models (SLMs) are increasingly vital for applications where computational resources, proprietary constraints, and efficiency are key considerations. This talk explores three pioneering approaches that enable SLMs to achieve performance on par with, or even superior to, large language models (LLMs), without compromising computational efficiency or accessibility. First, we present JAMDEC, a framework that pushes the boundaries of SLM capabilities solely through decoding-time enhancements, requiring no training. Second, we present STEER, which leverages knowledge distillation from LLMs, empowering SLMs with distilled insights for heightened performance. Finally, we present StyleRemix, which integrates fine-grained, controllable distillation with low-rank adapters, marrying adaptability with computational efficiency. Together, these methods reveal new pathways to powerful, resource-conscious AI suited for diverse real-world challenges.
@incollection{ai_transfer_lab, author = {Fisher, Jillian}, title = {Influence Diagnostics Under Self-Concordance and Application to Natural Language Models}, publisher = {AI Transfer Lab}, url = {https://transferlab.ai/seminar/2023/influence-diagnostics-under-self-concordance/}, year = {2023}, month = sep, keywords = {invited}, file = {influence_theory_nlp.pdf} }
Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and AI applications. Influence diagnostics are powerful statistical tools for identifying influential datapoints or subsets of datapoints. We establish finite-sample statistical bounds, as well as computational complexity bounds, for influence functions and approximate maximum influence perturbations using efficient inverse-Hessian-vector product implementations. We illustrate our results with generalized linear models and large attention-based models on synthetic and real data.
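For a generalized linear model, the influence function of a datapoint is (up to sign) the inverse Hessian of the fitted objective applied to that point's loss gradient. The minimal sketch below, for L2-regularized logistic regression, is illustrative only; the setup, variable names, and regularization strength `lam` are assumptions, not details from the talk.

```python
import numpy as np

def influence_scores(X, y, theta, lam=1e-3):
    """Per-datapoint influence-function norms for L2-regularized logistic
    regression: IF(z_i) = -H^{-1} grad_i, with H the Hessian of the
    regularized mean loss at theta. Illustrative sketch, not the paper's code."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-X @ theta))      # predicted probabilities
    W = p * (1.0 - p)                          # per-point Hessian weights
    H = (X.T * W) @ X / n + lam * np.eye(d)    # Hessian of mean loss + ridge
    grads = (p - y)[:, None] * X               # per-point loss gradients
    ifs = -np.linalg.solve(H, grads.T).T       # inverse-Hessian-vector products
    return np.linalg.norm(ifs, axis=1)         # one influence norm per point
```

In large models the explicit Hessian solve above is replaced by iterative or sketched inverse-Hessian-vector products, which is where the computational complexity bounds in the talk come in.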
The permanence of online content, combined with increasingly powerful authorship identification techniques, calls for stronger computational methods to protect the identity and privacy of online authors when needed, e.g., blind reviews for scientific papers, anonymous online reviews, or anonymous interactions in mental health forums. In this paper, we propose an unsupervised inference-time approach to authorship obfuscation that addresses its unique challenges: the lack of supervision data for diverse authorship and domains, and the need for a sufficient level of revision beyond simple paraphrasing to obfuscate authorship, all while preserving the original content and fluency. We introduce JAMDEC, a user-controlled, inference-time algorithm for authorship obfuscation that can in principle be applied to any text and authorship. Our approach builds on small language models such as GPT2-XL in order to avoid disclosing the original content to proprietary LLM APIs, while also reducing the performance gap between small and large language models via algorithmic enhancement. The key idea behind our approach is to boost the creative power of smaller language models through constrained decoding, while also allowing for user-specified controls and flexibility. Experimental results demonstrate that our approach based on GPT2-XL outperforms previous state-of-the-art methods based on comparably small models, while performing competitively against GPT3.5 175B, a proprietary model that is two orders of magnitude larger.
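Constrained decoding here means steering generation at inference time by reshaping the next-token distribution rather than retraining the model. The toy step below illustrates only that general idea, with hypothetical hard-ban and soft-boost constraints on token ids; it is not the actual JAMDEC algorithm, which combines constrained generation with candidate filtering.

```python
import numpy as np

def constrained_next_token(logits, banned_ids=(), boosted_ids=(), boost=2.0):
    """One greedy decoding step under simple lexical constraints: tokens in
    banned_ids can never be emitted, tokens in boosted_ids get a soft score
    bonus. A minimal illustration of decoding-time control, not JAMDEC itself."""
    scores = np.asarray(logits, dtype=float).copy()
    for t in banned_ids:
        scores[t] = -np.inf                 # hard constraint: never emit
    for t in boosted_ids:
        scores[t] += boost                  # soft constraint: prefer
    return int(np.argmax(scores))
```

Because the constraints act only on the output distribution, the same mechanism applies to any off-the-shelf small model, which is what makes inference-time obfuscation possible without supervision data.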
@incollection{JSM_2023, author = {Fisher, Jillian}, title = {Statistical and Computational Guarantees for Influence Diagnostics}, publisher = {Joint Statistical Meetings}, year = {2023}, month = aug, address = {Toronto, Canada} }
Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and AI applications. Influence diagnostics are powerful statistical tools for identifying influential datapoints or subsets of datapoints. We establish finite-sample statistical bounds, as well as computational complexity bounds, for influence functions and approximate maximum influence perturbations using efficient inverse-Hessian-vector product implementations. We illustrate our results with generalized linear models and large attention-based models on synthetic and real data.
@incollection{ICSDS_2022, author = {Fisher, Jillian}, title = {Influence Diagnostics Under Self-Concordance}, publisher = {International Conference on Statistics and Data Science (ICSDS)}, year = {2022}, month = dec, address = {Florence, Italy} }
Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and AI applications. Influence diagnostics are powerful statistical tools for identifying influential datapoints or subsets of datapoints. We establish finite-sample statistical bounds, as well as computational complexity bounds, for influence functions and approximate maximum influence perturbations using efficient inverse-Hessian-vector product implementations. We illustrate our results with generalized linear models and large attention-based models on synthetic and real data.
@incollection{JSM_2022, author = {Fisher, Jillian}, title = {Model Editing in Language Models Using Influence Functions}, publisher = {Joint Statistical Meetings}, year = {2022}, month = aug, address = {Washington, DC}, file = {JSM_2022.pdf} }
Despite the successes of large pretrained language models, they depend on large training corpora that contain social biases and toxicity, which adversely affects model behavior. We propose to use influence functions, a classical concept from robust statistics, to design a cost-effective post-hoc model editing technique that removes unwanted behaviors from trained language models. Influence functions quantify how strongly an estimator depends on any one data point in a given sample. We use them to approximate the parameters of transformer language models fit on a subset of the training data without re-training the model, using only the gradient and Hessian-vector product oracles of the model. To implement this method, we use an efficient numerical technique based on matrix sketching to calculate the influence of a datapoint. We evaluate this technique on a combination of English and Spanish sentences from WikiText-2 and the Large Spanish Corpus, analyzing the removal of a subset (the Spanish sentences) from the training data. Our preliminary results show a promising increase in forgetting of the Spanish sentences while retaining what was learned from the English sentences.
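Concretely, removing a training subset S can be approximated in one influence step, theta_{-S} ≈ theta + (1/n) H^{-1} sum_{i in S} grad_i, using only gradients and an inverse-Hessian-vector product oracle. The sketch below shows that update; the function and variable names are illustrative, and in practice the inverse-Hessian-vector product would use matrix sketching or iterative solvers rather than an exact inverse.

```python
import numpy as np

def remove_subset_update(theta, inv_hvp, grads, subset, n):
    """One-step influence approximation of the parameters that would result
    from removing the datapoints in `subset`, without retraining:
        theta_{-S} ~= theta + (1/n) * H^{-1} * sum_{i in S} grad_i
    `inv_hvp(v)` must return H^{-1} v (e.g. via conjugate gradient on
    Hessian-vector products, or a sketched Hessian). Illustrative sketch."""
    g = grads[subset].sum(axis=0) / n      # scaled gradient mass of the subset
    return theta + inv_hvp(g)              # single Newton-style correction
```

The appeal of this form is that it never materializes the Hessian, so the same update scales from generalized linear models up to transformer language models.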
Jillian Fisher
Statistics PhD Student
University of Washington
© 2024 Jillian Fisher