Large Scale Reliable Model Editing

As knowledge becomes outdated and new facts emerge every day, model editing helps keep models up to date without the expensive procedure of pre-training from scratch or relying solely on retrieval-based methods. In this project, we focus on model editing methods that modify the parameters of language models. Current model editing methods can perform individual knowledge edits with great precision, but cause significant model degradation as the number of edits scales. We have the following aims:

  1. Large Scale Model Editing - Developing better sequential and batched editing methods, as well as analyzing how existing methods behave in these settings (see the sketch after this list).
  2. Improving Edit Quality - Current model editing methods fail to generalize to logical continuations of the edited fact and to transfer edits across the languages the model supports. We aim to improve the quality of model edits along these lines at both small and large scale.
  3. Model Interpretability - Model editing requires a better understanding of the knowledge-storage mechanisms in LLMs; in working on this problem, we also hope to make these models more interpretable.
  4. Creating Continually Learning and Editable Models - Our moonshot goal is to create models that can be continually updated and edited throughout their lifetime.
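
The distinction between sequential and batched editing in aim 1 can be made concrete with a minimal sketch. The names below (`Edit`, `apply_edit`, `apply_batch`) are hypothetical placeholders for illustration only, not the project's actual API; the point is simply that sequential editing applies updates one after another to an already-edited model (where degradation can compound), while batched editing computes one joint update for many facts.

```python
# Minimal sketch contrasting sequential vs. batched model editing.
# All names here are illustrative placeholders, not a real library API.
from dataclasses import dataclass
from typing import List


@dataclass
class Edit:
    subject: str      # e.g. "Eiffel Tower"
    relation: str     # e.g. "is located in"
    new_object: str   # e.g. "Rome"


def apply_edit(model, edit: Edit):
    """Update the model's parameters to insert a single fact."""
    # ... locate and modify the relevant weights for this one fact ...
    return model


def apply_batch(model, edits: List[Edit]):
    """Compute one joint parameter update that inserts many facts at once."""
    # ... solve a single update covering all edits together ...
    return model


def sequential_editing(model, edits: List[Edit]):
    # Edits accumulate one after another; each edit sees the model
    # already modified by all previous edits, so errors can compound.
    for edit in edits:
        model = apply_edit(model, edit)
    return model


def batched_editing(model, edits: List[Edit]):
    # All edits are applied in a single shot against the original model.
    return apply_batch(model, edits)
```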

News

  Website Credits - Ashok Devireddy, Maochuan Lu