Double Hard-Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang · 30 JUN 2020
Word embeddings inherit strong gender bias from their training data, and downstream models can amplify it further. We propose purifying word embeddings of corpus regularities such as word frequency before inferring and removing the gender subspace, which significantly improves debiasing performance.
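The two-stage idea above can be sketched with plain NumPy. This is a simplified illustration, not the paper's exact procedure: it uses only the top principal component as a proxy for frequency-related regularities and a single definitional word pair (`he`/`she` indices) to infer the gender direction, where the actual method considers multiple components and pairs. The function names are ours.

```python
import numpy as np

def remove_component(vecs, direction):
    # Project out a single unit direction from every embedding row.
    direction = direction / np.linalg.norm(direction)
    return vecs - np.outer(vecs @ direction, direction)

def double_hard_debias(embeddings, he_idx, she_idx):
    # Stage 1: center the embeddings and treat the top principal
    # component as a frequency-related corpus regularity to purify away.
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    purified = remove_component(centered, vt[0])
    # Stage 2: infer the gender direction from a definitional pair
    # on the purified vectors, then hard-debias by projecting it out.
    gender_dir = purified[he_idx] - purified[she_idx]
    return remove_component(purified, gender_dir)
```

After the second projection, the two definitional words become indistinguishable, and every embedding is orthogonal to both the removed frequency direction and the inferred gender direction.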
(Re)Discovering Protein Structure and Function Through Language Modeling
Jesse Vig · 29 JUN 2020
In our study, we show how a language model, trained simply to predict a masked (hidden) amino acid in a protein sequence, recovers high-level structural and functional properties of proteins.
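The training objective described above can be illustrated with a minimal masking routine: hide a random subset of residues and keep the originals as prediction targets, in the style of masked language modeling. This is a generic sketch of the objective, not the study's actual pipeline; the helper name, mask rate, and `<mask>` token are our assumptions.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
MASK = "<mask>"

def make_masked_example(sequence, mask_prob=0.15, seed=0):
    # Replace ~mask_prob of the residues with a mask token and record
    # the hidden residues as {position: original_amino_acid} targets,
    # which a model would be trained to predict from the visible context.
    rng = random.Random(seed)
    tokens = list(sequence)
    targets = {}
    for i, residue in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = residue
            tokens[i] = MASK
    return tokens, targets

tokens, targets = make_masked_example("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

A model trained to recover `targets` from `tokens` never sees structural labels, yet this is the simple objective from which the structural and functional properties are recovered.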
We annually publish and present our findings at top NLP and computer vision conferences.