Douwe Kiela

Contextual AI, Stanford University
Verified email at stanford.edu
Cited by 18902

Supervised learning of universal sentence representations from natural language inference data

A Conneau, D Kiela, H Schwenk, L Barrault… - arXiv preprint arXiv …, 2017 - arxiv.org
Many modern NLP systems rely on word embeddings, previously trained in an unsupervised
manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks …

Retrieval-augmented generation for knowledge-intensive NLP tasks

…, T Rocktäschel, S Riedel, D Kiela - Advances in …, 2020 - proceedings.neurips.cc
Large pre-trained language models have been shown to store factual knowledge in their
parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. …

Winoground: Probing vision and language models for visio-linguistic compositionality

…, A Singh, A Williams, D Kiela… - Proceedings of the …, 2022 - openaccess.thecvf.com
We present a novel task and dataset for evaluating the ability of vision and language models
to conduct visio-linguistic compositional reasoning, which we call Winoground. Given two …

FLAVA: A foundational language and vision alignment model

…, W Galuba, M Rohrbach, D Kiela - Proceedings of the …, 2022 - openaccess.thecvf.com
State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic
pretraining for obtaining good performance on a variety of downstream tasks. Generally, such …

Poincaré embeddings for learning hierarchical representations

M Nickel, D Kiela - Advances in neural information …, 2017 - proceedings.neurips.cc
Representation learning has become an invaluable approach for learning from
symbolic data such as text and graphs. However, state-of-the-art embedding methods typically …

True few-shot learning with language models

E Perez, D Kiela, K Cho - Advances in neural information …, 2021 - proceedings.neurips.cc
Pretrained language models (LMs) perform well on many tasks even when learning from a
few examples, but prior work uses many held-out examples to tune various aspects of …

The hateful memes challenge: Detecting hate speech in multimodal memes

D Kiela, H Firooz, A Mohan… - Advances in neural …, 2020 - proceedings.neurips.cc
This work proposes a new challenge set for multimodal classification, focusing on detecting
hate speech in multimodal memes. It is constructed such that unimodal models struggle and …

No training required: Exploring random encoders for sentence classification

J Wieting, D Kiela - arXiv preprint arXiv:1901.10444, 2019 - arxiv.org
We explore various methods for computing sentence representations from pre-trained word
embeddings without any training, i.e., using nothing but random parameterizations. Our aim is …

Personalizing dialogue agents: I have a dog, do you have pets too?

S Zhang, E Dinan, J Urbanek, A Szlam, D Kiela… - arXiv preprint arXiv …, 2018 - arxiv.org
Chit-chat models are known to have several problems: they lack specificity, do not display a
consistent personality and are often not very captivating. In this work we present the task of …

Adversarial NLI: A new benchmark for natural language understanding

…, E Dinan, M Bansal, J Weston, D Kiela - arXiv preprint arXiv …, 2019 - arxiv.org
We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial
human-and-model-in-the-loop procedure. We show that training models on this new …