author    Cornelia Huck <>    2014-03-17 19:11:35 +0100
committer Paolo Bonzini <>    2014-03-18 17:06:04 +0100
commit    684a0b719ddbbafe1c7e6646b9bc239453a1773d (patch)
tree      eb83541090766873f36f9916e720ff0a85e1eeb2 /virt
parent    93c4adc7afedf9b0ec190066d45b6d67db5270da (diff)
KVM: eventfd: Fix lock order inversion.
When registering a new irqfd, we call its ->poll method to collect any
event that might have previously been pending so that we can trigger it.
This is done under the kvm->irqfds.lock, which means the eventfd's ctx
lock is taken under it.

However, if we get a POLLHUP in irqfd_wakeup, we will be called with the
ctx lock held before getting the irqfds.lock to deactivate the irqfd,
causing lockdep to complain.

Calling the ->poll method does not really need the irqfds.lock, so let's
just move it after we've given up the irqfds.lock in kvm_irqfd_assign().

Signed-off-by: Cornelia Huck <>
Signed-off-by: Paolo Bonzini <>
Diffstat (limited to 'virt')
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index abe4d6043b36..29c2a04e036e 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -391,19 +391,19 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
 	irqfd_update(kvm, irqfd, irq_rt);
 
-	events = f.file->f_op->poll(f.file, &irqfd->pt);
-
 	list_add_tail(&irqfd->list, &kvm->irqfds.items);
 
+	spin_unlock_irq(&kvm->irqfds.lock);
+
 	/*
 	 * Check if there was an event already pending on the eventfd
 	 * before we registered, and trigger it as if we didn't miss it.
 	 */
+	events = f.file->f_op->poll(f.file, &irqfd->pt);
+
 	if (events & POLLIN)
 		schedule_work(&irqfd->inject);
 
-	spin_unlock_irq(&kvm->irqfds.lock);
-
 	/*
 	 * do not drop the file until the irqfd is fully initialized, otherwise
 	 * we might race against the POLLHUP
 	 */
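The pattern applied by the patch can be sketched in userspace C with pthreads. This is a hedged, illustrative sketch only: `registry_lock`, `register_item()`, `poll_pending()`, and `assign()` are hypothetical names standing in for `kvm->irqfds.lock`, `list_add_tail()`, `f.file->f_op->poll()`, and `kvm_irqfd_assign()`; none of them are the real KVM API. The point it demonstrates is the corrected ordering: registration happens under the registry lock, but the poll callback (which in the kernel may take the eventfd ctx lock, the other half of the inversion) runs only after that lock has been dropped.

```c
#include <assert.h>
#include <pthread.h>
#include <poll.h>

/* Stand-in for kvm->irqfds.lock. */
static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;

static int registered;
/* Set only if the poll callback ever ran with registry_lock held,
 * i.e. the old, inversion-prone ordering. */
static int polled_under_lock;

/* Stand-in for list_add_tail(&irqfd->list, &kvm->irqfds.items). */
static void register_item(void)
{
	registered = 1;
}

/* Stand-in for f.file->f_op->poll(): in the kernel this path can take
 * the eventfd ctx lock, so calling it under registry_lock creates the
 * A->B order that conflicts with the POLLHUP path's B->A order. */
static int poll_pending(void)
{
	/* trylock never blocks; if it succeeds, registry_lock was free,
	 * so the safe (post-unlock) ordering was used. */
	if (pthread_mutex_trylock(&registry_lock) == 0) {
		pthread_mutex_unlock(&registry_lock);
		return POLLIN; /* pretend an event was already pending */
	}
	polled_under_lock = 1;
	return POLLIN;
}

/* Stand-in for the fixed kvm_irqfd_assign(): poll only after unlock. */
static int assign(void)
{
	pthread_mutex_lock(&registry_lock);
	register_item();
	pthread_mutex_unlock(&registry_lock);

	/* Check for an event that was pending before we registered,
	 * now that registry_lock is no longer held. */
	return poll_pending();
}
```

Running `assign()` returns `POLLIN` for the pretend-pending event while `polled_under_lock` stays zero, mirroring how the patch keeps the poll call outside the irqfds.lock critical section.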