
Commit 77c9b9d

dwmw2 authored and sean-jc committed
KVM: x86/xen: Use fast path for Xen timer delivery
Most of the time there's no need to kick the vCPU and deliver the timer event through kvm_xen_inject_timer_irqs(). Use kvm_xen_set_evtchn_fast() directly from the timer callback, and only fall back to the slow path if delivering the timer would block, i.e. if kvm_xen_set_evtchn_fast() returns -EWOULDBLOCK. If delivery fails for any other reason, do nothing and just let it fail silently, as that is what the slow path would end up doing anyway.

This gives a significant improvement in timer latency testing (using nanosleep() for various periods and then measuring the actual time elapsed).

However, there was a reason[1] the fast path was dropped when this support was first added: the current code holds vcpu->mutex for all operations on the vcpu->arch.xen.timer_expires field, and the fast path introduces a potential race condition. Avoid that race by ensuring the hrtimer is (temporarily) cancelled before making changes in kvm_xen_start_timer(), and also when reading the values out for KVM_XEN_VCPU_ATTR_TYPE_TIMER.

[1] https://lore.kernel.org/kvm/846caa99-2e42-4443-1070-84e49d2f11d2@redhat.com

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Link: https://lore.kernel.org/r/f21ee3bd852761e7808240d4ecaec3013c649dc7.camel@infradead.org
[sean: massage changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent ee11ab6 commit 77c9b9d
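
For context on the latency claim, here is a minimal sketch of the kind of measurement the changelog describes. The period values, output format, and standalone-program framing are illustrative assumptions, not the actual test used; in a guest using Xen PV timers, nanosleep() is ultimately backed by the timer this patch accelerates, so observed-minus-requested time tracks delivery latency.

/* nanosleep latency sketch: request a sleep, measure what we actually got. */
#include <stdio.h>
#include <time.h>

static long long elapsed_ns(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1000000000LL +
               (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
        /* Illustrative periods from 10us to 10ms. */
        const long periods_ns[] = { 10000, 100000, 1000000, 10000000 };

        for (unsigned int i = 0; i < sizeof(periods_ns) / sizeof(periods_ns[0]); i++) {
                struct timespec req = { 0, periods_ns[i] };
                struct timespec start, end;

                clock_gettime(CLOCK_MONOTONIC, &start);
                nanosleep(&req, NULL);
                clock_gettime(CLOCK_MONOTONIC, &end);

                printf("requested %8ld ns, observed %9lld ns\n",
                       periods_ns[i], elapsed_ns(&start, &end));
        }
        return 0;
}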

1 file changed: +49 -0 lines changed

arch/x86/kvm/xen.c

@@ -134,9 +134,23 @@ static enum hrtimer_restart xen_timer_callback(struct hrtimer *timer)
 {
         struct kvm_vcpu *vcpu = container_of(timer, struct kvm_vcpu,
                                              arch.xen.timer);
+        struct kvm_xen_evtchn e;
+        int rc;
+
         if (atomic_read(&vcpu->arch.xen.timer_pending))
                 return HRTIMER_NORESTART;
 
+        e.vcpu_id = vcpu->vcpu_id;
+        e.vcpu_idx = vcpu->vcpu_idx;
+        e.port = vcpu->arch.xen.timer_virq;
+        e.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
+
+        rc = kvm_xen_set_evtchn_fast(&e, vcpu->kvm);
+        if (rc != -EWOULDBLOCK) {
+                vcpu->arch.xen.timer_expires = 0;
+                return HRTIMER_NORESTART;
+        }
+
         atomic_inc(&vcpu->arch.xen.timer_pending);
         kvm_make_request(KVM_REQ_UNBLOCK, vcpu);
         kvm_vcpu_kick(vcpu);
@@ -146,6 +160,14 @@ static enum hrtimer_restart xen_timer_callback(struct hrtimer *timer)
 
 static void kvm_xen_start_timer(struct kvm_vcpu *vcpu, u64 guest_abs, s64 delta_ns)
 {
+        /*
+         * Avoid races with the old timer firing. Checking timer_expires
+         * to avoid calling hrtimer_cancel() will only have false positives
+         * so is fine.
+         */
+        if (vcpu->arch.xen.timer_expires)
+                hrtimer_cancel(&vcpu->arch.xen.timer);
+
         atomic_set(&vcpu->arch.xen.timer_pending, 0);
         vcpu->arch.xen.timer_expires = guest_abs;
 
@@ -1019,9 +1041,36 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
                 break;
 
         case KVM_XEN_VCPU_ATTR_TYPE_TIMER:
+                /*
+                 * Ensure a consistent snapshot of state is captured, with a
+                 * timer either being pending, or the event channel delivered
+                 * to the corresponding bit in the shared_info. Not still
+                 * lurking in the timer_pending flag for deferred delivery.
+                 * Purely as an optimisation, if the timer_expires field is
+                 * zero, that means the timer isn't active (or even in the
+                 * timer_pending flag) and there is no need to cancel it.
+                 */
+                if (vcpu->arch.xen.timer_expires) {
+                        hrtimer_cancel(&vcpu->arch.xen.timer);
+                        kvm_xen_inject_timer_irqs(vcpu);
+                }
+
                 data->u.timer.port = vcpu->arch.xen.timer_virq;
                 data->u.timer.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
                 data->u.timer.expires_ns = vcpu->arch.xen.timer_expires;
+
+                /*
+                 * The hrtimer may trigger and raise the IRQ immediately,
+                 * while the returned state causes it to be set up and
+                 * raised again on the destination system after migration.
+                 * That's fine, as the guest won't even have had a chance
+                 * to run and handle the interrupt. Asserting an already
+                 * pending event channel is idempotent.
+                 */
+                if (vcpu->arch.xen.timer_expires)
+                        hrtimer_start_expires(&vcpu->arch.xen.timer,
+                                              HRTIMER_MODE_ABS_HARD);
+
                 r = 0;
                 break;

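The extra work in the kvm_xen_vcpu_get_attr() hunk above exists so that a VMM can take a self-consistent snapshot of the timer when saving vCPU state, e.g. for live migration. Below is a rough sketch of the userspace side, assuming a vCPU fd and the existing KVM_XEN_VCPU_GET_ATTR / KVM_XEN_VCPU_SET_ATTR ioctls; the helper names and the omitted error handling are illustrative:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Source side: the kernel cancels the hrtimer and flushes any pending
 * event before filling in port/priority/expires_ns, then re-arms the
 * timer, so the snapshot is self-consistent. */
static int save_xen_timer(int vcpu_fd, struct kvm_xen_vcpu_attr *attr)
{
        attr->type = KVM_XEN_VCPU_ATTR_TYPE_TIMER;
        return ioctl(vcpu_fd, KVM_XEN_VCPU_GET_ATTR, attr);
}

/* Destination side: restoring the same attribute re-arms the timer. If
 * expires_ns is already in the past it simply fires immediately, which
 * is harmless: asserting an already-pending event channel is idempotent. */
static int restore_xen_timer(int vcpu_fd, struct kvm_xen_vcpu_attr *attr)
{
        attr->type = KVM_XEN_VCPU_ATTR_TYPE_TIMER;
        return ioctl(vcpu_fd, KVM_XEN_VCPU_SET_ATTR, attr);
}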