
Add support for cursor events #220

Open
sebcrozet opened this issue Apr 12, 2020 · 2 comments
@sebcrozet
Owner

sebcrozet commented Apr 12, 2020

HTML5 defines cursor events as a generalization of mouse/touch/pen events. We should design a way of supporting them. The question of how the API must be modified and how we handle compatibility with older browsers is not simple and has been discussed extensively in #207.

The discussion should continue on this issue instead.

@alvinhochun
Collaborator

alvinhochun commented May 6, 2020

I think it is worth analysing the backend situation a bit. Currently there are two backends in Kiss3d - glutin (native target) and stdweb (web target):

Regarding glutin:

  • glutin only exposes separate Mouse and Touch events via glutin::event::WindowEvent, which is actually just a re-export from winit.
  • For winit, there is an issue that touches on something like a unified pointer event model: Richer Touch events / unified pointer event model rust-windowing/winit#336
  • The Windows backend of winit appears to already use the native pointer input messages for something, but they do not cover mouse events, since that requires calling the EnableMouseInPointer function, which winit does not do. Also, since Windows 7 does not support pointer input messages, the Windows backend realistically can't move over to them all at once if Windows 7 is to be supported.
  • The stdweb and web-sys backends in winit appear to make use of pointer events to some extent, but I have not looked into them in detail since Kiss3d does not use winit for web targets.
  • Other native platforms appear to support only the separate Mouse and Touch event input model. (Wayland has "pointer input", but it is really just for mouse input.)

Regarding stdweb:


In any case, if Kiss3d is to expose an API with the unified pointer event model, code must be written to adapt the separate Mouse and Touch event model into the unified pointer event model. The only variable is whether this code will be in Kiss3d or in winit.
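The adapter described above could look roughly like this. This is a minimal sketch with hypothetical type names (none of them are actual Kiss3d or winit APIs), just to show the shape of the mapping from the separate model to the unified one:

```rust
// Hypothetical sketch: adapting a separate mouse/touch event model
// into a unified pointer event model. All type names are illustrative.

/// The kind of device that produced a pointer event.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum PointerKind {
    Mouse,
    Touch { id: u64 },
    Pen,
}

/// A unified pointer event.
#[derive(Debug, Clone, Copy, PartialEq)]
struct PointerEvent {
    kind: PointerKind,
    x: f64,
    y: f64,
    pressed: bool,
}

/// Backend-level events, mirroring the separate mouse/touch model.
#[derive(Debug, Clone, Copy)]
enum BackendEvent {
    MouseMoved { x: f64, y: f64 },
    MouseButton { pressed: bool, x: f64, y: f64 },
    Touch { id: u64, x: f64, y: f64, down: bool },
}

/// Fold a backend event into the unified model. Each touch contact
/// becomes its own pointer, identified by `PointerKind::Touch { id }`.
fn to_pointer_event(ev: BackendEvent) -> PointerEvent {
    match ev {
        BackendEvent::MouseMoved { x, y } => PointerEvent {
            kind: PointerKind::Mouse, x, y, pressed: false,
        },
        BackendEvent::MouseButton { pressed, x, y } => PointerEvent {
            kind: PointerKind::Mouse, x, y, pressed,
        },
        BackendEvent::Touch { id, x, y, down } => PointerEvent {
            kind: PointerKind::Touch { id }, x, y, pressed: down,
        },
    }
}

fn main() {
    let ev = to_pointer_event(BackendEvent::Touch {
        id: 1, x: 10.0, y: 20.0, down: true,
    });
    println!("{:?}", ev);
}
```

The mapping itself is mechanical; the real design work is deciding which crate owns it and how pen input (which neither backend currently surfaces) would slot into it.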

At the end, Kiss3d can either:

  • provide APIs for both the unified pointer event model and the separate Mouse and Touch event model, with the expectation that users will choose one of them, and Kiss3d will make sure either works fully on any platform, or
  • replace the APIs of the separate Mouse and Touch event model with APIs of a unified pointer event model as a breaking change.

The former adds the complexity of having to switch between two input event handling implementations within Kiss3d, but it is good for keeping backward compatibility with existing code.
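One way the former option could keep backward compatibility cheaply is to derive the legacy mouse events from the unified pointer events, so only one event pipeline exists internally. A rough sketch, again with purely illustrative type names:

```rust
// Hypothetical sketch: serving a legacy mouse API on top of a unified
// pointer event stream, so existing code keeps working unchanged.
// All type names are illustrative, not real Kiss3d APIs.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Pointer {
    Mouse,
    Touch(u64),
    Pen,
}

/// Unified "pointer moved" event.
#[derive(Debug, Clone, Copy, PartialEq)]
struct PointerMoved {
    pointer: Pointer,
    x: f64,
    y: f64,
}

/// Legacy event kept for backward compatibility with existing code.
#[derive(Debug, Clone, Copy, PartialEq)]
struct CursorPos {
    x: f64,
    y: f64,
}

/// Emit the legacy event only for the device the old API covered
/// (the mouse); touch and pen stay visible only through the new API.
fn downgrade(ev: &PointerMoved) -> Option<CursorPos> {
    match ev.pointer {
        Pointer::Mouse => Some(CursorPos { x: ev.x, y: ev.y }),
        _ => None,
    }
}

fn main() {
    let m = PointerMoved { pointer: Pointer::Mouse, x: 1.0, y: 2.0 };
    let t = PointerMoved { pointer: Pointer::Touch(0), x: 3.0, y: 4.0 };
    println!("{:?} {:?}", downgrade(&m), downgrade(&t));
}
```

With this shape the "two implementations" are really one implementation plus a thin downgrade shim, which keeps the maintenance cost of the dual-API option low.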

I do not consider it an option to provide APIs with the unified pointer event model for use only on supported platforms, because it transfers the complexity to the user, which goes against the idea of keeping it simple to use.

@alvinhochun
Collaborator

But I do wonder: if all this work is needed, is it really worth supporting the unified pointer event model in Kiss3d? The idea of a unified pointer event model is to simplify pointer event handling by letting mouse/touch/pen input share most of the handling code, except for some special cases.

How I imagine the pointer events would be used considering the use case of Kiss3d:

  • Conrod UI - How Conrod and user-defined widgets handle them is not really Kiss3d's business. And even if Kiss3d used the pointer event model, Conrod would still handle mouse and touch separately.
  • Camera controls (panning and rotation) - I don't know how ergonomic touchscreen camera controls would be, but I imagine they would be vastly different from using the mouse, perhaps even involving on-screen buttons and controls. This means they are unlikely to share much handling code. Pen input, however, can be handled the same way as mouse input (pens usually have at least one side button and an eraser tip).
  • Object picking and dragging - For this purpose mouse, touch and pen should be able to share code. However, since touch might also be used for camera controls, there likely need to be extra condition checks to decide whether to start dragging an object or to control the camera.
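The extra condition check in the last point boils down to a small dispatch step: on a touch-down, run the picking query first, and only fall back to camera control when nothing was hit. A minimal sketch, assuming a hypothetical picking result (not an actual Kiss3d API):

```rust
// Hypothetical sketch of dispatching a touch-down between object
// dragging and camera control, based on a picking result.

#[derive(Debug, PartialEq)]
enum TouchAction {
    /// Start dragging the picked object (by some object index).
    DragObject(usize),
    /// No object under the touch: the gesture controls the camera.
    ControlCamera,
}

/// Decide what a touch-down should do. `picked_object` would come
/// from a ray-cast / picking query at the touch position.
fn dispatch_touch(picked_object: Option<usize>) -> TouchAction {
    match picked_object {
        Some(idx) => TouchAction::DragObject(idx),
        None => TouchAction::ControlCamera,
    }
}

fn main() {
    println!("{:?}", dispatch_touch(Some(3)));
    println!("{:?}", dispatch_touch(None));
}
```

Multi-touch complicates this (a second finger landing mid-drag probably should not restart the camera gesture), but the single-touch case really is this small.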

I don't exactly see the benefit here, other than that the current APIs do not provide direct support for pen input.
