
Disabling "Embed Subwindows" breaks manual input sent via get_viewport().push_input() in Godot 4.4 #103361

Open
kb173 opened this issue Feb 27, 2025 · 6 comments

Comments


kb173 commented Feb 27, 2025

Tested versions

  • Reproducible in Godot 4.4-beta1 and later
  • Not reproducible in Godot 4.3-stable and earlier

System information

Tested on Arch Linux (KDE) and Windows 11

Issue description

I manually create an InputEventMouseButton and push it via get_viewport().push_input to press a GUI button. The button is pressed as expected on Godot 4.3 and earlier, as well as on Godot 4.4 as long as the "Embed Subwindows" option is enabled in the project settings.

However, when "Embed Subwindows" is disabled, the behavior has changed since Godot 4.4: the button does not react to the pushed input, even though the "Misc" panel in the Debugger correctly identifies the last clicked control as the Button.

The button neither reacts visually nor does it emit signals in this case.

Steps to reproduce

We manually create mouse clicks like this:

# Simulate a left mouse button press at the target UI element's position
var event = InputEventMouseButton.new()
event.pressed = true
event.button_index = 1  # MOUSE_BUTTON_LEFT
event.position = Vector2(200, 50)
event.global_position = Vector2(200, 50)

get_viewport().push_input(event, false)

await get_tree().process_frame

# Send a mouse release event immediately after
var release_event = event.duplicate()
release_event.pressed = false

get_viewport().push_input(release_event, false)

The position must correspond to the location of a UI element. With "Embed Subwindows" enabled, this code presses the UI node at that location as expected. Without "Embed Subwindows", the UI does not react to the manually created event (even though it does react to a normal mouse click).

Minimal reproduction project (MRP)

viewport-input-test.zip

Pressing the "hallo" button manually prints "hallo".
Pressing the "press the other button" button causes the "hallo" button not to be pressed, even though it should be. When setting window/subwindows/embed_subwindows to true (either in the project settings GUI or in project.godot), "press the other button" does cause "hallo" to be pressed, printing "hallo" to the output.

@matheusmdx
Contributor

Seems to be a regression between 4.4 dev 2 and 4.4 dev 3, bisecting.

@matheusmdx
Contributor

Bisected to #93500, CC @anniryynanen


@anniryynanen
Contributor

Oh dear. I wish I could look into this, but I don't have the capacity at the moment. #93500 has an updated MRP that should make it easy to check that a fix for this issue doesn't regress that fix.

@Sauermann
Contributor

Sauermann commented Feb 28, 2025

The issue is that #93500 was implemented under the assumption that mouse-button presses only activate a Button node if the mouse is hovering over the button. With your MRP, this assumption fails when the mouse is not over the first button. (Try selecting the second button, moving the mouse over the first button, and activating the second button with the Space key in your MRP.)

Workaround 1:
Before the button-down event, send a mouse-motion event to Vector2(200, 50), so that the first button gains the mouse-hovered status before the button press. (This will probably need additional mouse-position cleanup after the button release.)
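
A minimal sketch of workaround 1 (not tested; it assumes the target button sits at Vector2(200, 50), as in the original snippet):

# Hover the target first so it gains mouse-over status before the press.
var motion_event = InputEventMouseMotion.new()
motion_event.position = Vector2(200, 50)
motion_event.global_position = Vector2(200, 50)
get_viewport().push_input(motion_event, false)

await get_tree().process_frame

# ... then send the press/release events from the issue description ...
# Afterwards, another motion event could move the synthetic cursor away again
# to clean up the hover state.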

Workaround 2:
Let the first Button grab focus and afterwards send a ui_accept action to the viewport. That way, the hover status is not checked, but the button still receives the activation input event. (The release of the mouse button over the second button could be problematic.)
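
A rough sketch of workaround 2 (the node path $HalloButton is hypothetical; this assumes pressing and releasing ui_accept on the focused button is enough to activate it):

# Give the target button keyboard focus, then send ui_accept to the viewport.
$HalloButton.grab_focus()

var accept_event = InputEventAction.new()
accept_event.action = "ui_accept"
accept_event.pressed = true
get_viewport().push_input(accept_event, false)

await get_tree().process_frame

var accept_release = accept_event.duplicate()
accept_release.pressed = false
get_viewport().push_input(accept_release, false)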

Workaround 3:
Instead of activating the first button, just call the same functionality from the second button. (This might not be what you want.)
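
A sketch of workaround 3 (the handler names are hypothetical):

# Instead of simulating a click on the first button, the second button's handler
# simply calls the function the first button's "pressed" signal is connected to.
func _on_other_button_pressed():
	_on_hallo_pressed()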

Conceptual problem:
I believe that sending input events in order to activate buttons is conceptually problematic, because you essentially send an input event while a different input event is being processed, and the two events might interact with each other in unexpected ways.

Proposed solution:
Extend the API of the BaseButton class with an additional function called activate_button, which allows users to trigger a button from script. That way it would not be necessary to create input events, and it should be cleaner to use.


kb173 commented Mar 1, 2025

Thank you so much for the quick responses and suggestions!

I believe workaround 1 should work for us, but the others don't. The reason why we need this (and why I think it should remain supported) is that we use it to implement an alternative input method. The setup is a projector (which projects a separate window of the game, which is why we need "Embed Subwindows" set to false) combined with a camera; a Python program reads the camera feed to detect objects on the surface the projector projects onto. It essentially behaves like a touchscreen: placing objects onto the projection should generate inputs.

But since we want to control the main window of the game with mouse and keyboard at the same time as this projector-camera input method is used, we can't send OS-level mouse input, but instead send WebSocket messages to our Godot game. The game turns these messages into InputEvents. That way, all our UI is controllable with this funky input method (while still being able to debug it with a mouse if needed), and mouse-keyboard input in the other window is unaffected.
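
For illustration, a minimal sketch of such a bridge (the message format and names here are assumptions for the example, not our actual protocol; it expects JSON like {"type": "press", "x": 200, "y": 50}):

var socket = WebSocketPeer.new()  # connection setup omitted

func _process(_delta):
	socket.poll()
	if socket.get_ready_state() != WebSocketPeer.STATE_OPEN:
		return
	while socket.get_available_packet_count() > 0:
		var msg = JSON.parse_string(socket.get_packet().get_string_from_utf8())
		if msg == null:
			continue
		# Turn the message into a synthetic mouse press or release.
		var event = InputEventMouseButton.new()
		event.button_index = MOUSE_BUTTON_LEFT
		event.pressed = msg["type"] == "press"
		event.position = Vector2(msg["x"], msg["y"])
		event.global_position = event.position
		get_viewport().push_input(event, false)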

That might be too abstract to imagine the use case, so here is a bit more detail: the separate window, which is projected and interacted with through objects detected by the camera, displays a map. Objects represent things to be placed on that map (e.g. a wind turbine). The main window is a 3D visualization of the same map section. Here's a video of what I'm talking about: https://landscapelab.boku.ac.at/videos/wka_placement.mp4

So I do think that manually creating "click" inputs for controlling the UI has legitimate use cases with alternative input methods.

@akien-mga akien-mga moved this from Unassessed to Bad in 4.x Release Blockers Mar 2, 2025
@AThousandShips AThousandShips modified the milestones: 4.4, 4.5 Mar 3, 2025

kb173 commented Mar 4, 2025

I don't believe @Sauermann's workaround 1 works, unless I'm doing something wrong. This is the code I added before the click events:

var motion_event = InputEventMouseMotion.new()
motion_event.position = Vector2(200, 50)
motion_event.global_position = Vector2(200, 50)

get_viewport().push_input(motion_event, false)

await get_tree().process_frame

This does not change any behavior.
A workaround that does work for us (but is cumbersome to implement) is to add the following script to every button that needs to be interactable this way:

extends Button

func _input(event):
	if event is InputEventMouseButton and event.pressed:
		if get_global_rect().has_point(event.position):
			pressed.emit()
			get_viewport().set_input_as_handled()

This makes sense, since the script bypasses the button's own logic. But it also means that some of the usual behavior is not replicated (visibility checks, grabbing focus, the visual pressed state, etc.).
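
If needed, the script could approximate a bit more of the normal behavior. A sketch (assuming only left-click activation of visible, enabled buttons matters):

extends Button

func _input(event):
	if event is InputEventMouseButton and event.pressed and event.button_index == MOUSE_BUTTON_LEFT:
		if is_visible_in_tree() and not disabled and get_global_rect().has_point(event.position):
			grab_focus()  # mimic the focus grab of a real click
			pressed.emit()
			get_viewport().set_input_as_handled()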
