
inotify error when multiple Zilla instances are started in K8s Pods on a Portainer.io host #1081

Closed
vordimous opened this issue Jun 6, 2024 · 0 comments · Fixed by #1107
Comments

@vordimous
Contributor

Describe the bug
An inotify error occurs when multiple Zilla instances are started in K8s Pods on a Portainer.io host. The issue is likely due to a limit imposed by the host, as described in this Portainer forum post.

java.io.IOException: User limit of inotify instances reached or too many open files
	at java.base/sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:62)
	at java.base/sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:53)
	at io.aklivity.zilla.runtime.engine@0.9.82/io.aklivity.zilla.runtime.engine.internal.registry.FileWatcherTask.<init>(FileWatcherTask.java:52)
	at io.aklivity.zilla.runtime.engine@0.9.82/io.aklivity.zilla.runtime.engine.Engine.<init>(Engine.java:216)
	at io.aklivity.zilla.runtime.engine@0.9.82/io.aklivity.zilla.runtime.engine.EngineBuilder.build(EngineBuilder.java:147)
	at io.aklivity.zilla.runtime.command.start@0.9.82/io.aklivity.zilla.runtime.command.start.internal.airline.ZillaStartCommand.run(ZillaStartCommand.java:161)
	at io.aklivity.zilla.runtime.command@0.9.82/io.aklivity.zilla.runtime.command.internal.ZillaMain$Invoker.invoke(ZillaMain.java:69)
	at io.aklivity.zilla.runtime.command@0.9.82/io.aklivity.zilla.runtime.command.internal.ZillaMain.invoke(ZillaMain.java:40)
	at io.aklivity.zilla.runtime.command@0.9.82/io.aklivity.zilla.runtime.command.internal.ZillaMain.main(ZillaMain.java:34)

Additional context
The fix is likely to update the host setting, but in most cases, when a Zilla image is deployed to a production cluster, the running Zilla instances won't need to watch the filesystem for changes. Any rollout of config updates would tear down the existing pods and create new ones with the new config. Adding a default option inside the container to not watch the filesystem for changes would also prevent this error (a rough sketch of that idea follows below).
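
As a minimal, stand-alone sketch of the container-side default suggested above: the inotify instance is allocated when the JDK's WatchService is created (the LinuxWatchService constructor in the stack trace), so gating that creation behind an opt-in switch avoids consuming an inotify handle at all in pods that never reload their config. The zilla.engine.config.watch property name and the class below are hypothetical illustrations of the approach, not Zilla's actual configuration surface.

	import java.io.IOException;
	import java.nio.file.FileSystems;
	import java.nio.file.Path;
	import java.nio.file.Paths;
	import java.nio.file.StandardWatchEventKinds;
	import java.nio.file.WatchService;

	// Sketch: only create an inotify-backed WatchService when filesystem
	// watching is explicitly enabled, so containers that never reload their
	// config do not count against the host's inotify instance limit.
	// NOTE: "zilla.engine.config.watch" is a hypothetical property name.
	public final class ConfigWatchSketch
	{
	    public static void main(String[] args) throws IOException, InterruptedException
	    {
	        boolean watchEnabled = Boolean.parseBoolean(
	            System.getProperty("zilla.engine.config.watch", "false"));

	        Path configDir = Paths.get(args.length > 0 ? args[0] : ".");

	        if (!watchEnabled)
	        {
	            // Default inside the container: load the config once and never
	            // register a watch, so no inotify instance is ever created.
	            System.out.println("Config watching disabled; loaded config once from " + configDir);
	            return;
	        }

	        // Opt-in path: this is where the inotify instance is allocated and
	        // where "User limit of inotify instances reached" would surface.
	        try (WatchService watcher = FileSystems.getDefault().newWatchService())
	        {
	            configDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
	            System.out.println("Watching " + configDir + " for config changes...");
	            watcher.take(); // block until the first change event
	        }
	    }
	}

Each WatchService on Linux is backed by its own inotify instance, which counts against the per-user fs.inotify.max_user_instances limit on the host; disabling the watch by default means only instances that opt in consume one. Raising that host sysctl remains the alternative fix when config hot-reload is actually needed.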

@vordimous vordimous added the bug Something isn't working label Jun 6, 2024
@jfallows jfallows self-assigned this Jun 27, 2024