Describe the bug
Getting an inotify error when multiple Zilla instances are started in Kubernetes pods on a Portainer.io host. The issue is likely due to a per-user inotify limit imposed by the host, as described in this Portainer forum post.
java.io.IOException: User limit of inotify instances reached or too many open files
at java.base/sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:62)
at java.base/sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:53)
at io.aklivity.zilla.runtime.engine@0.9.82/io.aklivity.zilla.runtime.engine.internal.registry.FileWatcherTask.<init>(FileWatcherTask.java:52)
at io.aklivity.zilla.runtime.engine@0.9.82/io.aklivity.zilla.runtime.engine.Engine.<init>(Engine.java:216)
at io.aklivity.zilla.runtime.engine@0.9.82/io.aklivity.zilla.runtime.engine.EngineBuilder.build(EngineBuilder.java:147)
at io.aklivity.zilla.runtime.command.start@0.9.82/io.aklivity.zilla.runtime.command.start.internal.airline.ZillaStartCommand.run(ZillaStartCommand.java:161)
at io.aklivity.zilla.runtime.command@0.9.82/io.aklivity.zilla.runtime.command.internal.ZillaMain$Invoker.invoke(ZillaMain.java:69)
at io.aklivity.zilla.runtime.command@0.9.82/io.aklivity.zilla.runtime.command.internal.ZillaMain.invoke(ZillaMain.java:40)
at io.aklivity.zilla.runtime.command@0.9.82/io.aklivity.zilla.runtime.command.internal.ZillaMain.main(ZillaMain.java:34)
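For context on why several pods on one node can trigger this: on Linux, each Java WatchService is backed by an inotify instance, and the stack trace shows Zilla's FileWatcherTask opening one per engine. Containers share the host kernel, so pods running as the same UID draw from the same fs.inotify.max_user_instances budget, which defaults to a fairly small value (often 128). Below is a minimal sketch (not Zilla code) that reproduces the same IOException by exhausting that limit:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.WatchService;
import java.util.ArrayList;
import java.util.List;

// Minimal reproduction sketch: keep opening WatchService instances until the
// kernel's per-user inotify limit is hit, which fails with the same message as
// the stack trace above ("User limit of inotify instances reached ...").
public class InotifyLimitDemo
{
    public static void main(String[] args)
    {
        List<WatchService> watchers = new ArrayList<>();
        try
        {
            while (true)
            {
                // Roughly what FileWatcherTask does once per engine instance
                watchers.add(FileSystems.getDefault().newWatchService());
            }
        }
        catch (IOException ex)
        {
            System.out.println("Failed after " + watchers.size() + " watchers: " + ex.getMessage());
        }
    }
}
```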
Additional context
The fix is likely to update the host setting, but in the majority of cases, when a Zilla image is deployed to a production cluster, the running Zilla instances won't need to watch the filesystem for changes: any rollout of config updates tears down the pods and creates new ones with the new config. Adding a default option inside the container to not watch the filesystem for changes would also prevent this error; a rough sketch of that idea follows below.
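The sketch below illustrates what such an opt-out could look like. The names here (ZILLA_WATCH_CONFIG, ConfigWatchFactory) are hypothetical and not part of Zilla; the point is only that watching can default to off in containers and degrade gracefully instead of failing startup when the host's inotify limit is exhausted:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.WatchService;
import java.util.Optional;

// Hypothetical sketch only: ZILLA_WATCH_CONFIG and this class are illustrative,
// not existing Zilla configuration.
public final class ConfigWatchFactory
{
    public static Optional<WatchService> create()
    {
        // Default to no watching: immutable container deployments roll out config
        // changes by replacing pods, so a filesystem watch adds nothing there.
        if (!Boolean.parseBoolean(System.getenv().getOrDefault("ZILLA_WATCH_CONFIG", "false")))
        {
            return Optional.empty();
        }

        try
        {
            return Optional.of(FileSystems.getDefault().newWatchService());
        }
        catch (IOException ex)
        {
            // Fall back to not watching rather than aborting startup when the
            // per-user inotify limit on the host is already exhausted.
            System.err.println("Config watching disabled: " + ex.getMessage());
            return Optional.empty();
        }
    }
}
```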