Is your feature request related to a problem? Please describe.
First of all, Tokio is awesome and very flexible -- we have used it extensively in InfluxDB IOx, Apache Arrow DataFusion and elsewhere and it works great. Thank you very much ❤️
In DataFusion, InfluxDB IOx, and other applications there is a mix of CPU-bound and I/O-bound work, and we use thread pools to manage the execution of those tasks.
I realize the design goals and optimization point for tokio are a large number of I/O tasks, but its core threading model and support for the async / Future machinery of the Rust language and ecosystem make it a very compelling thread (task) pool implementation as well. We have used tokio effectively for both types of work and found significant value in doing so. However, on many occasions over the last few years, the following note from https://docs.rs/tokio/1.11.0/tokio/#cpu-bound-tasks-and-blocking-code
> If your code is CPU-bound and you wish to limit the number of threads used to run it, you should run it on another thread pool such as rayon
has generated significant discussion and confusion, as it implies to some that the tokio Runtime should never be used for CPU-bound tasks.
I believe the intent of this section is to warn people against using the same thread pool (Runtime) for I/O and CPU-bound work, which is definitely sage advice to avoid significant and potentially unbounded latencies when responding to I/O. However, I don't think there is anything specific to tokio that prevents it from being used for CPU-bound work.
Describe the solution you'd like
Make it clear in the documentation that the situation to avoid is using the same tokio thread pool (Runtime) for both I/O and CPU-bound work, and point readers to the APIs for creating their own thread pools.
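For illustration only (not proposed wording for the docs), here is a minimal sketch of the kind of API usage the documentation could point at: building a second multi-threaded Runtime with tokio::runtime::Builder and reserving it for CPU-bound work. The worker counts and thread name are arbitrary choices for the example.

```rust
use tokio::runtime::Builder;

fn main() {
    // Runtime reserved for I/O-bound tasks (network, timers, etc.).
    let io_runtime = Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()
        .expect("failed to build I/O runtime");

    // A second, independent Runtime whose workers only run CPU-bound
    // tasks, so long computations cannot delay I/O polling.
    let cpu_runtime = Builder::new_multi_thread()
        .worker_threads(4)
        .thread_name("cpu-bound-worker")
        .build()
        .expect("failed to build CPU runtime");

    io_runtime.block_on(async {
        // Hand the heavy computation to the dedicated pool and await
        // its JoinHandle from the I/O runtime.
        let handle = cpu_runtime.spawn(async {
            (0..10_000_000u64).map(|x| x % 7).sum::<u64>()
        });
        let total = handle.await.expect("CPU task panicked");
        println!("total = {}", total);
    });
}
```

Spawning onto a Runtime from outside of it is supported, so the handle returned by cpu_runtime.spawn(...) can be awaited from the I/O runtime like any other task.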
Additional context
I think this advice may be from an earlier time when it wasn't possible (or easy?) to create a separate tokio Runtime for CPU-bound tasks (which indeed serves as a "dedicated thread pool"). We have created a wrapper, DedicatedExecutor, to do this for us in IOx and it has worked well for us in practice.
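As a rough sketch of the pattern (deliberately simplified, and not the actual DedicatedExecutor implementation), such a wrapper can own the dedicated Runtime and expose a spawn method; the CpuExecutor name and its methods below are made up for this example.

```rust
use tokio::runtime::{Builder, Runtime};
use tokio::task::JoinHandle;

/// Hypothetical wrapper that owns a Runtime dedicated to CPU-bound work.
struct CpuExecutor {
    runtime: Runtime,
}

impl CpuExecutor {
    /// Build a multi-threaded runtime whose workers are reserved for
    /// CPU-bound futures.
    fn new(threads: usize) -> Self {
        let runtime = Builder::new_multi_thread()
            .worker_threads(threads)
            .thread_name("cpu-executor")
            .build()
            .expect("failed to build dedicated runtime");
        Self { runtime }
    }

    /// Run a CPU-bound future on the dedicated pool; the JoinHandle can be
    /// awaited from the application's I/O runtime.
    fn spawn<F>(&self, fut: F) -> JoinHandle<F::Output>
    where
        F: std::future::Future + Send + 'static,
        F::Output: Send + 'static,
    {
        self.runtime.spawn(fut)
    }
}

fn main() {
    let exec = CpuExecutor::new(4);

    // The application's normal I/O runtime.
    let io_runtime = Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("failed to build I/O runtime");

    let result = io_runtime.block_on(async {
        exec.spawn(async { (0..1_000_000u64).map(|x| x * x % 1_000_003).sum::<u64>() })
            .await
            .expect("CPU task panicked")
    });
    println!("result = {}", result);
    // Both runtimes are dropped here, outside any async context.
}
```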
Examples of confusion / questions: