I have been using the memory_usage function to get the peak memory usage of an ETL job, but noticed that it actually caused increased memory usage.
It looks like it creates a copy of the inputs to the function being profiled, causing some duplication in memory. My function simply does some merging and processing of some columns; a script very close to my testing is included under "Sample script to reproduce" below.
I spotted a significant memory increase due to memory_profiler in this issue: py-pdf/fpdf2#641 (comment). It contains some minimal code to reproduce the problem.
I'm not sure if this is really a leak, or even a bug, but just importing the library increases the RSS memory usage by 15 MiB to 25 MiB.
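One way to check the import cost, as a rough sketch (not taken from the original report), assuming psutil is available in the environment:

```python
# Record RSS before and after importing memory_profiler and print the delta.
import os
import psutil

proc = psutil.Process(os.getpid())
rss_before = proc.memory_info().rss

import memory_profiler  # noqa: F401,E402

rss_after = proc.memory_info().rss
print(f"RSS increase from import: {(rss_after - rss_before) / 2**20:.1f} MiB")
```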
Sample script to reproduce
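A minimal sketch of the kind of test described above, assuming pandas for the merge and the documented `memory_usage((func, args), max_usage=True)` call form; the data sizes and column names are illustrative, not the original script:

```python
# Sketch: build two moderately large DataFrames, then run the same merge/
# column-processing step directly and through memory_profiler.memory_usage.
import numpy as np
import pandas as pd
from memory_profiler import memory_usage


def etl(left: pd.DataFrame, right: pd.DataFrame) -> pd.DataFrame:
    # Merge and do some simple column processing, similar to the real job.
    merged = left.merge(right, on="key")
    merged["total"] = merged["value_x"] + merged["value_y"]
    return merged


if __name__ == "__main__":
    n = 5_000_000
    left = pd.DataFrame({"key": np.arange(n), "value_x": np.random.rand(n)})
    right = pd.DataFrame({"key": np.arange(n), "value_y": np.random.rand(n)})

    # Without memory_usage: just run the job and observe the process RSS.
    etl(left, right)

    # With memory_usage: pass the function and its arguments to the profiler
    # and ask for the peak usage (may return a float or a single-item list,
    # depending on the memory_profiler version).
    peak = memory_usage((etl, (left, right)), max_usage=True)
    print("peak memory (MiB):", peak)
```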
Without memory_usage
With memory_usage
Is this a bug or expected behaviour? Is there any way to avoid this?