feat: dynamic frame skipping #29
Conversation
Force-pushed a7949b6 to cd71ada
I agree that this architecture is preferable to the previous one - just left some comments.
```python
class ComfyStreamClient:
    def __init__(self, **kwargs):
        config = Configuration(**kwargs)
        self.comfy_client = EmbeddedComfyClient(config)
        self.prompt = None
        self._lock = asyncio.Lock()
        self.input_queue = deque(maxlen=max_queue_size)
        self.output_queue = asyncio.Queue()
```
I'm not sure that you need two queues here - I think you can just use a single asyncio.Queue to manage the buffer of input frames since you're not consuming anything from the output queue right now.
If the queue is full, the last frame is returned thereby also dropping the current input frame. Otherwise, the input frame is added to the queue and the processor task can handle it.
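A minimal sketch of that single-queue idea, assuming a bounded `asyncio.Queue` (the class and method names here are illustrative, not from the PR):

```python
import asyncio


class FrameBuffer:
    """Sketch: one bounded asyncio.Queue; skip the incoming frame when full."""

    def __init__(self, maxsize: int = 2):
        self.queue = asyncio.Queue(maxsize=maxsize)
        self.last_frame = None  # most recently produced output frame

    def put_frame(self, frame):
        """If the buffer is full, return the last output (the current input is
        dropped); otherwise enqueue the frame for the processor task."""
        try:
            self.queue.put_nowait(frame)
        except asyncio.QueueFull:
            return self.last_frame  # skip: reuse the previous output
        return None  # frame accepted; the processor task will handle it
```

With this shape there is no separate output queue; the caller only ever sees either the accepted-frame path or the reused last frame.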
@yondonfu See my comment below about two queues #29 (comment). Should we proceed with merge?
```python
        if self.processor_task is None:
            self.processor_task = asyncio.create_task(self._process_queue())

    async def _process_queue(self):
```
This runs an infinite loop but we probably want some way for it to exit cleanly i.e. when the event loop closes. I think you can catch an asyncio.CancelledError and just re-raise the error. For testing, worth adding a print statement for debugging to see if the loop is actually exiting cleanly.
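A sketch of the suggested shape, assuming the processor consumes from an `asyncio.Queue` (names are illustrative):

```python
import asyncio


async def process_queue(queue: asyncio.Queue):
    """Sketch: infinite consumer loop that exits cleanly on task cancellation."""
    try:
        while True:
            item = await queue.get()
            try:
                ...  # run inference on the item here
            except Exception as e:
                print(f"Error processing queue item: {e}")  # keep looping
            finally:
                queue.task_done()
    except asyncio.CancelledError:
        print("Stopped frame processor loop")  # debug: confirm clean exit
        raise  # re-raise so the task is properly marked as cancelled
```

Cancelling the task (e.g. when the event loop shuts down) raises `CancelledError` at the `await queue.get()`, which the outer handler logs and re-raises.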
Added handling for CancelledError, which breaks the loop, while other exceptions are caught so processing can continue.
comfystream/src/comfystream/client.py
Lines 43 to 50 in a070e88
```python
            except Exception as e:
                logger.error(f"Error processing queue item: {str(e)}")
                if output_fut and not output_fut.done():
                    output_fut.set_exception(e)
                self.output_queue.task_done()
        except asyncio.CancelledError:
            logger.info("Stopped frame processor loop")
            raise
```
```python
        output_fut = asyncio.Future()
        tensor_cache.outputs.append(output_fut)
        self.input_queue.append(output_fut)
```
Reviewing this makes me realize there is probably a problem with how locking works right now. I'm not that familiar with the details at the moment, or with whether the lock usage is truly needed, but AFAICT given the current structure there can only be a single queue_prompt() call at a time, meaning that appending to the input queue will not happen until the previous queue_prompt() call completes. I don't think that is the desired behavior...you probably want to keep appending to the queue (until it's full) even if the previous queue_prompt() is still awaiting the output of an input frame.
If you just want to enable frame skipping then I suggest addressing that in a separate PR.
@yondonfu That's correct, I believe this is why the solution only works with two queues:
- Incoming frames are appended to the input queue, but we do not wait for it to return. This allows for a buffer of incoming frames that get dropped when the queue fills up:
comfystream/src/comfystream/client.py
Line 63 in ef67569
```python
self.input_queue.append(output_fut)
```
Then, we must await the output queue for the result, since we need to return a frame from _process_queue
comfystream/src/comfystream/client.py
Line 69 in ef67569
```python
await self.output_queue.put((input, output_fut))
```
If we want to address this differently, I think it would affect how apps are implementing ComfyStreamClient, which could be a breaking change. Are you fine with addressing the duplicate queues in a separate PR after we implement release tagging?
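The two-queue pattern described above can be sketched roughly as follows; the class and method names are illustrative, and the real client ties the futures to inference results:

```python
import asyncio
from collections import deque


class TwoQueueClient:
    """Sketch of the two-queue pattern: a bounded deque buffers pending inputs
    without blocking the caller (old entries drop when it fills), while an
    asyncio.Queue hands (frame, future) pairs to the processor task."""

    def __init__(self, max_queue_size: int = 2):
        self.input_queue = deque(maxlen=max_queue_size)
        self.output_queue = asyncio.Queue()

    async def submit(self, frame):
        fut = asyncio.get_running_loop().create_future()
        self.input_queue.append(fut)  # non-blocking: caller is never held up
        await self.output_queue.put((frame, fut))  # processor consumes this
        return fut  # awaited later for the processed frame
```

The key property is that `submit` returns immediately even while earlier frames are still being processed, which is what the single-lock structure prevents.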
Force-pushed 8f7d446 to a070e88
Force-pushed a070e88 to ef67569
Closing in favor of #10
This change adds a queue in ComfyStreamClient to dynamically skip inference on frames to achieve maximum FPS.
Inference dynamically runs on as many frames per second as possible; however, because the last processed frame is returned for skipped frames, the output FPS will mirror the source.