Commit 6c529f5

fix: merge conflicts

2 parents: b4f871a + 9583a28

18 files changed: +1114 -90 lines

.gitignore (+14 -1)

```diff
@@ -2,4 +2,17 @@ __pycache__
 *.egg-info
 build
 .DS_STORE
-comfyui*
+comfyui*
+
+# VS Code settings
+.vscode/
+*.code-workspace
+launch.json
+
+# Environment files
+.env
+.env.local
+.env.*.local
+.env.development
+.env.test
+.env.production
```

README.md (+44 -37)

````diff
@@ -5,16 +5,23 @@ comfystream is a package for running img2img [Comfy](https://www.comfy.org/) wor
 This repo also includes a WebRTC server and UI that uses comfystream to support streaming from a webcam and processing the stream with a workflow JSON file (API format) created in ComfyUI. If you have an existing ComfyUI installation, the same custom nodes used to create the workflow in ComfyUI will be re-used when processing the video stream.
 
 - [comfystream](#comfystream)
-  - [Install package](#install-package)
-  - [Custom Nodes](#custom-nodes)
-  - [Usage](#usage)
-  - [Run tests](#run-tests)
-  - [Run server](#run-server)
-  - [Run UI](#run-ui)
-  - [Limitations](#limitations)
-  - [Troubleshoot](#troubleshoot)
+  - [Quick Start](#quick-start)
+  - [Install package](#install-package)
+  - [Custom Nodes](#custom-nodes)
+  - [Usage](#usage)
+  - [Run tests](#run-tests)
+  - [Run server](#run-server)
+  - [Run UI](#run-ui)
+  - [Limitations](#limitations)
+  - [Troubleshoot](#troubleshoot)
 
-# Install package
+## Quick Start
+
+The fastest way to get started is to follow [this tutorial](https://livepeer.notion.site/ComfyStream-Dev-Environment-Setup-15d0a3485687802e9528d26050142d82) by @ryanontheinside.
+
+For additional information, refer to the remaining sections below.
+
+## Install package
 
 **Prerequisites**
 
@@ -24,21 +31,21 @@ A separate environment can be used to avoid any dependency issues with an existi
 
 Create the environment:
 
-```
+```bash
 conda create -n comfystream python=3.11
 ```
 
 Activate the environment:
 
-```
+```bash
 conda activate comfystream
 ```
 
 Make sure you have [PyTorch](https://pytorch.org/get-started/locally/) installed.
 
 Install `comfystream`:
 
-```
+```bash
 pip install git+https://github.com/yondonfu/comfystream.git
 
 # This can be used to install from a local repo
@@ -47,75 +54,75 @@ pip install git+https://github.com/yondonfu/comfystream.git
 # pip install -e .
 ```
 
-## Custom Nodes
+### Custom Nodes
 
 comfystream uses a few custom nodes to support running workflows.
 
 Copy the custom nodes into the `custom_nodes` folder of your ComfyUI workspace:
 
-```
+```bash
 cp -r nodes/* custom_nodes/
 ```
 
 For example, if your ComfyUI workspace is under `/home/user/ComfyUI`:
 
-```
+```bash
 cp -r nodes/* /home/user/ComfyUI/custom_nodes
 ```
 
-## Usage
+### Usage
 
 See `example.py`.
 
-# Run tests
+## Run tests
 
 Install dev dependencies:
 
-```
+```bash
 pip install .[dev]
 ```
 
 Run tests:
 
-```
+```bash
 pytest
 ```
 
-# Run server
+## Run server
 
 Install dependencies:
 
-```
+```bash
 pip install -r requirements.txt
 ```
 
 If you have existing custom nodes in your ComfyUI workspace, you will need to install their requirements in your current environment:
 
-```
+```bash
 python install.py --workspace <COMFY_WORKSPACE>
 ```
 
 Run the server:
 
-```
+```bash
 python server/app.py --workspace <COMFY_WORKSPACE>
 ```
 
 Show additional options for configuring the server:
 
-```
+```bash
 python server/app.py -h
 ```
 
 **Remote Setup**
 
 A local server should connect with a local UI out-of-the-box. It is also possible to run a local UI and connect with a remote server, but there may be additional dependencies.
 
-In order for the remote server to connect with another peer (i.e. a browser) without any additional dependencies you will need to allow inbound/outbound UDP traffic on ports 1024-65535 ([source](https://github.com/aiortc/aiortc/issues/490#issuecomment-788807118)).
+In order for the remote server to connect with another peer (i.e. a browser) without any additional dependencies you will need to allow inbound/outbound UDP traffic on ports 1024-65535 ([source](https://github.com/aiortc/aiortc/issues/490#issuecomment-788807118)).
 
 If you only have a subset of those UDP ports available, you can use the `--media-ports` flag to specify a comma-delimited list of ports to use:
 
-```
+```bash
 python server/app.py --workspace <COMFY_WORKSPACE> --media-ports 1024,1025,...
 ```
 
@@ -124,38 +131,38 @@ If you are running the server in a restrictive network environment where this is
 At the moment, the server supports using Twilio's TURN servers (although it is easy to make the update to support arbitrary TURN servers):
 
 1. Sign up for a [Twilio](https://www.twilio.com/en-us) account.
-2. Copy the Account SID and Auth Token from https://console.twilio.com/.
+2. Copy the Account SID and Auth Token from [https://console.twilio.com/](https://console.twilio.com/).
 3. Set the `TWILIO_ACCOUNT_SID` and `TWILIO_AUTH_TOKEN` environment variables.
 
-````
+```bash
 export TWILIO_ACCOUNT_SID=...
 export TWILIO_AUTH_TOKEN=...
-````
+```
 
-# Run UI
+## Run UI
 
 **Prerequisites**
 
 - [Node.js](https://nodejs.org/en/download/package-manager)
 
 Install dependencies
 
-```
+```bash
 cd ui
 npm install --legacy-peer-deps
 ```
 
 Run local dev server:
 
-```
+```bash
 npm run dev
 ```
 
-By default the app will be available at http://localhost:3000.
+By default the app will be available at <http://localhost:3000>.
 
-The Stream URL is the URL of the [server](#run-server) which defaults to http://127.0.0.1:8888.
+The Stream URL is the URL of the [server](#run-server) which defaults to <http://127.0.0.1:8888>.
 
-# Limitations
+## Limitations
 
 At the moment, a workflow must fulfill the following requirements:
 
@@ -170,12 +177,12 @@ At the moment, a workflow must fulfill the following requirements:
 - The workflow must have a single output using a PreviewImage or SaveImage node
   - At runtime, this node is replaced with a SaveTensor node
 
-# Troubleshoot
+## Troubleshoot
 
 This project has been tested locally successfully with the following setup:
 
 - OS: Ubuntu
 - GPU: Nvidia RTX 4090
 - Driver: 550.127.05
 - CUDA: 12.5
-- torch: 2.5.1+cu121
+- torch: 2.5.1+cu121
````
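
The Limitations hunk above refers to workflows in ComfyUI API format with a single PreviewImage or SaveImage output. As a purely illustrative sketch (the node ids, the upstream LoadImage node, and the input wiring are placeholders, not taken from this repo), such a prompt might look like:

```python
# Hypothetical minimal prompt in ComfyUI API format: one image source and a
# single PreviewImage output, which comfystream swaps for SaveTensor at runtime.
workflow = {
    "1": {"class_type": "LoadImage", "inputs": {"image": "example.png"}},
    "2": {"class_type": "PreviewImage", "inputs": {"images": ["1", 0]}},
}
```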

install.py (+3 -1)

```diff
@@ -3,9 +3,9 @@
 import argparse
 import logging
 
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
 logger = logging.getLogger(__name__)
 
-
 def install_custom_node_req(workspace: str):
     custom_nodes_path = os.path.join(workspace, "custom_nodes")
     for folder in os.listdir(custom_nodes_path):
@@ -24,4 +24,6 @@ def install_custom_node_req(workspace: str)
     )
     args = parser.parse_args()
 
+    logger.info("Installing custom node requirements...")
     install_custom_node_req(args.workspace)
+    logger.info("Custom node requirements installed successfully.")
```

server/app.py (+40 -1)

```diff
@@ -12,6 +12,7 @@
     RTCConfiguration,
     RTCIceServer,
     MediaStreamTrack,
+    RTCDataChannel,
 )
 from aiortc.rtcrtpsender import RTCRtpSender
 from aiortc.codecs import h264
@@ -20,6 +21,7 @@
 
 logger = logging.getLogger(__name__)
 
+
 MAX_BITRATE = 2000000
 MIN_BITRATE = 2000000
 
@@ -132,6 +134,39 @@ async def offer(request):
     h264.MAX_BITRATE = MAX_BITRATE
     h264.MIN_BITRATE = MIN_BITRATE
 
+    # Handle control channel from client
+    @pc.on("datachannel")
+    def on_datachannel(channel):
+        if channel.label == "control":
+            @channel.on("message")
+            async def on_message(message):
+                try:
+                    params = json.loads(message)
+
+                    if params.get("type") == "get_nodes":
+                        nodes_info = await pipeline.get_nodes_info()
+                        response = {
+                            "type": "nodes_info",
+                            "nodes": nodes_info
+                        }
+                        channel.send(json.dumps(response))
+                    elif params.get("type") == "update_prompt":
+                        if "prompt" not in params:
+                            logger.warning("[Control] Missing prompt in update_prompt message")
+                            return
+                        pipeline.set_prompt(params["prompt"])
+                        response = {
+                            "type": "prompt_updated",
+                            "success": True
+                        }
+                        channel.send(json.dumps(response))
+                    else:
+                        logger.warning("[Server] Invalid message format - missing required fields")
+                except json.JSONDecodeError:
+                    logger.error("[Server] Invalid JSON received")
+                except Exception as e:
+                    logger.error(f"[Server] Error processing message: {str(e)}")
+
     @pc.on("track")
     def on_track(track):
         logger.info(f"Track received: {track.kind}")
@@ -222,7 +257,11 @@ async def on_shutdown(app: web.Application):
     )
     args = parser.parse_args()
 
-    logging.basicConfig(level=args.log_level.upper())
+    logging.basicConfig(
+        level=args.log_level.upper(),
+        format='%(asctime)s [%(levelname)s] %(message)s',
+        datefmt='%H:%M:%S'
+    )
 
     app = web.Application()
     app["media_ports"] = args.media_ports.split(",") if args.media_ports else None
```

server/pipeline.py (+11 -2)

```diff
@@ -11,6 +11,8 @@
 
 WARMUP_RUNS = 5
 
+
+
 class Pipeline:
     def __init__(self, **kwargs):
         self.client = ComfyStreamClient(**kwargs, max_workers=5)  # hardcoded max workers
@@ -26,6 +28,9 @@ def __init__(self, **kwargs):
         self.time_base = fractions.Fraction(1, self.sample_rate)
         self.curr_pts = 0  # figure out a better way to set back pts to processed audio frames
 
+    def set_prompt(self, prompt: Dict[Any, Any]):
+        self.client.set_prompt(prompt)
+
     async def warm(self):
         dummy_video_frame = torch.randn(1, 512, 512, 3)
         dummy_audio_frame = np.random.randint(-32768, 32767, 48000 * 1, dtype=np.int16)
@@ -95,7 +100,6 @@ async def get_processed_video_frame(self):
         frame.time_base = time_base
         return frame
 
-
     async def get_processed_audio_frame(self):
         while not self.audio_output_frames:
             out_fut = await self.audio_futures.get()
@@ -104,4 +108,9 @@ async def get_processed_audio_frame(self):
                 print("No Audio output")
                 continue
             self.audio_output_frames.extend(self.audio_postprocess(output))
-        return self.audio_output_frames.pop(0)
+        return self.audio_output_frames.pop(0)
+
+    async def get_nodes_info(self) -> Dict[str, Any]:
+        """Get information about all nodes in the current prompt including metadata."""
+        nodes_info = await self.client.get_available_nodes()
+        return nodes_info
```
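
These two new methods are what the control channel in server/app.py calls into. A minimal usage sketch is below; the `cwd` keyword argument is an assumption about what `Pipeline(**kwargs)` forwards to `ComfyStreamClient`, and the prompt contents are placeholders:

```python
import asyncio

from pipeline import Pipeline  # server/pipeline.py


async def main():
    # "cwd" is assumed to be one of the kwargs forwarded to ComfyStreamClient;
    # the accepted keyword arguments are defined by that client, not here.
    pipeline = Pipeline(cwd="/home/user/ComfyUI")

    # Swap the running workflow: set_prompt() is synchronous and simply
    # delegates to the client, as the diff above shows.
    workflow = {}  # workflow JSON in API format (placeholder)
    pipeline.set_prompt(workflow)

    # Inspect node metadata for the current prompt, as the control
    # channel's "get_nodes" message does.
    nodes_info = await pipeline.get_nodes_info()
    print(nodes_info)


asyncio.run(main())
```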
