Docker Algorithms
For the Algorithms pipeline, we use three Docker containers:
- Container to compute deep neural net features for a video collection,
- Load_DB Container to load the video data into the Agile Video Query database, and
- Broker Container for running broker.py (for updating queries).
Before using the containers, make sure that the trained models and prototxt files being used by src/features_GPU_compute/calcSig_wOF_ensemble.sh are in the src/features_GPU_compute/models folder.
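As a quick check before building the image, you can list the models folder (a minimal sketch; the exact model and prototxt file names depend on your trained models):

```
# The trained model weights and the matching .prototxt files referenced by
# src/features_GPU_compute/calcSig_wOF_ensemble.sh should be listed here.
ls src/features_GPU_compute/models
```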
- Build the image for the feature-computation container from the main video-query-algorithms/ folder:

      docker build -t avq_compute_dnn_features .
- Start up a new container in interactive mode (see the sketches after this list for a combined example):

      docker run -it --name <name, optional> --runtime=nvidia -v <folder with video collections>:/video_data avq_compute_dnn_features

  where <folder with video collections> is the path of the folder containing your video collections. It should have subdirectories that contain the videos:

      <folder with video collections>
      |--video_collection_1
         |--video1.mp4
         |--video2.mp4
      |--video_collection_2
         |--video3.mp4
         |--video4.mp4
      etc.

  File and subdirectory names cannot contain spaces. If necessary, execute

      find . -type f -name "* *.xml" -exec rename "s/\s/_/g" {} \;

  to replace spaces with underscores (the pattern above matches .xml files; adjust the -name pattern for other file types).
- From within the container, follow the documentation at Compute Video Features for steps 1 (build_wof_clips.py) and 2 (calcSig_wOF_ensemble.sh).
- Exit the container.
- For subsequent sessions, restart the container using

      docker start -ai <container name>
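The find/rename command above only matches .xml files. If your video files or collection directories also contain spaces, a more general cleanup loop such as the following hypothetical sketch could be run from the top of the collections folder (it renames basenames only, deepest entries first, so paths stay valid):

```
# Replace spaces with underscores in all file and directory names under
# the current directory. -depth processes a directory's contents before
# the directory itself.
find . -depth -name '* *' | while IFS= read -r path; do
    dir=$(dirname "$path")
    base=$(basename "$path")
    mv "$path" "$dir/${base// /_}"
done
```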
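Putting the steps above together, a typical first session might look like the following sketch. The host path /data/video_collections and the container name avq_features are placeholders; substitute your own.

```
# Build the image (run from the main video-query-algorithms/ folder).
docker build -t avq_compute_dnn_features .

# Start a GPU-enabled container with the video collections mounted at /video_data.
docker run -it --name avq_features --runtime=nvidia \
    -v /data/video_collections:/video_data avq_compute_dnn_features

# ... inside the container: run build_wof_clips.py and calcSig_wOF_ensemble.sh
#     as described in the Compute Video Features documentation, then exit ...

# Reattach to the same container in a later session.
docker start -ai avq_features
```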
- Build the image for the Load_DB container from the main video-query-algorithms/ folder:

      docker build --file Load_DB_Dockerfile -t avq_load_db .
- Verify that the user-defined Docker network video-query-api_default exists (see the combined sketch after this list):

      docker network ls
- Start up a new container in interactive mode:

      docker run -it --rm --name avq_load_db -v <folder with video features>:/video_data --network video-query-api_default avq_load_db

  where <folder with video features> is the path of the folder containing the deep net features computed in step 1 above. It should have subdirectories for videos and separate subdirectories for each set of video features. (Video features computed by means other than step 1 can also be loaded using the Load_DB container.)
- From within the container, execute

      cd /code/src
      export API_CLIENT_USERNAME=<api username>
      export API_CLIENT_PASSWORD=<api password>
- Then follow the documentation at Compute Video Features for step 3 (load_db.py), using http://video-query-api:8001 as the API URL.
- Exit the container.
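As with the feature-computation container, the Load_DB steps can be combined into a single session sketch. The host path /data/video_features and the credential placeholders are illustrative; the network is normally created when the video-query-api stack is brought up (the name follows Docker Compose's project_default convention).

```
# Confirm the API's Docker network is present before starting the loader.
docker network ls | grep video-query-api_default

# Start the loader container with the computed features mounted at /video_data.
docker run -it --rm --name avq_load_db \
    -v /data/video_features:/video_data \
    --network video-query-api_default avq_load_db

# Inside the container: set credentials, then run step 3 (load_db.py) from the
# Compute Video Features documentation against http://video-query-api:8001.
cd /code/src
export API_CLIENT_USERNAME=<api username>
export API_CLIENT_PASSWORD=<api password>
```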
- For the Broker container: in broker.py, set BASE_URL to http://video-query-api:8001.
- The broker does not use GPU computing (yet!), so it can be run on a server with only CPU resources.
- Build the image for the Broker container from the main video-query-algorithms/ folder:

      docker build --file Load_DB_Dockerfile -t avq_broker .
- Start up a new container in interactive mode:

      docker run -it --name avq_broker --network video-query-api_default avq_broker
- From within the container, execute

      cd /code/src
      export API_CLIENT_USERNAME=<api username>
      export API_CLIENT_PASSWORD=<api password>
      export BROKER_THREADING='True'
      export COMPUTE_EPS=.000003
      export RANDOM_SEED=None
- Start the broker in background mode (a combined session sketch follows this list):

      nohup python broker.py &
- Exit the container without stopping it by pressing CTRL+P followed by CTRL+Q.
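A complete broker session, combining the steps above, might look like this sketch. The credentials are placeholders; nohup writes the broker's output to nohup.out by default.

```
# Build and start the broker container on the API network
# (run from the main video-query-algorithms/ folder).
docker build --file Load_DB_Dockerfile -t avq_broker .
docker run -it --name avq_broker --network video-query-api_default avq_broker

# Inside the container: configure the environment and launch the broker.
cd /code/src
export API_CLIENT_USERNAME=<api username>
export API_CLIENT_PASSWORD=<api password>
export BROKER_THREADING='True'
export COMPUTE_EPS=.000003
export RANDOM_SEED=None

nohup python broker.py &
tail -f nohup.out   # follow the broker log; CTRL+C stops tail, not the broker

# Detach from the container without stopping it: CTRL+P then CTRL+Q.
```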