- About
- Building TVApp2 Image
- Using tvapp Image
- Traefik Integration
- Authentik Integration
- Troubleshooting
  - Run Error: `Error serving playlist: ENOENT: no such file or directory, open /usr/src/app/xmltv.1.xml`
  - Build Error: `s6-rc-compile: fatal: invalid /etc/s6-overlay/s6-rc.d/certsync/type: must be oneshot, longrun, or bundle`
  - Build Error: `unable to exec /etc/s6-overlay/s6-rc.d/init-envfile/run: Permission denied`
  - Run Error:
- Extra Notes
- Dedication
- Contributors
TVApp2 is a docker image which allows you to download M3U playlist and EPG guide data that can be plugged into IPTV applications such as Jellyfin, Plex, and Emby. It is a revision of the original app by dtankdempse, which is no longer available. This app fetches data for:
- TheTvApp
- TVPass
- MoveOnJoy
- More coming soon
This project contains several repositories which all share the same code; use them as backups:
To install TVApp2 in docker, you will need to either use the `docker run` command, or create a `docker-compose.yml` file which contains information about how to pull and start the container.

Type out your `docker run` command, or prepare a `docker-compose.yml` script. Examples are provided below. We have also provided charts with a list of the registries you can pull the image from, and a list of all the available environment variables you can use.

Pick one registry URL from the list of Registry URLs and put it in your `docker run` command, or in your `docker-compose.yml`.

For the environment variables, you may specify these in your `docker run` command or `docker-compose.yml` file. See the examples below.
Env Var | Default | Description |
---|---|---|
`TZ` | `Etc/UTC` | Timezone for error / log reporting |
`WEB_IP` | `0.0.0.0` | IP to use for the webserver |
`WEB_PORT` | `4124` | Port to use for the webserver |
`URL_REPO` | `https://git.binaryninja.net/BinaryNinja/` | Determines where the data files will be downloaded from. Do not change this or you will be unable to get M3U and EPG data. |
`FILE_PLAYLIST` | `playlist.m3u8` | Filename for the M3U playlist file |
`FILE_EPG` | `xmltv.xml` | Filename for the XML guide data file |
`FILE_GZIP` | `xmltv.xml.gz` | Filename for the XML guide data compressed as gzip `.gz` |
`STREAM_QUALITY` | `hd` | Stream quality. Can be either `hd` or `sd` |
`DIR_BUILD` | `/usr/src/app` | Path inside the container where TVApp2 will be built |
`DIR_RUN` | `/usr/bin/app` | Path inside the container where TVApp2 will be placed after it is built |
`LOG_LEVEL` | `4` | Level of logging to display in console: `6` Trace & below, `5` Debug & below, `4` Info & below, `3` Notice & below, `2` Warn & below, `1` Error only |
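As a sketch of how these defaults behave, an unset variable falls back to its documented value; this can be illustrated with shell parameter expansion (the container's actual startup logic is an assumption here):

```shell
# Hypothetical sketch: fall back to the documented defaults when a
# variable is not set. The container's real startup logic may differ.
WEB_IP="${WEB_IP:-0.0.0.0}"
WEB_PORT="${WEB_PORT:-4124}"
FILE_PLAYLIST="${FILE_PLAYLIST:-playlist.m3u8}"
echo "Serving ${FILE_PLAYLIST} on ${WEB_IP}:${WEB_PORT}"
```

Passing `-e "WEB_PORT=8080"` to `docker run` would override the default in the same way.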
These paths can be mounted and shared between the TVApp2 docker container and your host machine:
Container Path | Description |
---|---|
`/usr/bin/app` | Path where TVApp2 files will be placed once the app has been built. Includes `formatted.dat`, `xmltv.1.xml`, `urls.txt`, `node_modules`, and `package.json` |
`/config` | Where logs will be placed, as well as the webserver-generated SSL key and cert, `cert.key` and `cert.crt` |
These are quick instructions on how to start the TVApp2 docker container once you have finished the section Quick Install.
If you want to bring the container up using `docker run`, execute the following:
docker run -d --restart=unless-stopped \
--name tvapp2 \
-p 4124:4124 \
-e "DIR_RUN=/usr/bin/app" \
-e "TZ=Etc/UTC" \
-v ${PWD}/app:/usr/bin/app ghcr.io/thebinaryninja/tvapp2:latest
If you want to use a `docker-compose.yml` to bring TVApp2 up, you may use the following example:
services:
tvapp2:
container_name: tvapp2
image: ghcr.io/thebinaryninja/tvapp2:latest # Image: Github
# image: thebinaryninja/tvapp2:latest # Image: Dockerhub
# image: git.binaryninja.net/binaryninja/tvapp2:latest # Image: Gitea
restart: unless-stopped
volumes:
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock
- ./config:/config
- ./app:/usr/bin/app
environment:
- TZ=Etc/UTC
- WEB_IP=0.0.0.0
- WEB_PORT=4124
- DIR_RUN=/usr/bin/app
- STREAM_QUALITY=hd
- FILE_PLAYLIST=playlist.m3u8
- FILE_EPG=xmltv.xml
- LOG_LEVEL=4
Once you bring the docker container up, open your web browser and access the container's webserver by going to:
http://container-ip:4124
Copy both the M3U playlist URL and the EPG guide URL, and paste them into your favorite IPTV application: Plex, Jellyfin, Emby, etc.
If you need more extensive instructions on installing and using this container, read the section:
- TVApp2 makes a fetch request to tvapp2-externals, which keeps updates to the external formats independent of pushing a new container image.
- TVApp2 makes a fetch request to XMLTV-EPG, which updates EPG data based on customized channel IDs. Channel IDs are specific to each EPG record, which makes obfuscating channel IDs difficult.
graph TD
A[tvapp2] <--> |Fetch Formats| B(tvapp2-externals)
A[tvapp2] <--> |Fetch XMLTV/EPG| C(XMLTV-EPG)
B(tvapp2-externals) --> D{Pull Dynamic Formats}
C(XMLTV-EPG) ---> E{Pull Dynamic EPG}
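To illustrate how `URL_REPO` factors into those fetches, the download URL is simply the base URL joined with a repository name. This is a sketch of the idea only; the exact repository names and request paths TVApp2 uses are assumptions:

```shell
# Sketch: compose a fetch URL from the URL_REPO base. The repository
# name used here is illustrative; TVApp2's real request paths may differ.
URL_REPO="https://git.binaryninja.net/BinaryNinja/"
REPO_NAME="tvapp2-externals"
FETCH_URL="${URL_REPO}${REPO_NAME}"
echo "$FETCH_URL"
```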
These instructions outline how the TVApp2 docker image is set up, and how to build your own TVApp2 docker image.
The TVApp2 application requires one dependency: a docker image which is utilized as the base image and contains Alpine Linux. You may use the pre-compiled docker image provided by us on Github, or you may choose to build your own. The base Alpine image is available at:
This base Alpine image contains s6-overlay and comes with several features such as plugins, service management, migration tools, etc.
The process of building both images is outlined below. Remember that you do not need to build the base Alpine image; we already provide it at: https://github.com/Aetherinox/docker-base-alpine/pkgs/container/alpine-base
%%{init: { 'themeVariables': { 'fontSize': '10px' }}}%%
flowchart TB
subgraph GRAPH_TVAPP ["Build tvapp2:latest"]
direction TB
obj_step10["`> git clone git.binaryninja.net/BinaryNinja/tvapp2.git`"]
obj_step11["`**Dockerfile
Dockerfile.aarch64**`"]
obj_step12["`> docker build \
--build-arg VERSION=1.0.0 \
--build-arg BUILDDATE=20250225 \
-t tvapp:latest \
-t tvapp:1.0.0-amd64 \
-f Dockerfile . \`"]
obj_step13["`Download **alpine-base** from branch **docker/alpine-base**`"]
obj_step14["`New Image: **tvapp2:latest**`"]
style obj_step10 text-align:center,stroke-width:1px,stroke:#555
style obj_step11 text-align:left,stroke-width:1px,stroke:#555
style obj_step12 text-align:left,stroke-width:1px,stroke:#555
style obj_step13 text-align:left,stroke-width:1px,stroke:#555
end
style GRAPH_TVAPP text-align:center,stroke-width:1px,stroke:transparent,fill:transparent
subgraph GRAPH_ALPINE["Build alpine-base:latest Image"]
direction TB
obj_step20["`> git clone -b docker/alpine-base github.com/Aetherinox/docker-base-alpine.git`"]
obj_step21["`**Dockerfile
Dockerfile.aarch64**`"]
obj_step22["`> docker build \
--build-arg VERSION=3.20 \
--build-arg BUILDDATE=20250225 \
-t docker-alpine-base:latest \
-t docker-alpine-base:3.20-amd64 \
-f Dockerfile . \`"]
obj_step23["`Download files from branch **docker/core**`"]
obj_step24["`New Image: **alpine-base:latest**`"]
style obj_step20 text-align:center,stroke-width:1px,stroke:#555
style obj_step21 text-align:left,stroke-width:1px,stroke:#555
style obj_step22 text-align:left,stroke-width:1px,stroke:#555
style obj_step23 text-align:left,stroke-width:1px,stroke:#555
end
style GRAPH_ALPINE text-align:center,stroke-width:1px,stroke:transparent,fill:transparent
GRAPH_TVAPP --> obj_step10 --> obj_step11 --> obj_step12 --> obj_step13 --> obj_step14
GRAPH_ALPINE --> obj_step20 --> obj_step21 --> obj_step22 --> obj_step23 --> obj_step24
This repository offers two types of docker image: `stable` and `development`. You may create both or just one. We also offer two different architectures, `amd64` and `arm64`; these architectures are tied to the same release.

Build | Tags |
---|---|
Stable | `tvapp2:latest` `tvapp2:1.1.0` `tvapp2:1.1` `tvapp2:1` |
Development | `tvapp2:development` |
Prior to building the docker image, you must complete the sections below. If these tasks are not performed, your docker container will throw the following errors when started:
Failed to open apk database: Permission denied
s6-rc: warning: unable to start service init-adduser: command exited 127
unable to exec /etc/s6-overlay/s6-rc.d/init-envfile/run: Permission denied
/etc/s6-overlay/s6-rc.d/init-adduser/run: line 34: aetherxown: command not found
/etc/s6-overlay/s6-rc.d/init-adduser/run: /usr/bin/aetherxown: cannot execute: required file not found
You cannot utilize Windows Carriage Return Line Feed (CRLF) line endings; all files must be converted to Unix Line Feed (LF). This can be done with Visual Studio Code, or you can run the Linux terminal command `dos2unix` to convert these files.
If you cloned the files from the official repository gitea:binaryninja/tvapp2 and have not edited them, then you should not need to do this step.
Caution
Be careful using the command to change ALL files. You should NOT change the files in your `.git` folder, otherwise you will corrupt your git indexes. If you accidentally run `dos2unix` on your `.git` folder, do NOT push anything to git. Pull a new copy from the repo.
# Change ALL files
find ./ -type f | grep -Ev '\.git|\.jpg|\.jpeg|\.png' | xargs dos2unix --
# Change run / binaries
find ./ -type f -name 'run' | xargs dos2unix --
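If `dos2unix` is not installed, the same CRLF to LF conversion can be done with `sed`. Here is a minimal sketch run against a scratch file:

```shell
# Create a scratch file with Windows CRLF line endings
printf 'line one\r\nline two\r\n' > sample.txt
# Strip the trailing carriage returns in place (same effect as dos2unix)
sed -i 's/\r$//' sample.txt
```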
The files contained within this repo MUST have `chmod 755` / `+x` executable permissions.
find ./ -name 'run' -exec sudo chmod +x {} \;
Optional - If you want to set the permissions manually, run the following commands. If you executed the `find` command above, you don't need to run these:
sudo chmod +x ./root/etc/s6-overlay/s6-rc.d/init-adduser/run \
./root/etc/s6-overlay/s6-rc.d/init-crontab-config/run \
./root/etc/s6-overlay/s6-rc.d/init-custom-files/run \
./root/etc/s6-overlay/s6-rc.d/init-envfile/run \
./root/etc/s6-overlay/s6-rc.d/init-folders/run \
./root/etc/s6-overlay/s6-rc.d/init-keygen/run \
./root/etc/s6-overlay/s6-rc.d/init-migrations/run \
./root/etc/s6-overlay/s6-rc.d/init-permissions/run \
./root/etc/s6-overlay/s6-rc.d/init-samples/run \
./root/etc/s6-overlay/s6-rc.d/init-version-checks/run \
./root/etc/s6-overlay/s6-rc.d/svc-cron/run
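To confirm the permissions took effect, the same `find`/`chmod` pattern can be exercised on a scratch copy (a sketch; the directory and file names here are illustrative):

```shell
# Sketch: create a scratch 'run' script, apply +x with the same find
# pattern used above, then execute it to confirm the permission change.
mkdir -p demo/s6-rc.d/init-demo
printf '#!/bin/sh\necho ok\n' > demo/s6-rc.d/init-demo/run
find ./demo -name 'run' -exec chmod +x {} \;
./demo/s6-rc.d/init-demo/run
```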
After completing the steps above, we will now build the gitea:binaryninja/tvapp2 image.
Before you build the TVApp2 image, open the `Dockerfile` and ensure you are pulling the correct Alpine base image. This instruction is located near the top of the `Dockerfile`:
ARG ARCH=amd64
FROM --platform=linux/${ARCH} ghcr.io/aetherinox/alpine-base:3.21
Note
The `ARCH` argument supports two options, which you specify using the argument `--build-arg ARCH=amd64` in your buildx command:
- `amd64`
- `arm64`
Next, select which type of image you want to build below.
All of the needed Docker files already exist in the repository. To get started, clone the repo to a folder:
mkdir tvapp2 && cd tvapp2
# to clone from our gitea website
git clone https://git.binaryninja.net/binaryninja/tvapp2.git ./
# to clone from our github website
git clone https://github.com/thebinaryninja/tvapp2.git ./
If you do not need to build both `amd64` and `arm64`, you can simply build one architecture. First, create a new buildx container:
docker buildx create --driver docker-container --name container --bootstrap --use
Optional - If you first need to remove the builder container because you created it previously, run the command:
docker buildx rm container
docker buildx create --driver docker-container --name container --bootstrap --use
To list all buildx build containers, run:
docker buildx ls
Before you can push the image, ensure you are signed into Docker CLI. Open your Linux terminal and see if you are already signed in:
docker info | grep Username
If nothing is printed, then you are not signed in. Initiate the web login:
docker login
Some text will appear on-screen; copy the code, open your browser, and go to https://login.docker.com/activate.
USING WEB BASED LOGIN
To sign in with credentials on the command line, use 'docker login -u <username>'
Your one-time device confirmation code is: XXXX-XXXX
Press ENTER to open your browser or submit your device code here: https://login.docker.com/activate
Waiting for authentication in the browser…
Once you are finished in your browser, you can return to your Linux terminal, and it should bring you back to where you can type a command. You can now verify again if you are signed in:
docker info | grep Username
You should see your name:
Username: thebinaryninja
You are ready to build the TVApp2 docker image, run the command for your platform:
Creates the TVApp2 `amd64` docker image:
# Build tvapp2 amd64
docker buildx build \
--build-arg ARCH=amd64 \
--build-arg VERSION=1.1.0 \
--build-arg BUILDDATE=20250325 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1 \
--tag ghcr.io/thebinaryninja/tvapp2:1 \
--tag ghcr.io/thebinaryninja/tvapp2:latest \
--attest type=provenance,disabled=true \
--attest type=sbom,disabled=true \
--file Dockerfile \
--platform linux/amd64 \
--output type=docker \
--allow network.host \
--network host \
--no-cache \
--push \
.
Creates the TVApp2 `arm64` docker image:
# Build tvapp2 arm64
docker buildx build \
--build-arg ARCH=arm64 \
--build-arg VERSION=1.1.0 \
--build-arg BUILDDATE=20250325 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1 \
--tag ghcr.io/thebinaryninja/tvapp2:1 \
--tag ghcr.io/thebinaryninja/tvapp2:latest \
--attest type=provenance,disabled=true \
--attest type=sbom,disabled=true \
--file Dockerfile \
--platform linux/arm64 \
--output type=docker \
--allow network.host \
--network host \
--no-cache \
--push \
.
Note
If you want to only build the TVApp2 docker image locally, remove `--push`.
After building the image, you can now use it either with `docker run` or a `docker-compose.yml` file. These instructions are available by skipping down to the sections:
These instructions tell you how to build the `stable` and `development` releases for both the `amd64` and `arm64` architectures. Then you will combine all manifests into one release.
All of the needed Docker files already exist in the repository. To get started, clone the repo to a folder:
mkdir tvapp2 && cd tvapp2
# to clone from our gitea website
git clone https://git.binaryninja.net/binaryninja/tvapp2.git ./
# to clone from our github website
git clone https://github.com/thebinaryninja/tvapp2.git ./
First, create a new buildx container:
docker buildx create --driver docker-container --name container --bootstrap --use
Optional - If you first need to remove the container because you created it previously, run the command:
docker buildx rm container
docker buildx create --driver docker-container --name container --bootstrap --use
To list all buildx build containers, run:
docker buildx ls
Before you can push the image, ensure you are signed into Docker CLI. Open your Linux terminal and see if you are already signed in:
docker info | grep Username
If nothing is printed, then you are not signed in. Initiate the web login:
docker login
Some text will appear on-screen; copy the code, open your browser, and go to https://login.docker.com/activate.
USING WEB BASED LOGIN
To sign in with credentials on the command line, use 'docker login -u <username>'
Your one-time device confirmation code is: XXXX-XXXX
Press ENTER to open your browser or submit your device code here: https://login.docker.com/activate
Waiting for authentication in the browser…
Once you are finished in your browser, you can return to your Linux terminal, and it should bring you back to where you can type a command. You can now verify again if you are signed in:
docker info | grep Username
You should see your name:
Username: thebinaryninja
Next, in order to build the `amd64` and `arm64` images on the same machine, you must install QEMU using:
docker run --privileged --rm tonistiigi/binfmt --install all
Once the emulator is installed, we will now build two images. When building these two images, we will ensure the `--tag` value is different for each one by adding the architecture to the end. This ensures we don't overwrite one image with the newer one; we need two separate docker images with two different tags.
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64
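In a build script, deriving the per-architecture tag from a single `ARCH` variable keeps the two builds from colliding (a sketch using the tag base from the commands above):

```shell
# Sketch: compose the architecture-specific tag so the amd64 and arm64
# builds never overwrite each other in the registry.
REGISTRY="ghcr.io/thebinaryninja/tvapp2"
VERSION="1.1.0"
ARCH="amd64"
TAG="${REGISTRY}:${VERSION}-${ARCH}"
echo "$TAG"
```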
Note
The build commands below will push the docker image to Github's GHCR registry. If you wish to use another registry, edit the `--tag` argument.
The `--tag <registry>` argument determines which registry your image will be pushed to. You can change this to any registry:
Registry | Tag |
---|---|
Dockerhub | `--tag thebinaryninja/tvapp2:1.1.0-amd64` `--tag thebinaryninja/tvapp2:1.1.0-arm64` |
Github (GHCR) | `--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64` `--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64` |
Registry v2 | `--tag registry.domain.lan/thebinaryninja/tvapp2:1.1.0-amd64` `--tag registry.domain.lan/thebinaryninja/tvapp2:1.1.0-arm64` |
Gitea | `--tag git.binaryninja.net/binaryninja/tvapp2:1.1.0-amd64` `--tag git.binaryninja.net/binaryninja/tvapp2:1.1.0-arm64` |
After we build these two images and push them to a registry online, we will merge them into a single docker image which contains both architectures.
Warning
In order to merge the two architecture images into one, you MUST `--push` each of the two docker images to a registry first. You cannot modify the manifests locally.
Creates the TVApp2 Stable release `amd64` docker image:
# Build Tvapp2 amd64 - (stable release)
docker buildx build \
--build-arg ARCH=amd64 \
--build-arg VERSION=1.1.0 \
--build-arg BUILDDATE=20250325 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64 \
--attest type=provenance,disabled=true \
--attest type=sbom,disabled=true \
--file Dockerfile \
--platform linux/amd64 \
--output type=docker \
--allow network.host \
--network host \
--no-cache \
--pull \
--push \
.
Creates the TVApp2 Stable release `arm64` docker image:
# Build Tvapp2 arm64 - (stable release)
docker buildx build \
--build-arg ARCH=arm64 \
--build-arg VERSION=1.1.0 \
--build-arg BUILDDATE=20250325 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64 \
--attest type=provenance,disabled=true \
--attest type=sbom,disabled=true \
--file Dockerfile \
--platform linux/arm64 \
--output type=docker \
--allow network.host \
--network host \
--no-cache \
--pull \
--push \
.
Creates the TVApp2 Development release `amd64` docker image:
# Build Tvapp2 amd64 - (development release)
docker buildx build \
--build-arg ARCH=amd64 \
--build-arg VERSION=1.1.0 \
--build-arg BUILDDATE=20250325 \
--tag ghcr.io/thebinaryninja/tvapp2:development-amd64 \
--attest type=provenance,disabled=true \
--attest type=sbom,disabled=true \
--file Dockerfile \
--platform linux/amd64 \
--output type=docker \
--allow network.host \
--network host \
--no-cache \
--pull \
--push \
.
Creates the TVApp2 Development release `arm64` docker image:
# Build Tvapp2 arm64 - (development release)
docker buildx build \
--build-arg ARCH=arm64 \
--build-arg VERSION=1.1.0 \
--build-arg BUILDDATE=20250325 \
--tag ghcr.io/thebinaryninja/tvapp2:development-arm64 \
--attest type=provenance,disabled=true \
--attest type=sbom,disabled=true \
--file Dockerfile \
--platform linux/arm64 \
--output type=docker \
--allow network.host \
--network host \
--no-cache \
--pull \
--push \
.
After completing the `docker buildx` commands above, you should now have a few new images, each with its own separate docker tags which do not conflict. If you decided not to build the development releases, that is fine.
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64
--tag ghcr.io/thebinaryninja/tvapp2:development-amd64
--tag ghcr.io/thebinaryninja/tvapp2:development-arm64
Next, we need to take these two images and merge them into one, so that both architectures are available without users having to pull separate images. You need to obtain the SHA256 hash digest for the `amd64` and `arm64` images. You can go to the registry where you uploaded the images and copy them there, or run the following commands:
Stable Release
If you are building the stable release images, you should see the following:
Registry v2: Newly created `amd64` and `arm64` images
You can also get the hash digests by running the commands:
$ docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64
Name: ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64
MediaType: application/vnd.docker.distribution.manifest.v2+json
Digest: sha256:0abe1b1c119959b3b1ccc23c56a7ee2c4c908c6aaef290d4ab2993859d807a3b
$ docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64
Name: ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64
MediaType: application/vnd.docker.distribution.manifest.v2+json
Digest: sha256:e68b9de8669eac64d4e4d2a8343c56705e05e9a907cf0b542343f9b536d9c473
Development Release
If you are building the development release images, you should see the following:
Registry v2: Newly created `development-amd64` and `development-arm64` images
You can also get the hash digests by running the commands:
$ docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:development-amd64
Name: ghcr.io/thebinaryninja/tvapp2:development-amd64
MediaType: application/vnd.docker.distribution.manifest.v2+json
Digest: sha256:8f36385a28c8f6eb7394d903c9a7a2765b06f94266b32628389ee9e3e3d7e69d
$ docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:development-arm64
Name: ghcr.io/thebinaryninja/tvapp2:development-arm64
MediaType: application/vnd.docker.distribution.manifest.v2+json
Digest: sha256:c719ccb034946e3f0625003f25026d001768794e38a1ba8aafc9146291d548c5
Warning
Wrong Digest Hashes
Be warned that when you push docker images to your docker registry, the SHA256 hash digest will be different than what you have locally. If you use the following command, these digests will be incorrect:
$ docker images --all --no-trunc | grep thebinaryninja
ghcr.io/thebinaryninja/tvapp2 1.1.0-arm64 sha256:48520ca15fed6483d2d5b79993126c311f833002345b0e12b8eceb5bf9def966 42 minutes ago 46MB
ghcr.io/thebinaryninja/tvapp2 1.1.0-amd64 sha256:54a9b7d390199532d5667fae67120d77e2f459bd6108b27ce94e0cfec8f3c41f 43 minutes ago 45MB
To get the correct sha256 digest, use:
docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64
docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64
docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:development-amd64
docker buildx imagetools inspect ghcr.io/thebinaryninja/tvapp2:development-arm64
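In a script, the digest line can be filtered out of the `imagetools inspect` output with `awk`. The sketch below parses a saved copy of the sample output shown earlier; in practice you would pipe the inspect command straight into `awk`:

```shell
# Save a copy of the inspect output (sample values from above); in
# practice: docker buildx imagetools inspect <image> | awk '...'
cat > inspect.txt <<'EOF'
Name:      ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64
MediaType: application/vnd.docker.distribution.manifest.v2+json
Digest:    sha256:0abe1b1c119959b3b1ccc23c56a7ee2c4c908c6aaef290d4ab2993859d807a3b
EOF
# Print only the second field of the line starting with "Digest:"
DIGEST="$(awk '/^Digest:/ { print $2 }' inspect.txt)"
echo "$DIGEST"
```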
Once you have the correct SHA256 hash digests, paste them into the command below. This command is where you can specify the real `--tag` that the public image will have. The previous tags were simply placeholders and no longer matter.
For the stable releases, use:
# #
# Image > Stable
# #
docker buildx imagetools create \
--tag ghcr.io/thebinaryninja/tvapp2:1.1.0 \
--tag ghcr.io/thebinaryninja/tvapp2:1.1 \
--tag ghcr.io/thebinaryninja/tvapp2:1 \
--tag ghcr.io/thebinaryninja/tvapp2:latest \
sha256:0abe1b1c119959b3b1ccc23c56a7ee2c4c908c6aaef290d4ab2993859d807a3b \
sha256:e68b9de8669eac64d4e4d2a8343c56705e05e9a907cf0b542343f9b536d9c473
[+] Building 0.2s (4/4) FINISHED
=> [internal] pushing ghcr.io/thebinaryninja/tvapp2:latest 0.2s
=> [internal] pushing ghcr.io/thebinaryninja/tvapp2:1.1 0.2s
=> [internal] pushing ghcr.io/thebinaryninja/tvapp2:1 0.2s
=> [internal] pushing ghcr.io/thebinaryninja/tvapp2:1.1.0 0.2s
For the development releases, use:
# #
# Image > Development
# #
docker buildx imagetools create \
--tag ghcr.io/thebinaryninja/tvapp2:development \
sha256:8f36385a28c8f6eb7394d903c9a7a2765b06f94266b32628389ee9e3e3d7e69d \
sha256:c719ccb034946e3f0625003f25026d001768794e38a1ba8aafc9146291d548c5
[+] Building 0.1s (1/1) FINISHED
=> [internal] pushing ghcr.io/thebinaryninja/tvapp2:development 0.1s
Note
Compared to the stable release, which has 4 tags, the development release only has one tag.
Alternatively, you could use the `manifest create` command to merge multiple architecture images together into a single image. The top line with `thebinaryninja/tvapp2:latest` can be any name; however, all images after `--amend` MUST be existing images already uploaded to the registry.
docker manifest create ghcr.io/thebinaryninja/tvapp2:latest \
--amend ghcr.io/thebinaryninja/tvapp2:latest-amd64 \
--amend ghcr.io/thebinaryninja/tvapp2:latest-arm32v7 \
--amend ghcr.io/thebinaryninja/tvapp2:latest-arm64v8
docker manifest push ghcr.io/thebinaryninja/tvapp2:latest
In this example, we take the two images we created earlier and merge them into one. You can specify each image either by SHA256 digest or by tag:
# Example 1 (using tag)
docker manifest create ghcr.io/thebinaryninja/tvapp2:latest \
--amend ghcr.io/thebinaryninja/tvapp2:1.1.0-amd64 \
--amend ghcr.io/thebinaryninja/tvapp2:1.1.0-arm64
# Example 2 (using sha256 hash)
docker manifest create ghcr.io/thebinaryninja/tvapp2:latest \
--amend ghcr.io/thebinaryninja/tvapp2@sha256:0abe1b1c119959b3b1ccc23c56a7ee2c4c908c6aaef290d4ab2993859d807a3b \
--amend ghcr.io/thebinaryninja/tvapp2@sha256:e68b9de8669eac64d4e4d2a8343c56705e05e9a907cf0b542343f9b536d9c473
# Push manifest changes to registry
docker manifest push ghcr.io/thebinaryninja/tvapp2:latest
If you go back to your registry, you should now see multiple new entries, all with different tags. Two of the images are your old `amd64` and `arm64` images, and you should also have your official image with the four tags specified above. You can delete the two original images if you do not want them.
Registry v2: Existing `amd64` and `arm64` images combined into a single docker image with multiple architectures.
If you are pushing to Github's GHCR; the interface will look different, as Github merges all tags into a single listing, instead of Registry v2 listing each tag on its own:
Github GHCR: Existing `amd64` and `arm64` images combined into a single docker image with multiple architectures.
This node project includes build commands. In order to use them, you must install Node.js and npm on your machine:
sudo apt-get install nodejs npm
To build the project, `cd` into the project folder and run the build command:
cd /home/docker/tvapp2/
npm run docker:build:amd64 --VERSION=1.1.0 --BUILDDATE=20250325
The following is a list of the available commands you can pick from depending on how you would like to build TvApp2:
Command | Description |
---|---|
`docker:build:amd64` | Build image using `docker build` for `amd64` |
`docker:build:arm64` | Build image using `docker build` for `arm64` / `aarch64` |
`docker:buildx:amd64` | Build image using `docker buildx` for `amd64` |
`docker:buildx:arm64` | Build image using `docker buildx` for `arm64` / `aarch64` |
The run command above has several variables you must specify:
Variable | Description |
---|---|
`--VERSION=1.X.X` | The version to assign to the docker image |
`--BUILDDATE=20250325` | The date to assign to the docker image. Date format: `YYYYMMDD` |
`--ARCH=amd64` | Architecture for the image. Options: `amd64`, `arm64` |
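The npm commands above imply `scripts` entries in the project's `package.json` along these lines. This is a hypothetical sketch, not the repository's actual script definitions; classic npm exposes flags such as `--VERSION=1.1.0` to scripts as lowercased `$npm_config_*` environment variables:

```json
{
  "scripts": {
    "docker:build:amd64": "docker build --build-arg VERSION=$npm_config_version --build-arg BUILDDATE=$npm_config_builddate -t tvapp2:latest -f Dockerfile .",
    "docker:buildx:amd64": "docker buildx build --build-arg ARCH=amd64 --build-arg VERSION=$npm_config_version --platform linux/amd64 -t tvapp2:latest -f Dockerfile ."
  }
}
```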
To use the new TVApp2 image, you can either call it with the `docker run` command, or create a new `docker-compose.yml` and specify the image:
If you want to use the tvapp docker image in the `docker run` command, execute the following:
docker run -d --restart=unless-stopped \
--name tvapp2 \
-p 4124:4124 \
-e "DIR_RUN=/usr/bin/app" \
-e "TZ=Etc/UTC" \
-v ${PWD}/app:/usr/bin/app ghcr.io/thebinaryninja/tvapp2:latest
If you'd much rather use a `docker-compose.yml` file and call the tvapp image that way, create a new folder somewhere:
mkdir -p /home/docker/tvapp2
Then create a new `docker-compose.yml`:
sudo nano /home/docker/tvapp2/docker-compose.yml
Add the following to your `docker-compose.yml`:
services:
tvapp2:
container_name: tvapp2
image: ghcr.io/thebinaryninja/tvapp2:latest # Image: Github
# image: thebinaryninja/tvapp2:latest # Image: Dockerhub
# image: git.binaryninja.net/binaryninja/tvapp2:latest # Image: Gitea
hostname: tvapp2
restart: unless-stopped
volumes:
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock
- ./config:/config
- ./app:/usr/bin/app
environment:
- TZ=Etc/UTC
- DIR_RUN=/usr/bin/app
Once the `docker-compose.yml` is set up, you can now start your TVApp2 container:
cd /home/docker/tvapp2/
docker compose up -d
TVApp2 should now be running as a container. You can access it by opening your browser and going to:
http://container-ip:4124
This docker container contains the following env variables:
Env Var | Default | Description |
---|---|---|
`TZ` | `Etc/UTC` | Timezone for error / log reporting |
`WEB_IP` | `0.0.0.0` | IP to use for the webserver |
`WEB_PORT` | `4124` | Port to use for the webserver |
`URL_REPO` | `https://git.binaryninja.net/BinaryNinja/` | Determines where the data files will be downloaded from. Do not change this or you will be unable to get M3U and EPG data. |
`FILE_PLAYLIST` | `playlist.m3u8` | Filename for the M3U playlist file |
`FILE_EPG` | `xmltv.xml` | Filename for the XML guide data file |
`FILE_GZIP` | `xmltv.xml.gz` | Filename for the XML guide data compressed as gzip `.gz` |
`STREAM_QUALITY` | `hd` | Stream quality. Can be either `hd` or `sd` |
`DIR_BUILD` | `/usr/src/app` | Path inside the container where TVApp2 will be built |
`DIR_RUN` | `/usr/bin/app` | Path inside the container where TVApp2 will be placed after it is built |
`LOG_LEVEL` | `4` | Level of logging to display in console: `6` Trace & below, `5` Debug & below, `4` Info & below, `3` Notice & below, `2` Warn & below, `1` Error only |
These paths can be mounted and shared between the TVApp2 docker container and your host machine:
Container Path | Description |
---|---|
`/usr/bin/app` | Path where TVApp2 files will be placed once the app has been built. Includes `formatted.dat`, `xmltv.1.xml`, `urls.txt`, `node_modules`, and `package.json` |
`/config` | Where logs will be placed, as well as the webserver-generated SSL key and cert, `cert.key` and `cert.crt` |
Note
These steps are optional. If you do not use Traefik, you can skip this section. This is only for users who wish to put the TVApp2 container behind Traefik.
Our first step is to tell Traefik about our TVApp2 container. We highly recommend you utilize a Traefik dynamic file instead of labels; a dynamic file allows for automatic refreshing without the need to restart Traefik when a change is made.
If you decide to use labels instead of a dynamic file, any changes you make to your labels will require a restart of Traefik.
We will be setting up the following:
- A `middleware` to redirect http to https
- A `route` to access TVApp2 via http (optional)
- A `route` to access TVApp2 via https (secure)
- A `service` to tell Traefik how to access your TVApp2 container
- A `resolver` so that Traefik can generate and apply a wildcard SSL certificate
To add TVApp2 to Traefik, you will need to open your `docker-compose.yml` and apply the following labels to your TVApp2 container. Ensure you change `domain.lan` to your actual domain name.
```yaml
services:
  tvapp2:
    container_name: tvapp2
    image: ghcr.io/thebinaryninja/tvapp2:latest               # Image: Github
    # image: thebinaryninja/tvapp2:latest                     # Image: Dockerhub
    # image: git.binaryninja.net/binaryninja/tvapp2:latest    # Image: Gitea
    hostname: tvapp2
    restart: unless-stopped
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config:/config
      - ./app:/usr/bin/app
    environment:
      - TZ=Etc/UTC
      - DIR_RUN=/usr/bin/app
    labels:
      # General
      - traefik.enable=true
      # Router > http
      - traefik.http.routers.tvapp2-http.rule=Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      - traefik.http.routers.tvapp2-http.service=tvapp2
      - traefik.http.routers.tvapp2-http.entrypoints=http
      - traefik.http.routers.tvapp2-http.middlewares=https-redirect@file
      # Router > https
      - traefik.http.routers.tvapp2-https.rule=Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      - traefik.http.routers.tvapp2-https.service=tvapp2
      - traefik.http.routers.tvapp2-https.entrypoints=https
      - traefik.http.routers.tvapp2-https.tls=true
      - traefik.http.routers.tvapp2-https.tls.certresolver=cloudflare
      - traefik.http.routers.tvapp2-https.tls.domains[0].main=domain.lan
      - traefik.http.routers.tvapp2-https.tls.domains[0].sans=*.domain.lan
      # Load Balancer
      - traefik.http.services.tvapp2.loadbalancer.server.port=4124
      - traefik.http.services.tvapp2.loadbalancer.server.scheme=http
```
After you've added the labels above, skip the `dynamic.yml` section and go straight to the `static.yml` section.
If you decide not to use labels and want to use a dynamic file, you will first need to create it. The Traefik dynamic file is usually named `dynamic.yml`. We need to add a new `middleware`, `router`, and `service` to this file so that Traefik knows about our new TVApp2 container and where to reach it.
```yaml
http:
  middlewares:
    https-redirect:
      redirectScheme:
        scheme: "https"
        permanent: true
  routers:
    tvapp2-http:
      service: tvapp2
      rule: Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      entryPoints:
        - http
      middlewares:
        - https-redirect@file
    tvapp2-https:
      service: tvapp2
      rule: Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      entryPoints:
        - https
      tls:
        certResolver: cloudflare
        domains:
          - main: "domain.lan"
            sans:
              - "*.domain.lan"
  services:
    tvapp2:
      loadBalancer:
        servers:
          # TVApp2's webserver listens on plain HTTP (default port 4124)
          - url: "http://tvapp2:4124"
```
These entries will go in your Traefik `static.yml` file. Any changes made to this file require that you restart Traefik afterward.

Note

This step is only for users who opted to use the dynamic file method. Users who opted to use labels can skip to the section certificatesResolvers.

Ensure you add the following new section to your `static.yml`:
```yaml
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    network: traefik
    watch: true
  file:
    filename: "/etc/traefik/dynamic.yml"
    watch: true
```
The code above is what enables the use of a dynamic file instead of labels. Change `/etc/traefik/dynamic.yml` if you are placing your dynamic file in a different location. This path is relative to the inside of the container, not your host machine's mounted volume path. Traefik keeps most of its files in the `/etc/traefik/` folder.

After you add the above, open your Traefik `docker-compose.yml` file and mount a new volume so that Traefik knows where your new dynamic file is:
```yaml
services:
  traefik:
    container_name: traefik
    image: traefik:latest
    hostname: traefik
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/localtime:/etc/localtime:ro
      - ./config/traefik.yml:/etc/traefik/traefik.yml:ro
      - ./config/dynamic.yml:/etc/traefik/dynamic.yml:ro
```
You must ensure you add the new volume shown above:

`./config/dynamic.yml:/etc/traefik/dynamic.yml:ro`

On your host machine, place the `dynamic.yml` file in a sub-folder called `config`, which should sit in the same folder as your Traefik `docker-compose.yml` file. If you want to change this location, ensure you also change the mounted volume path above.
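For example, from the folder containing your Traefik `docker-compose.yml` (the relative paths here follow the volume mounts above; adjust them if your layout differs):

```shell
# Create the config sub-folder next to docker-compose.yml
# and an empty dynamic file for Traefik to watch
mkdir -p ./config
touch ./config/dynamic.yml
```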
After you have completed this, proceed to the section certificatesResolvers.
Note

This step is required no matter which option you picked above, both for dynamic file setups and for people using labels.

Open your Traefik `static.yml` file. We need to define the `certResolver` that we referenced above, either in your dynamic file or in your labels. To define it, we will add a new section labeled `certificatesResolvers`. We are going to use Cloudflare in this example, but you can use any provider from Traefik's list of supported ACME providers:
```yaml
certificatesResolvers:
  cloudflare:
    acme:
      email: youremail@address.com
      storage: /cloudflare/acme.json
      keyType: EC256
      preferredChain: 'ISRG Root X1'
      dnsChallenge:
        provider: cloudflare
        delayBeforeCheck: 15
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
        disablePropagationCheck: true
```
Once you pick the DNS / SSL provider you want to use, check whether that provider requires any special environment variables. The Providers page lists all providers along with the environment variables each one needs.

In our example, since we are using Cloudflare for `dnsChallenge` -> `provider`, we must set the following environment variables:

- `CF_API_EMAIL`
- `CF_API_KEY`
Create a `.env` environment file in the same folder where your Traefik `docker-compose.yml` file is located, and add the following:

```
CF_API_EMAIL=yourcloudflare@email.com
CF_API_KEY=Your-Cloudflare-API-Key
```
Save the `.env` file and exit. For these environment variables to be detected, you must restart your Traefik container; until you do, Traefik will not be able to generate your new SSL certificates. Before restarting, we need to create one more folder and file; this is where Traefik will store the SSL certificate generated through Cloudflare.

Run the commands below, which will do the following:

- Create a new folder called `cloudflare`
- Create a new file named `acme.json`
- Set the permissions of the `acme.json` file to `chmod 600`
  - If you skip this step, Traefik will fail to start; the permissions must be restricted in order to protect the file.
```shell
mkdir -p /home/docker/traefik/cloudflare
touch /home/docker/traefik/cloudflare/acme.json
chmod 0600 /home/docker/traefik/cloudflare/acme.json
```
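If you want to sanity-check the result before restarting Traefik, the same steps can be run against a scratch directory and the mode verified with `stat` (this sketch assumes GNU coreutils `stat`; adjust the path to your real `acme.json` location):

```shell
# Demo in the current directory: create acme.json, lock it down, verify the mode
mkdir -p ./cloudflare
touch ./cloudflare/acme.json
chmod 0600 ./cloudflare/acme.json
mode="$(stat -c '%a' ./cloudflare/acme.json)"
[ "$mode" = "600" ] && echo "acme.json permissions OK"
```

If the final check prints nothing, re-run the `chmod` step; Traefik refuses to start when the file is readable by other users.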
The `acme.json` file will not be populated with an SSL certificate until the next time you restart Traefik. You can wait to restart until you finish editing the `static.yml` file, as there are more items to add below.

Finally, inside the Traefik `static.yml`, we need to make sure our `entryPoints` are configured. Add the following to the `static.yml` file only if you DON'T have entry points set yet:
```yaml
entryPoints:
  http:
    address: :80
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: :443
    http3: {}
    http:
      tls:
        options: default
        certResolver: cloudflare
        domains:
          - main: domain.lan
            sans:
              - '*.domain.lan'
```
If your website is behind Cloudflare's proxy service, you need to modify the `entryPoints` above so that Cloudflare's IP addresses are automatically trusted, which means your entry points will look a bit different.

In the example below, we add `forwardedHeaders` -> `trustedIPs` and include all of Cloudflare's published IP ranges:
```yaml
entryPoints:
  http:
    address: :80
    forwardedHeaders:
      trustedIPs: &trustedIps
        - 103.21.244.0/22
        - 103.22.200.0/22
        - 103.31.4.0/22
        - 104.16.0.0/13
        - 104.24.0.0/14
        - 108.162.192.0/18
        - 131.0.72.0/22
        - 141.101.64.0/18
        - 162.158.0.0/15
        - 172.64.0.0/13
        - 173.245.48.0/20
        - 188.114.96.0/20
        - 190.93.240.0/20
        - 197.234.240.0/22
        - 198.41.128.0/17
        - 2400:cb00::/32
        - 2606:4700::/32
        - 2803:f800::/32
        - 2405:b500::/32
        - 2405:8100::/32
        - 2a06:98c0::/29
        - 2c0f:f248::/32
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: :443
    http3: {}
    forwardedHeaders:
      trustedIPs: *trustedIps
    http:
      tls:
        options: default
        certResolver: cloudflare
        domains:
          - main: domain.lan
            sans:
              - '*.domain.lan'
```
Remember to change `domain.lan` to your actual domain name. Save the files, then give Traefik and your TVApp2 container a restart. After the restart completes, you should be able to access TVApp2 in your browser at `https://tvapp2.domain.lan`.
This section will not explain how to install and set up Authentik; we are only going to cover adding TVApp2 integration to an existing Authentik install.

Sign into the Authentik admin panel, go to the left-side navigation, and select Applications -> Providers. Then, at the top of the new page, click Create.

Authentik: Select Applications › Providers

For the provider, select Proxy Provider.

Authentik: Select desired provider type, or select Proxy Provider
Add the following provider values:

- Name: `TVApp2 ForwardAuth`
- Authentication Flow: `default-source-authentication (Welcome to authentik!)`
- Authorization Flow: `default-provider-authorization-implicit-consent (Authorize Application)`

Select Forward Auth (single application):

- External Host: `https://tvapp2.domain.lan`

Authentik: Create new Provider
Once finished, click Create. Then on the left-side menu, select Applications -> Applications. Then at the top of the new page, click Create.
Authentik: Select Applications › Applications
Add the following parameters:

- Name: `TVApp2 IPTV`
- Slug: `tvapp2`
- Group: `IPTV`
- Provider: `TVApp2 ForwardAuth`
- Backchannel Providers: `None`
- Policy Engine Mode: `any`
Save, and then on the left-side menu, select Applications -> Outposts:

Authentik: Select Applications › Outposts

Find your Outpost and edit it. Move `TVApp2 IPTV` to the right-side Selected Applications box.

Authentik: Assign application to outpost
If you followed our Traefik guide above, you were shown how to add your TVApp2 container to Traefik using either the dynamic file or labels. Depending on which option you picked, follow the matching section below:

- For label users, go to the section Labels below.
- For dynamic file users, go to the section dynamic file below.
Open your TVApp2 `docker-compose.yml` and modify your labels to include Authentik as a middleware by adding `authentik@file` to the label `traefik.http.routers.tvapp2-https.middlewares`. You should end up with something similar to the example below:
```yaml
services:
  tvapp2:
    container_name: tvapp2
    image: ghcr.io/thebinaryninja/tvapp2:latest               # Image: Github
    # image: thebinaryninja/tvapp2:latest                     # Image: Dockerhub
    # image: git.binaryninja.net/binaryninja/tvapp2:latest    # Image: Gitea
    restart: unless-stopped
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config:/config
      - ./app:/usr/bin/app
    environment:
      - TZ=Etc/UTC
      - DIR_RUN=/usr/bin/app
    labels:
      # General
      - traefik.enable=true
      # Router > http
      - traefik.http.routers.tvapp2-http.rule=Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      - traefik.http.routers.tvapp2-http.service=tvapp2
      - traefik.http.routers.tvapp2-http.entrypoints=http
      - traefik.http.routers.tvapp2-http.middlewares=https-redirect@file
      # Router > https
      - traefik.http.routers.tvapp2-https.rule=Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      - traefik.http.routers.tvapp2-https.service=tvapp2
      - traefik.http.routers.tvapp2-https.entrypoints=https
      - traefik.http.routers.tvapp2-https.middlewares=authentik@file
      - traefik.http.routers.tvapp2-https.tls=true
      - traefik.http.routers.tvapp2-https.tls.certresolver=cloudflare
      - traefik.http.routers.tvapp2-https.tls.domains[0].main=domain.lan
      - traefik.http.routers.tvapp2-https.tls.domains[0].sans=*.domain.lan
      # Load Balancer (TVApp2 serves plain HTTP on port 4124)
      - traefik.http.services.tvapp2.loadbalancer.server.port=4124
      - traefik.http.services.tvapp2.loadbalancer.server.scheme=http
```
If you opted to use the dynamic file, open your Traefik `dynamic.yml` file and apply the `authentik@file` middleware so it looks something like the following:
```yaml
http:
  routers:
    tvapp2-https:
      service: tvapp2
      rule: Host(`tvapp2.localhost`) || Host(`tvapp2.domain.lan`)
      entryPoints:
        - https
      middlewares:
        - authentik@file
      tls:
        certResolver: cloudflare
        domains:
          - main: "domain.lan"
            sans:
              - "*.domain.lan"
```
After you've done everything above, restart your Traefik and Authentik containers. Once they come back up, you should be prompted to authenticate with Authentik when accessing `tvapp2.domain.lan`. Once you authenticate, you will be redirected to the TVApp2 home screen, which is where you will get your M3U and EPG files.
If you have issues building or running your TVApp2 docker image, refer to the sections below:
This error occurs at run-time when attempting to spin up your TVApp2 docker container. If you receive this error, restart your TVApp2 docker container. Also ensure the container has network access through your docker network, so that it can connect to our repository and fetch the data files it needs to generate your playlist.

If the error continues after doing the above, delete the existing image and re-pull from one of our official sources.
Build Error: s6-rc-compile: fatal: invalid /etc/s6-overlay/s6-rc.d/certsync/type: must be oneshot, longrun, or bundle
This error means you are attempting to build with files that use CRLF line endings instead of LF (CR = Carriage Return, LF = Line Feed).

The CRLF line break type is commonly used in Windows and DOS-based text files. It combines two characters: Carriage Return (`\r`) and Line Feed (`\n`).

The LF line break type is predominantly used on Unix, Linux, and macOS, and in modern text editors, including those for web development. In this convention, a single Line Feed character `\n` represents a line break; there is no preceding Carriage Return character.
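The difference is easy to see by dumping the raw bytes of a line with `printf` and `od` (both standard on Linux):

```shell
# A Windows-style line: the line ends with the two bytes \r \n
printf 'hello\r\n' | od -c | head -n 1

# A Unix-style line: the line ends with \n only
printf 'hello\n' | od -c | head -n 1
```

The first dump shows a `\r` byte immediately before the `\n`; the second does not. That extra `\r` is what breaks s6-overlay's `type` files.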
If you attempt to build the TVApp2 docker image on Linux but have modified the files in Windows, you may receive the following error:

```
s6-rc-compile: fatal: invalid /etc/s6-overlay/s6-rc.d/certsync/type: must be oneshot, longrun, or bundle
```
To correct this issue, `cd` into the folder with the TVApp2 files and convert them to LF using the `dos2unix` utility. The command below converts all files to LF but EXCLUDES the following:

- the `.git` folder
- `.jpg` images
- `.jpeg` images
- `.png` images
```shell
cd /path/to/tvapp2
find ./ -type f | grep -Ev '\.git/|\.(jpg|jpeg|png)$' | sudo xargs dos2unix --
```
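To check which text files still contain CRLF endings after the conversion, you can search for a raw carriage-return byte (GNU grep; `-I` skips binary files such as images):

```shell
# List text files that still contain a carriage return (CRLF endings),
# skipping the .git folder and binary files
grep -rIl --exclude-dir=.git "$(printf '\r')" . || echo "no CRLF files found"
```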
Warning

Do not run `dos2unix` on your `.git` folder, or you will corrupt your git indexes and be unable to push commits.

If you accidentally run `dos2unix` on your `.git` folder, do NOT push anything to git; pull a fresh copy from the repo instead.
There are multiple errors you may receive when attempting to run your TVApp2 docker image, including any of the following:

```
Failed to open apk database: Permission denied
s6-rc: warning: unable to start service init-adduser: command exited 127
unable to exec /etc/s6-overlay/s6-rc.d/init-envfile/run: Permission denied
/etc/s6-overlay/s6-rc.d/init-adduser/run: line 34: aetherxown: command not found
/etc/s6-overlay/s6-rc.d/init-adduser/run: /usr/bin/aetherxown: cannot execute: required file not found
```
If you receive any of the above errors, it means your `run` files do not have execute permissions (`+x`). Run the following command in the root directory of your TVApp2 project folder:
```shell
find ./ -name 'run' -exec sudo chmod +x {} \;
```
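To confirm nothing was missed, you can list any `run` files that still lack the execute bit; on a correctly-prepared tree this prints nothing (GNU find assumed):

```shell
# Print any 'run' files still missing the user execute bit;
# no output means all of them are executable
find ./ -name 'run' ! -perm -u+x
```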
After you have set these permissions, re-build your docker image using `docker build` or `docker buildx`, then spin the container up.
The following are other things to take into consideration when working with the TVApp2 image:

The TVApp2 docker image is built on Alpine Linux but also includes the `bash` package. Use one of the following to access a shell inside the container:
```shell
docker exec -it tvapp2 ash
docker exec -it tvapp2 sh
docker exec -it tvapp2 bash
```
Note

These instructions are for advanced users only who wish to build their own image.

The thebinaryninja/tvapp2 image supports adding custom scripts that will be run when the container starts. To add a new custom script to the container, create a new folder inside the container source files' `/root` folder:
```shell
mkdir -p /root/custom-cont-init.d/
```

Within this new folder, add your custom script:

```shell
nano /root/custom-cont-init.d/my_customs_script
```
Populate your new custom script with the bash code you want to run, such as the example below:
```bash
#!/bin/bash

echo "**** INSTALLING BASH ****"
apk add --no-cache bash
```
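Conceptually, the container's init walks that folder and executes each file in order at startup. The loop below is a simplified local sketch of that behavior, not the actual s6-overlay implementation; the folder and script names are illustrative:

```shell
# Simplified sketch (NOT the container's real s6 init): run every script
# found in a custom-init folder, in order, as would happen at startup.
INIT_DIR=./custom-cont-init.d
mkdir -p "$INIT_DIR"
# Example script, mimicking a user-provided my_customs_script
printf 'echo "hello from my_customs_script" > greeted.txt\n' > "$INIT_DIR/my_customs_script"
for script in "$INIT_DIR"/*; do
  [ -f "$script" ] && sh "$script"   # skip if the folder is empty
done
```

Running this produces a `greeted.txt` file written by the example script, which is exactly the kind of side effect your real startup scripts would perform inside the container.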
When the container starts, this new script will automatically be loaded and executed. You can also provide scripts via the `docker-compose.yml` file by mounting a new volume:
```yaml
services:
  tvapp2:
    volumes:
      - ./config:/config
      - ./app:/usr/bin/app
      - ./custom-scripts:/custom-cont-init.d:ro
```
Note

If using compose, we recommend mounting the scripts read-only (`:ro`) so that container processes cannot write to the location.
Warning

The folder `/root/custom-cont-init.d` MUST be owned by `root`. If it is not, the folder will be renamed and a new empty folder created in its place. This prevents remote code execution via scripts placed in that folder.
The thebinaryninja/tvapp2 image already ships a custom script called `/root/custom-cont-init.d/plugins`. Do NOT edit this script; it automatically downloads the official TVApp2 plugins and adds them to the container.
This repository and project serve in memory of the developer dtankdempse. His work lives on in this project; while much of it has changed, it all started because of him.
We are always looking for contributors. If you feel you can provide something useful to TVApp2, we'd love to review your suggestion. Before submitting your contribution, please review the following resources:
Want to help but can't write code?
- Review active questions by our community and answer the ones you know.
The following people have helped get this project going: