Experiments and visualizations for understanding inconsistency in LLM outputs.
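As a rough illustration of what "inconsistency" can mean here, the sketch below compares several sampled outputs for the same prompt using a simple pairwise token-overlap (Jaccard) dissimilarity. This is an assumed, minimal metric for illustration only, not necessarily the measure implemented in this repository.

    from itertools import combinations

    def jaccard(a: str, b: str) -> float:
        """Token-set overlap between two strings (1.0 = identical vocabulary)."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        if not ta and not tb:
            return 1.0
        return len(ta & tb) / len(ta | tb)

    def inconsistency(samples: list[str]) -> float:
        """Mean pairwise dissimilarity across repeated samples of one prompt."""
        pairs = list(combinations(samples, 2))
        return sum(1.0 - jaccard(a, b) for a, b in pairs) / len(pairs)

    # Hypothetical samples from rerunning the same prompt several times.
    samples = [
        "The capital of Australia is Canberra.",
        "Canberra is the capital of Australia.",
        "The capital of Australia is Sydney.",
    ]
    print(inconsistency(samples))  # higher values = less consistent outputs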
Set up a Python virtual environment, install the dependencies, and start the backend server:
$ python -m venv ts_py_server && . ts_py_server/bin/activate && pip install -r requirements.txt
$ python -m server
In a separate terminal, build the frontend code, watching and rebuilding on changes:
$ cd ui && yarn && yarn start
Navigate to http://localhost:5432/
A utility Jupyter notebook is also included for debugging and quick experimentation.
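To open it, a typical invocation from the repository root (assuming Jupyter is installed in the same environment) is:
$ jupyter notebook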