# tinkering-with-vlms

Various tinkering scripts and tools for experimenting with vision-language models (VLMs), with a focus on providing more control and consistency in llava-based models, primarily through control vectors and per-layer modification of a model's weight directions and scales.
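As a rough illustration of the kind of interventions described above, the sketch below shifts hidden states along a control direction and rescales the component of a layer's weight matrix along a chosen direction. It is a minimal sketch assuming PyTorch tensors; the function names and the way the direction is obtained are illustrative, not this repository's actual API.

```python
# Minimal sketch (illustrative, not the repo's API): steer hidden states
# with a control vector, and rescale a weight matrix along a direction.
import torch

def apply_control_vector(hidden_states: torch.Tensor,
                         control_vector: torch.Tensor,
                         strength: float = 1.0) -> torch.Tensor:
    """Shift hidden states along a precomputed control direction."""
    direction = control_vector / control_vector.norm()
    return hidden_states + strength * direction

def scale_weight_direction(weight: torch.Tensor,
                           direction: torch.Tensor,
                           scale: float) -> torch.Tensor:
    """Rescale the component of each weight row that lies along `direction`."""
    d = direction / direction.norm()
    # project each row of `weight` onto `d`, then rescale that component
    projection = (weight @ d).unsqueeze(-1) * d
    return weight + (scale - 1.0) * projection
```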

## License

While the code in here is MIT-licensed, it borrows heavily from other control vector / abliteration projects.

This license does not apply to the xtuner, llava, llama, and hunyuanvideo models, which have their own licenses and terms here: