Our paper on bidirectional human-AI visual alignment is out at the ICLR 2025 Workshop on Bidirectional Human-AI Alignment!
We introduce LVLM-Aided Visual Alignment (LVLM-VA), which aligns small vision models with human domain knowledge using large vision-language models. A bidirectional interface translates model behavior into natural language for the expert, and translates expert instructions into image-level critiques for the model, improving performance while reducing the need for fine-grained human feedback.
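To make the two translation directions concrete, here is a minimal sketch of such a bidirectional loop. The function names and data shapes are illustrative stand-ins, not the paper's actual API; the LVLM calls are stubbed out with simple placeholder logic.

```python
# Hypothetical sketch of a bidirectional interface between a small vision
# model and a human expert, mediated by an LVLM. All names are illustrative.

def lvlm_describe(predictions):
    """Model -> human direction: summarize model behavior in natural
    language (stub standing in for an LVLM call)."""
    errors = [p for p in predictions if p["pred"] != p["label"]]
    return f"The model misclassified {len(errors)} of {len(predictions)} images."

def lvlm_critique(instruction, image_id):
    """Human -> model direction: turn a high-level expert instruction into
    an image-level critique (stub standing in for an LVLM call)."""
    return {"image_id": image_id,
            "critique": f"Per instruction '{instruction}', re-examine this image."}

# Toy predictions from a small vision model.
predictions = [
    {"id": 0, "pred": "cat", "label": "cat"},
    {"id": 1, "pred": "dog", "label": "cat"},
]

# Direction 1: model behavior -> natural-language summary for the expert.
summary = lvlm_describe(predictions)

# Direction 2: one coarse expert instruction -> per-image critiques,
# so the expert never has to annotate individual images.
critiques = [lvlm_critique("focus on ear shape", p["id"])
             for p in predictions if p["pred"] != p["label"]]
```

The key point the sketch captures is that the expert interacts only at the level of language (reading the summary, issuing one instruction), while the LVLM fans that instruction out into fine-grained, image-level feedback.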