llama.cpp shipped new support for vision models this morning, including macOS binaries (albeit quarantined, so you have to take extra steps before they will run) that let you run vision models in the terminal or via a localhost web UI.
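
The extra steps come down to macOS Gatekeeper: downloaded binaries carry a com.apple.quarantine extended attribute that blocks them from launching. A minimal Python sketch of clearing it, assuming the release archive was unpacked somewhere under ~/Downloads (the exact path below is a placeholder, not the real release name):

    import subprocess
    from pathlib import Path

    def unquarantine(path: Path) -> None:
        # Recursively delete the com.apple.quarantine attribute so
        # Gatekeeper stops blocking the downloaded binaries.
        subprocess.run(
            ["xattr", "-rd", "com.apple.quarantine", str(path)],
            check=True,
        )

    # Placeholder: wherever you extracted the llama.cpp macOS release zip.
    unquarantine(Path.home() / "Downloads" / "llama.cpp-macos")

This just shells out to the standard xattr tool; running xattr -rd com.apple.quarantine on the directory directly does the same thing.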

My notes on how to get it running on a Mac: simonwillison.net/2025/May/10/

