I got LLMs running locally on my System76 Lemur Pro w/ Intel GPU, using open-source models

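The post doesn’t name a runtime, but as a hedged illustration, here’s a minimal sketch of local inference via the llama-cpp-python bindings. The model path is hypothetical, and offloading to an Intel GPU assumes a llama.cpp build with the SYCL or Vulkan backend:

```python
# Minimal local-inference sketch (assumption: llama-cpp-python;
# the post doesn't say which runtime is actually in use).
# Needs a GGUF model file and, for Intel GPU offload, a build
# of llama.cpp with the SYCL or Vulkan backend.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=-1,  # offload all layers to the GPU (0 = CPU only)
    n_ctx=4096,       # context window
)

out = llm(
    "Explain in one sentence why local inference keeps data private.",
    max_tokens=64,
    temperature=0.2,  # lower temperature = fewer overconfident made-up answers
)
print(out["choices"][0]["text"].strip())
```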
The Good…
- I don’t pay for costly, token-based API usage (no “slot machine” business model)
- I don’t have to send anything off my machine, not my code or my ideas
- I don’t send requests to large data centers that consume massive amounts of energy
- I gain some understanding of how these models work, plus strategies to minimize flops and overly confident made-up answers (a sketch of one such setup is above)

And the Bad… (1/2)
