Z80-μLM is a 2-bit quantized language model small enough to run on an 8-bit Z80 processor via @deejayy (Elefántgyűjtő)
https://lobste.rs/s/t5niao #ai #retrocomputing
https://github.com/HarryR/z80ai
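For context on what "2-bit quantized" means in practice, below is a minimal sketch of one plausible packing scheme: four 2-bit weight codes per byte, dequantized through a small lookup table. This is an illustration only; the packing layout, code values, and table entries are assumptions and not taken from the z80ai repository.

```c
/* Illustrative sketch (not the z80ai implementation): pack four 2-bit
 * weight codes per byte and expand them through a 4-entry table. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical dequantization table mapping the four 2-bit codes to
 * small signed values; a real model would calibrate these per layer. */
static const int8_t DEQUANT[4] = { -2, -1, 1, 2 };

/* Extract the i-th 2-bit code from a packed weight buffer. */
static int8_t weight_at(const uint8_t *packed, size_t i)
{
    uint8_t byte = packed[i >> 2];                  /* 4 codes per byte */
    uint8_t code = (byte >> ((i & 3) * 2)) & 0x03;  /* low bits first   */
    return DEQUANT[code];
}

int main(void)
{
    /* Codes 0..3 packed low bits first: 0b11100100 = 0xE4 */
    const uint8_t packed[] = { 0xE4 };
    for (size_t i = 0; i < 4; i++)
        printf("w[%zu] = %d\n", i, weight_at(packed, i));
    return 0;
}
```

With only four possible weight values, an entire layer's weights fit in a quarter of the bytes an 8-bit representation would need, which is what makes fitting a model into a Z80's address space plausible at all.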