@zkatkat Hi Kat, could I ask whether the LLMs in question are specifically those of the major players, or LLMs in general?
I can understand if it's against the likes of Anthropic, Google, and OpenAI, given what we know about how they operate and the resources involved in training their models. But if it's the latter, is there a line past which some LLM could ever be seen as acceptable?
That is, if a model could be trained entirely on green energy, without impacting any communities, respecting copyright and robots.txt when crawling (I'm being deliberately simplistic, as I don't know enough about all this to speak with any authority), and could then run locally on people's machines for inference - would that be acceptable? Or is the concern the technology itself, that an LLM may never be seen as fully trustworthy given how it functions?