With llama.cpp getting GPT4All inference support the same day it came out, I feel like llama.cpp might soon become a general-purpose, high-performance inference library/toolkit. Excited!
Heh, things seem to be moving in this direction, but there's still a very long way to go. Who knows, though - the number of contributions to the project keeps growing. I guess once we have a solid foundation for LLM inference, we can think about supporting Stable Diffusion (SD) as well.