Wow, that’s an old model. Great that it works for you, but have you tried some more modern ones? They’re generally considered a lot more capable at the same size
Increase the context length, and probably enable flash attention in Ollama too. Llama 3.1 supports up to 128k context length, for example. That’s in tokens, and a token is on average a bit under 4 characters.
Note that a higher context length requires more RAM and is slower, so you ideally want to find a sweet spot for your use case and hardware. Flash attention makes this more efficient.
Oh, and the model needs to have been trained on larger contexts, otherwise it tends to handle them poorly. So check the maximum context length the model you want to use was trained for.
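For example, here’s a minimal sketch of raising the context window through Ollama’s HTTP API (assuming Ollama is running locally on its default port; the model name and `num_ctx` value are just illustrations):

```python
import requests

# Flash attention is toggled server-side, e.g. start the server with:
#   OLLAMA_FLASH_ATTENTION=1 ollama serve

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Summarize this document: ...",
        "stream": False,
        # Raise the context window from the default.
        # More context = more RAM/VRAM and slower prompt processing.
        "options": {"num_ctx": 32768},
    },
)
print(resp.json()["response"])
```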
Sounds a bit like the Worldwar series by Harry Turtledove.
And is a woman a combatant factory?
I still use HTTP a lot for internal stuff running on my own network. There’s no spying there… I hope… And SSL for local-network-only services is a total PITA.
So I really hope browsers won’t adopt HTTPS-only.
But even if you use a GoMommy extra-super-duper-triple-snake-oil-security-checked SSL cert, if I can trick Let’s Encrypt into issuing a cert for that domain, I still have a valid cert for your site.
I doubt the disk will bottleneck at 40 MB/s when doing sequential writes. Torrent downloads are usually heavy random writes, which is about the worst thing you can do to an HDD.
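You can see the difference yourself with a rough sketch like this (file size and block size are arbitrary assumptions; the OS page cache will soften the effect, but on an HDD random writes still lose badly):

```python
import os, random, time

PATH = "testfile.bin"          # hypothetical scratch file
SIZE = 256 * 1024 * 1024       # 256 MiB test file
BLOCK = 16 * 1024              # 16 KiB writes, roughly torrent-block-sized

block = os.urandom(BLOCK)
offsets = list(range(0, SIZE, BLOCK))

def bench(shuffle):
    order = offsets[:]
    if shuffle:
        random.shuffle(order)  # scatter writes across the file, like a torrent
    with open(PATH, "wb") as f:
        f.truncate(SIZE)
        start = time.time()
        for off in order:
            f.seek(off)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually hits the disk
    return SIZE / (time.time() - start) / 1e6  # MB/s

print(f"sequential: {bench(False):.0f} MB/s")
print(f"random:     {bench(True):.0f} MB/s")
os.remove(PATH)
```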
Llama 3 8B can be run in 6 GB of VRAM, and it’s fairly competent. Gemma has a 9B I think, which would also be worth looking into.
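The rough math behind that, as a back-of-the-envelope sketch (the bits-per-weight and overhead figures are loose assumptions; the KV cache also grows with context length):

```python
# Rough VRAM estimate for a 4-bit quantized model.
params_b = 8                                  # 8B parameters
bits_per_weight = 4.5                         # typical Q4_K_M-style quantization (assumption)
weights_gb = params_b * bits_per_weight / 8   # ~4.5 GB for the weights
overhead_gb = 1.0                             # loose allowance for KV cache + buffers (assumption)
print(f"~{weights_gb + overhead_gb:.1f} GB VRAM")  # ~5.5 GB, fits in 6 GB
```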
No, all sizes of Llama 3.1 should be able to handle the same context size. The difference is in the “smarts” of the model: bigger models are better at reading between the lines, and at higher-level understanding and reasoning.