mirror of https://github.com/nomic-ai/gpt4all
non-llama: explicitly greedy sampling for temp<=0 (#901)
Copied directly from llama.cpp. Without this, temp=0.0 just scales all the logits to infinity and gives bad output.
parent b14953e136
commit 47fbc0e309