Tags
Tags mark specific points in the repository's history as important.
b1086 · edd4c148 · llama : more tokenizer fixes (#2810) · Aug 27, 2023
b1085 · 1591e2e5 · ggml : detect SSSE3 (#2825) · Aug 27, 2023
b1083 · c1ac54b7 · server : add `/detokenize` endpoint (#2802) · Aug 27, 2023
b1081 · c7d92e6d · llama : use Unicode Escape Sequence to replace encoded characters (#2814) · Aug 26, 2023
b1079 · 741ca7dd · llama : move #includes out of _GNU_SOURCE conditional (#2817) · Aug 26, 2023
b1078 · 72f895c9 · main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (#1528) · Aug 26, 2023
b1077 · 50526f37 · llama : use std::abs in llama_sample_tail_free (#2800) · Aug 26, 2023
b1076 · 04f4b1eb · k-quants : remove unnecessary tensor shape restrictions (#2811) · Aug 26, 2023
b1075 · 75923754 · Better perplexity for 2- and 3-bit quantization for LLaMA-v2-70B (#2807) · Aug 26, 2023
b1074 · 771551a7 · Fix HellaSwag (#2805) · Aug 26, 2023
b1071 · 2ba83c86 · Fix spm whitespaces (#2806) · Aug 26, 2023
ci_cublas_linux-b1071-5562e3e · 5562e3e6 · temporarily disable broken 512 build · Aug 26, 2023
b1069 · 232caf3c · llama : fix struct decl (#2790) · Aug 25, 2023
b1068 · d046dcee · Faster perplexity computation (#2786) · Aug 25, 2023
b1067 · c82742ac · llama : add llama_beam_search() (#2267) · Aug 25, 2023
b1065 · 154725c5 · llama-bench : add model sizes (#2771) · Aug 25, 2023
b1063 · 29674ab4 · server : display token probabilities in the UI (#2489) · Aug 25, 2023
b1060 · 6bbc598a · ROCm Port (#1087) · Aug 25, 2023
b1059 · 3f460a2b · cuda : add RoPE kernel for mode == 2 (NeoX) (#2760) · Aug 25, 2023
b1057 · b91ad7f4 · ggml-alloc : enlarge size of parse_seq (#2776) · Aug 25, 2023