Tags
Tags mark specific points in the repository's history as important.
b1617 · 05cd6e50 · server : recognize cache_prompt parameter in OAI API (#4347) · Dec 06, 2023
b1616 · caa92492 · common : fix compile warning · Dec 06, 2023
b1615 · da5eaef1 · speculative : support `--color` (#4343) · Dec 06, 2023
b1614 · 5f6e0c0d · grammar : pre-computed pieces + reserve mem + less string copies (#4330) · Dec 05, 2023
b1613 · 5aa365d8 · llama : allow overriding GGUF metadata when loading model (#4092) · Dec 05, 2023
b1612 · 52c8bc3c · sampling : custom samplers order (#4285) · Dec 05, 2023
b1611 · e4b76bbe · swift : revert compiler checks for swift package (#4332) · Dec 05, 2023
b1610 · 23b5e12e · simple : update error message for KV cache check (#4324) · Dec 04, 2023
b1609 · d208995c · swift : fix concatenation method to avoid invalid UTF8 stringfication (#4325) · Dec 04, 2023
b1608 · 5c9f90cb · swift : fix prompt tokenization logic (#4321) · Dec 04, 2023
b1607 · 4fa44e84 · grammar-parser : fix typo (#4318) · Dec 04, 2023
b1606 · fbbc4282 · ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (#4308) · Dec 03, 2023
b1605 · adf3de4f · ggml : fix soft max out-of-bounds access (#4307) · Dec 03, 2023
b1604 · 33e171d1 · server : fix OpenAI API `stop` field to be optional (#4299) · Dec 03, 2023
b1602 · d7b800b8 · llama : pad KV cache size (#4280) · Dec 03, 2023
b1601 · 5a7d3125 · llama : avoid using "optional" keyword (#4283) · Dec 01, 2023
b1600 · d5a1cbde · llama : support optional tensors (#4283) · Dec 01, 2023
b1599 · b220222a · swift : fix token_to_piece implementation (#4278) · Dec 01, 2023
b1598 · 511f52c3 · build : enable libstdc++ assertions for debug builds (#4275) · Dec 01, 2023
b1597 · 03562f3a · llama : support attention bias on LLaMA architecture (#4283) · Dec 01, 2023