Tags
Tags mark specific points in a repository's history as important.
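As a minimal sketch of how release tags like the ones below are created and inspected (assuming git is installed; the repository, commit, and tag name `b0001` here are illustrative, not taken from this page):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
# An empty commit stands in for a real release commit.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
# Create an annotated tag marking this point in history.
git tag -a b0001 -m "release b0001"
git tag -l              # list tags
git describe --tags     # resolve HEAD to the nearest tag
```

Annotated tags (`-a`) carry their own author, date, and message, which is why hosting UIs like this page can show a message and date per tag.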
b1665 · d3223afd · llama : disable per-tensor info prints on model load (#4562) · Dec 21, 2023
b1664 · 1d7a1912 · Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554) · Dec 21, 2023
b1663 · 799fc226 · CUDA: Faster Mixtral prompt processing (#4538) · Dec 20, 2023
b1662 · 328b83de · ggml : fixed check for _MSC_VER (#4535) · Dec 19, 2023
b1661 · a7aee47b · ggml-cuda: Fix HIP build (#4528) · Dec 18, 2023
b1660 · 0e18b2e7 · llama.swiftui : add tinyllama 1.1B F16 · Dec 18, 2023
b1659 · 6ff39b12 · llama.swiftui : add more models · Dec 18, 2023
b1658 · b9e74f9b · llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) · Dec 18, 2023
b1657 · 3c04bf6d · llama : fix try_override for bool_value which always return true (#4519) · Dec 18, 2023
b1656 · 2994f0c5 · decode : fix logits_valid for legacy API (#4516) · Dec 17, 2023
b1654 · 800a489e · llama.swiftui : add bench functionality (#4483) · Dec 17, 2023
b1652 · 919c4066 · build : Check the ROCm installation location (#4485) · Dec 17, 2023
b1651 · 45668633 · finetune : keep allocs alive until all allocations are done (#4486) · Dec 17, 2023
b1650 · 0ffc92d2 · server : disable llm logs if SERVER_VERBOSE is off (#3792) · Dec 17, 2023
b1649 · 8edd2b40 · server : fix grammar being ignored (#4494) · Dec 17, 2023
b1648 · eb16dae7 · server : fix possible ambiguity in content type charset (#4501) · Dec 17, 2023
b1647 · 62bd52b7 · server : allow requests larger than 8K (#4500) · Dec 17, 2023
b1646 · 5daa5f54 · Link to cublas dynamically on Windows even with LLAMA_STATIC (#4506) · Dec 17, 2023
b1645 · c6c4fc08 · lora : add support for non-llama models (#3333) · Dec 16, 2023
b1644 · 8a5be3bd · llama : sanity checks for access to logits (#4274) · Dec 15, 2023