Tags
Tags mark specific points in the repository's history as important.
master-79f634a · 79f634a1 · embd-input : fix returning ptr to temporary · Jul 01, 2023
master-b8c8dda · b8c8dda7 · Use unsigned for random seed (#2006) · Jun 29, 2023
master-d3494bb · d3494bb8 · llama : replacing auto &kv with const auto &kv (#2041) · Jun 28, 2023
master-5b351e9 · 5b351e94 · cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028) · Jun 28, 2023
master-6432aab · 6432aabb · cuda : fix missing const qualifier in casts (#2027) · Jun 28, 2023
master-b922bc3 · b922bc35 · llama : remove shards weight file support (#2000) · Jun 28, 2023
master-7f9753f · 7f9753fa · CUDA GPU acceleration for LoRAs + f16 models (#1970) · Jun 28, 2023
master-9d23589 · 9d23589d · fix pthreads setaffinity usage on android (#2020) · Jun 27, 2023
master-0be54f7 · 0be54f75 · baby-llama : fix build after ggml_rope change (#2016) · Jun 27, 2023
master-eaa6ca5 · eaa6ca5a · ggml : increase max tensor name + clean up compiler warnings in train-text (#1988) · Jun 26, 2023
master-c824d2e · c824d2e3 · ggml : avoid conv 2d kernel round up · Jun 26, 2023
master-b853d45 · b853d456 · ggml : add NUMA support (#1556) · Jun 26, 2023
master-9225bae · 9225baef · k-quants : fix indentation · Jun 26, 2023
master-a84ab1d · a84ab1da · tests : fix quantize perf (#1990) · Jun 26, 2023
master-5743ca8 · 5743ca80 · k-quants : add AVX support to dot functions (#1916) · Jun 26, 2023
master-6769e94 · 6769e944 · k-quants : support for super-block size of 64 (#2001) · Jun 26, 2023
master-cbebf61 · cbebf61c · Fix assert when free invalid cuda pointer (#2005) · Jun 26, 2023
master-bd34cdd · bd34cdde · ggml : sync latest ggml (custom operators) · Jun 25, 2023
master-c2a08f8 · c2a08f87 · fix server sampling: top k sampler first (#1977) · Jun 25, 2023
master-5ec8dd5 · 5ec8dd5a · #1869 Fix null reference errors when training from scratch with CUDA (#1907) · Jun 24, 2023