Tags
Tags mark specific points in the repository's history as important.
master-c36e81d · c36e81da · examples : add chat-vicuna.sh (#1854) · Jun 15, 2023
master-3559433 · 3559433f · cmake : set include path for OpenBlas (#1830) · Jun 15, 2023
master-69b34a0 · 69b34a0e · swift : Package compile breaks due to ggml-metal.metal (#1831) · Jun 15, 2023
master-cf267d1 · cf267d1c · make : add train-text-from-scratch (#1850) · Jun 15, 2023
master-37e257c · 37e257c4 · make : clean *.so files (#1857) · Jun 15, 2023
master-64cc19b · 64cc19b4 · Fix the validation of main device (#1872) · Jun 15, 2023
master-4bfcc85 · 4bfcc855 · metal : parallel command buffer encoding (#1860) · Jun 15, 2023
master-6b8312e · 6b8312e7 · Better error when using both LoRA + GPU layers (#1861) · Jun 15, 2023
master-254a7a7 · 254a7a7a · CUDA full GPU acceleration, KV cache in VRAM (#1827) · Jun 14, 2023
ci_cublas_linux-d9f3846 · d9f38465 · ci: add linux binaries to release build · Jun 13, 2023
master-9254920 · 92549202 · baby-llama : fix operator!= (#1821) · Jun 13, 2023
master-e32089b · e32089b2 · train : improved training-from-scratch example (#1652) · Jun 13, 2023
master-2347e45 · 2347e45e · llama : do a warm-up eval at start for better timings (#1824) · Jun 13, 2023
master-74d4cfa · 74d4cfa3 · Allow "quantizing" to f16 and f32 (#1787) · Jun 13, 2023
master-74a6d92 · 74a6d922 · Metal implementation for all k_quants (#1807) · Jun 12, 2023
master-e4caa8d · e4caa8da · ci : run when changing only the CUDA sources (#1800) · Jun 12, 2023
master-58970a4 · 58970a4c · Leverage mmap for offloading tensors to GPU (#1597) · Jun 12, 2023
master-fa84c4b · fa84c4b3 · Fix issue where interactive mode crashes when input exceeds ctx size (#1789) · Jun 11, 2023
master-4de0334 · 4de0334f · cmake : fix Metal build (close #1791) · Jun 10, 2023
master-3f12231 · 3f122315 · k-quants : GCC12 compilation fix (#1792) · Jun 10, 2023