Tags
Tags mark specific points in the repository's history as important.
b1477 · 629f917c · cuda : add ROCM aliases for CUDA pool stuff (#3918) · Nov 02, 2023
b1476 · 51b2fc11 · cmake : fix relative path to git submodule index (#3915) · Nov 02, 2023
b1474 · c7743fe1 · cuda : fix const ptrs warning causing ROCm build issues (#3913) · Nov 02, 2023
b1473 · d6069051 · cuda : use CUDA memory pool with async memory allocation/deallocation when available (#3903) · Nov 02, 2023
b1472 · 4ff1046d · gguf : print error for GGUFv1 files (#3908) · Nov 02, 2023
b1471 · 21958bb3 · cmake : disable LLAMA_NATIVE by default (#3906) · Nov 02, 2023
b1470 · 2756c4fb · gguf : remove special-case code for GGUFv1 (#3901) · Nov 02, 2023
b1469 · 1efae9b7 · llm : prevent from 1-D tensors being GPU split (#3697) · Nov 02, 2023
b1468 · b12fa0d1 · build : link against build info instead of compiling against it (#3879) · Nov 02, 2023
b1467 · 4d719a6d · cuda : check if this fixes Pascal card regression (#3882) · Nov 02, 2023
b1466 · 183b3fac · metal : fix build errors and kernel sig after #2268 (#3898) · Nov 02, 2023
b1465 · 2fffa0d6 · cuda : fix RoPE after #2268 (#3897) · Nov 02, 2023
b1464 · 0eb332a1 · llama : fix llama_context_default_params after #2268 (#3893) · Nov 01, 2023
b1463 · d02e98cd · ggml-cuda : compute ptrs for cublasGemmBatchedEx in a kernel (#3891) · Nov 01, 2023
b1462 · 898aeca9 · llama : implement YaRN RoPE scaling (#2268) · Nov 01, 2023
b1461 · c43c2da8 · llm : fix llm_build_kqv taking unused tensor (benign, #3837) · Nov 01, 2023
b1460 · 523e49b1 · llm : fix falcon norm after refactoring (#3837) · Nov 01, 2023
b1459 · e16b9fa4 · metal : multi-simd softmax (#3710) · Nov 01, 2023
b1458 · ff8f9a88 · common : minor (#3715) · Nov 01, 2023
b1457 · 50337961 · llm : add llm_build_context (#3881) · Nov 01, 2023