Tags
Tags mark specific points in the repository's history as important.
Tag    Commit    Date          Message
b1690  ba661751  Dec 22, 2023  sync : ggml (fix im2col) (#4591)
b1689  a5587695  Dec 22, 2023  cuda : fix jetson compile error (#4560)
b1688  6724ef16  Dec 22, 2023  Fix CudaMemcpy direction (#4599)
b1687  48b7ff19  Dec 22, 2023  llama : fix platforms without mmap (#4578)
b1686  48b24b17  Dec 22, 2023  ggml : add comment about backward GGML_OP_DIAG_MASK_INF (#4203)
b1685  28cb35a0  Dec 22, 2023  make : add LLAMA_HIP_UMA option (#4587)
b1684  f31b9848  Dec 22, 2023  ci : tag docker image with build number (#4584)
b1682  0137ef88  Dec 22, 2023  ggml : extend `enum ggml_log_level` with `GGML_LOG_LEVEL_DEBUG` (#4579)
b1681  c7e9701f  Dec 22, 2023  llama : add ability to cancel model loading (#4462)
b1680  afefa319  Dec 21, 2023  ggml : change ggml_scale to take a float instead of tensor (#4573)
b1678  32259b2d  Dec 21, 2023  gguf : simplify example dependencies
b1677  4a5f9d62  Dec 21, 2023  ci : add `jlumbroso/free-disk-space` to docker workflow (#4150)
b1676  d232aca5  Dec 21, 2023  llama : initial ggml-backend integration (#4520)
b1675  31f27758  Dec 21, 2023  llama : allow getting n_batch from llama_context in c api (#4540)
b1673  0f630fbc  Dec 21, 2023  cuda : ROCm AMD Unified Memory Architecture (UMA) handling (#4449)
b1672  562cf222  Dec 21, 2023  ggml-cuda: Fix HIP build by adding define for __trap (#4569)
b1671  8fe03ffd  Dec 21, 2023  common : remove incorrect --model-draft default (#4568)
b1670  91544948  Dec 21, 2023  CUDA: mul_mat_id always on GPU for batches >= 32 (#4553)
b1667  66f35a2f  Dec 21, 2023  cuda : better error message for ggml_get_rows (#4561)
b1666  13988239  Dec 21, 2023  cuda : replace asserts in wrong architecture checks with __trap (#4556)
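Of the tags above, b1680 ("ggml : change ggml_scale to take a float instead of tensor", #4573) is the one that breaks downstream call sites: the scale factor becomes a plain float argument rather than a separate one-element tensor. A minimal sketch of that signature shape, using a stub struct in place of ggml's real tensor type (only the shape of the change is taken from the commit message; see ggml.h in the repository for the authoritative declaration):

```c
/* Sketch of the API change behind tag b1680 (#4573).
   "ggml_tensor" here is a stand-in stub, NOT the real ggml type;
   the point is only the float parameter replacing a tensor one. */
#include <assert.h>

struct ggml_tensor { float val; };  /* hypothetical stub */

/* Post-#4573 style: the scale factor is a plain float,
   instead of being wrapped in a 1-element tensor. */
struct ggml_tensor ggml_scale_sketch(struct ggml_tensor a, float s) {
    a.val *= s;
    return a;
}
```

Callers that previously allocated a one-element tensor just to hold the factor can pass the float directly, which is why this tag requires source changes in dependent projects.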