Tags
Tags mark specific points in the repository's history as important.
b1782 · 72d8407b · llama.swiftui : use llama.cpp as SPM package (#4804) · Jan 07, 2024
b1781 · d117d4dc · llama : print tensor meta for debugging · Jan 07, 2024
b1779 · 63ee677e · ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (#4787) · Jan 07, 2024
b1778 · 67984921 · server : fix n_predict check (#4798) · Jan 07, 2024
b1777 · c75ca5d9 · llama.swiftui : use correct pointer for llama_token_eos (#4797) · Jan 06, 2024
b1775 · eec22a1c · cmake : check for openblas64 (#4134) · Jan 05, 2024
b1773 · 91d38876 · metal : switch back to default.metallib (ggml/681) · Jan 05, 2024
b1770 · c1d7cb28 · ggml : do not sched_yield when calling BLAS (#4761) · Jan 05, 2024
b1768 · b3a7c20b · finetune : remove unused includes (#4756) · Jan 04, 2024
b1767 · 012cf349 · server : send token probs for "stream == false" (#4714) · Jan 04, 2024
b1766 · a9192801 · Print backend name on test-backend-ops failure (#4751) · Jan 04, 2024
b1765 · 3c0b5855 · llama.swiftui : support loading custom model from file picker (#4767) · Jan 04, 2024
b1763 · dc891b7f · ggml : include stdlib.h before intrin.h (#4736) · Jan 04, 2024
b1761 · cb1e2818 · train : fix typo in overlapping-samples help msg (#4758) · Jan 03, 2024
b1760 · ece9a45e · swift : update Package.swift to use ggml as dependency (#4691) · Jan 03, 2024
b1759 · 7bed7eba · cuda : simplify expression · Jan 03, 2024
b1752 · f3f62f0d · metal : optimize ggml_mul_mat_id (faster Mixtral PP) (#4725) · Jan 02, 2024
b1751 · 0ef3ca2a · server : add token counts to html footer (#4738) · Jan 02, 2024
b1750 · 540938f8 · llama : llama_model_desc print number of experts · Jan 02, 2024
b1749 · 0040d42e · llama : replace all API facing `int`'s with `int32_t` (#4577) · Jan 02, 2024