Projects with this topic
LLM.c
LLM training in simple, raw C/CUDA, with no need for 245MB of PyTorch or 107MB of cPython. Current focus is on pretraining, in particular reproducing the GPT-2 and GPT-3 miniseries, along with a parallel PyTorch reference implementation.
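For flavor, here is a minimal sketch of the kind of dependency-free kernel such a pure-C training codebase is built from: a naive linear-layer forward pass written with plain loops. The function name, signature, and memory layout are illustrative assumptions, not code taken from the repository.

#include <stdio.h>

/* Naive linear-layer forward pass in plain C:
 * out[b][o] = bias[o] + sum_i inp[b][i] * weight[o][i]
 * B = batch size, C = input channels, OC = output channels.
 * All tensors are flat row-major float arrays. */
void matmul_forward(float* out, const float* inp, const float* weight,
                    const float* bias, int B, int C, int OC) {
    for (int b = 0; b < B; b++) {
        for (int o = 0; o < OC; o++) {
            float val = (bias != NULL) ? bias[o] : 0.0f;
            for (int i = 0; i < C; i++) {
                val += inp[b * C + i] * weight[o * C + i];
            }
            out[b * OC + o] = val;
        }
    }
}

int main(void) {
    enum { B = 2, C = 3, OC = 2 };
    float inp[B * C]     = {1, 2, 3,  4, 5, 6};
    float weight[OC * C] = {1, 0, 0,  0, 1, 0};  /* row o selects input channel o */
    float bias[OC]       = {0.5f, -0.5f};
    float out[B * OC];

    matmul_forward(out, inp, weight, bias, B, C, OC);

    for (int b = 0; b < B; b++) {
        printf("out[%d] = (%.1f, %.1f)\n", b, out[b * OC + 0], out[b * OC + 1]);
    }
    return 0;
}

The same loop nest is what a CUDA port would parallelize, with one thread per (b, o) output element; no framework is needed for either version.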