Renato Candido at 2026-01-12T13:49:24Z
AirLLM (https://github.com/0xSojalSec/airllm) allows 70B large language models to run inference on a single 4GB GPU card without quantization, distillation, or pruning.
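A minimal usage sketch, assuming the `airllm` Python package and the `AutoModel` interface shown in the project's README; the model ID and generation settings below are illustrative assumptions, not part of the original post.

```python
# Minimal AirLLM sketch (assumes the `airllm` package is installed and a CUDA GPU is available).
from airllm import AutoModel

MAX_LENGTH = 128

# AirLLM keeps the full model on disk and streams it layer by layer through GPU memory
# during inference, which is how a 70B model can run within a few GB of VRAM.
# The model ID below is an example; any supported Hugging Face model ID should work.
model = AutoModel.from_pretrained("meta-llama/Llama-2-70b-hf")

input_text = ["What is the capital of the United States?"]
input_tokens = model.tokenizer(
    input_text,
    return_tensors="pt",
    truncation=True,
    max_length=MAX_LENGTH,
    padding=False,
)

generation_output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)

print(model.tokenizer.decode(generation_output.sequences[0]))
```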