Renato Candido

https://github.com/0xSojalSec/airllm allows 70B large language models to run inference on a single 4GB GPU card without quantization, distillation, or pruning.
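The trick behind this kind of memory saving is layered inference: instead of holding the whole model in GPU memory, each transformer layer's weights are loaded from disk, applied, and discarded before the next layer is read, so peak memory is roughly one layer rather than the full model. A minimal sketch of that idea, using NumPy and plain matrix layers as stand-ins for real transformer blocks (the file layout and function names here are illustrative, not AirLLM's actual API):

```python
import os
import tempfile
import numpy as np

def save_layers(dirpath, layers):
    # Persist each layer's weight matrix to its own file on disk.
    for i, w in enumerate(layers):
        np.save(os.path.join(dirpath, f"layer_{i}.npy"), w)

def layered_forward(dirpath, n_layers, x):
    # Load, apply, and discard one layer at a time: only a single
    # layer's weights are resident in memory at any moment.
    for i in range(n_layers):
        w = np.load(os.path.join(dirpath, f"layer_{i}.npy"))
        x = np.maximum(x @ w, 0.0)  # toy layer: linear + ReLU
        del w  # weights freed before the next layer is loaded
    return x

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(4)]
x = rng.standard_normal((1, 8))

with tempfile.TemporaryDirectory() as d:
    save_layers(d, layers)
    out = layered_forward(d, len(layers), x)

# Reference: the same forward pass with every layer held in memory.
ref = x
for w in layers:
    ref = np.maximum(ref @ w, 0.0)
assert np.allclose(out, ref)
```

The output is identical to keeping everything in memory; the cost is extra disk I/O per layer, which is why this approach trades speed for memory rather than approximating the model the way quantization or pruning does.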