Use this calculator to estimate the RAM requirements for running different large language models on your local hardware.
Running large language models locally requires significant memory. Here's what you need to know:
Pro Tip: For the best balance of quality and performance on consumer hardware, 7B-parameter models with 8-bit quantization are a good compromise. The weights alone take roughly 7GB (one byte per parameter), so plan for around 8-10GB of RAM once you add room for the KV cache and runtime overhead.
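The rule of thumb behind that estimate can be sketched in a few lines. This is a minimal approximation, not the calculator's actual formula; the 20% overhead factor is an assumption standing in for KV cache and runtime buffers, which vary with context length and inference engine.

```python
def estimate_ram_gb(params_billions: float, bits: int, overhead: float = 0.2) -> float:
    """Rough RAM estimate for running an LLM locally.

    params_billions: model size in billions of parameters
    bits: precision of the weights (16 = fp16, 8 or 4 = quantized)
    overhead: fractional headroom for KV cache and runtime buffers
              (the 20% default is an assumption, not a benchmark)
    """
    weight_gb = params_billions * bits / 8  # 1e9 params * (bits/8) bytes ~= GB
    return weight_gb * (1 + overhead)

# A 7B model: ~8.4GB at 8-bit, ~16.8GB at fp16
print(round(estimate_ram_gb(7, 8), 1))   # 8.4
print(round(estimate_ram_gb(7, 16), 1))  # 16.8
```

Quantizing to 4-bit halves the weight footprint again, which is why 7B models fit comfortably on 8GB machines at that precision.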