artificial
New technique to run 70B LLM Inference on a single 4GB GPU
December 3, 2023
submitted by /u/tinny66666 [link] [comments]
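The post body is not reproduced here, so the details of the technique are not available in this section. A common way to run a model far larger than available VRAM, however, is layer-by-layer ("layered") inference: keep the weights on disk, load one transformer layer onto the GPU, run it on the current activations, free it, and move on to the next layer, so peak GPU memory is roughly one layer rather than the whole model. Below is a minimal sketch of that general idea, assuming PyTorch; the `Block` module, the per-layer file layout, and the sizes are toy stand-ins for illustration, not the linked post's actual implementation.

```python
# Minimal sketch of layer-by-layer ("layered") inference, assuming PyTorch.
# Illustrative only: the toy Block module, file names, and dimensions are
# made up; a real 70B model would stream its actual transformer layers.
import os
import tempfile

import torch
import torch.nn as nn


class Block(nn.Module):
    """Stand-in for one transformer layer (hypothetical toy module)."""

    def __init__(self, dim: int):
        super().__init__()
        self.ff = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.ff(x))


def save_layers(num_layers: int, dim: int, out_dir: str) -> None:
    # Persist each layer's weights separately so only one layer ever needs
    # to be resident in memory during inference.
    for i in range(num_layers):
        torch.save(Block(dim).state_dict(), os.path.join(out_dir, f"layer_{i}.pt"))


def layered_forward(x: torch.Tensor, num_layers: int, dim: int,
                    weights_dir: str, device: str = "cpu") -> torch.Tensor:
    # Load, run, and discard one layer at a time; peak memory is ~one layer,
    # not the whole model.
    for i in range(num_layers):
        layer = Block(dim).to(device)
        layer.load_state_dict(
            torch.load(os.path.join(weights_dir, f"layer_{i}.pt"),
                       map_location=device))
        with torch.no_grad():
            x = layer(x.to(device))
        del layer  # free this layer's weights before loading the next one
        if device == "cuda":
            torch.cuda.empty_cache()
    return x


if __name__ == "__main__":
    dim, num_layers = 64, 8
    with tempfile.TemporaryDirectory() as d:
        save_layers(num_layers, dim, d)
        out = layered_forward(torch.randn(1, dim), num_layers, dim, d)
        print(out.shape)  # torch.Size([1, 64])
```

The trade-off is speed: repeatedly reading weights from disk makes each token much slower than keeping the whole model in VRAM, which is why approaches like this are typically paired with fast storage and weight quantization.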