Post-transformer inference: 224× compression of Llama-70B with improved accuracy