For 24GB 30b anons struggling with context on webui, I figured out a nasty hack that helps.
Basically just force a GC every 8 tokens in streaming mode. The reason this works is that HF transformers has just about the worst memory allocation pattern possible and VRAM gets horribly fragmented. ooba tries to fix this by sprinkling clear_cache calls everywhere, but the problem is bad enough that fragmentation builds up even during a single generation. It seems like the easiest/fastest way to defragment is to just churn memory back to the OS periodically; otherwise PyTorch tries to do it itself, which is horribly slow and often fails completely.
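If you want to try it, this is roughly the idea as a standalone sketch. The wrapper and names here are made up for illustration, not ooba's actual code; you'd hook this around wherever webui yields tokens during streaming.

import gc
import torch

def stream_with_gc(token_generator, gc_every=8):
    # Wrap whatever generator yields tokens during streaming and force a
    # full GC pass every `gc_every` tokens so fragmented VRAM gets released
    # instead of piling up over the course of one generation.
    for i, token in enumerate(token_generator, start=1):
        yield token
        if i % gc_every == 0:
            gc.collect()                # drop stale Python references first
            torch.cuda.empty_cache()    # then hand cached blocks back to the allocator/driver

Every 8 tokens is just what worked for me; bump it up if the extra GC passes slow your tokens/sec down too much.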
I looked over the attention code that causes this and I have no idea where anyone would even begin to fix it without a rewrite that avoids transformers entirely, so this ugly hack might be the best we can get for a while.