Is there any reason to use a character encoding other than UTF-8 (for new content)? There is (a lot of) legacy content in non-Unicode encodings. Also, UCS-4 might be a useful optimization for processing text.
I spent many years maintaining a USENET newsreader, first on BeOS, then on Mac OS X. Because USENET is so old, its users tend to be very entrenched in their ways. Chinese users were particularly fond of Big5 and seemed in no hurry to move to Unicode. I supported around 15 different encodings for displaying messages, yet it was still common to see articles my app didn't display properly.
I've never maintained an email client, but I'd bet the situation there is much the same.
Email is absolutely horrible. Email is where I started to appreciate statistical analysis for charset detection. Email starts with 7-bit characters and rapidly goes downhill from there.
Email is also where I started to dislike UNIX devs who think that \n is a proper line ending on networked systems. "\r\n" is not the Windows way, it's the network way.
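The wire protocols really do mandate CRLF: SMTP (RFC 5321) and NNTP (RFC 3977) both terminate lines with "\r\n". A minimal sketch of normalizing local text before putting it on the wire (helper name is mine):

```python
# Normalize any mix of line endings to the CRLF that SMTP/NNTP require.
def to_wire(text: str) -> bytes:
    # Collapse CRLF and lone CR to LF first, so we never emit "\r\r\n".
    unix = text.replace("\r\n", "\n").replace("\r", "\n")
    return unix.replace("\n", "\r\n").encode("utf-8")

to_wire("Subject: hi\nHello\n")  # b'Subject: hi\r\nHello\r\n'
```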
When dealing with email you eventually learn to completely disregard the specs and do whatever works, which further screws up the ecosystem for everyone else.
Is size actually a concern in the modern world? For transmission and storage it's practically a non-issue, and we have cheap, effective compression anyway. And in memory? We have gigabytes of RAM; who cares? Of the situations I can think of where it might matter, pretty much none are on end-user hardware.
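For concreteness, here's roughly what the trade-off looks like, comparing UTF-8 against fixed-width UTF-32 (i.e. UCS-4) for ASCII-heavy versus CJK text:

```python
# Compare encoded sizes: UTF-8 is 1 byte/char for ASCII and 3 bytes/char
# for most CJK; UTF-32 is always 4 bytes/char.
ascii_text = "hello world" * 100    # 1100 characters
cjk_text = "こんにちは世界" * 100    # 700 characters

for label, s in [("ascii", ascii_text), ("cjk", cjk_text)]:
    u8 = len(s.encode("utf-8"))
    u32 = len(s.encode("utf-32-le"))  # -le to skip the BOM
    print(f"{label}: utf-8={u8} bytes, utf-32={u32} bytes")
```

So UTF-32 is a 4x blowup on ASCII-heavy text and still larger on CJK text; its only win is O(1) indexing by code point.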
The other way round: a size that varies wildly by character can be a weakness. In Japan, EUC was sometimes used despite its shortcomings because it forces 16 bits per character.
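The point is easy to check: in EUC-JP, kana and kanji are uniformly 2 bytes each, whereas UTF-8 needs 3 bytes for the same characters.

```python
# EUC-JP encodes each kanji/kana in exactly 2 bytes; UTF-8 needs 3.
s = "日本語"  # 3 characters
print(len(s.encode("euc-jp")))  # 6 bytes (2 per character)
print(len(s.encode("utf-8")))   # 9 bytes (3 per character)
```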