"Since our servers can encode video much faster than most of your users can upload it, this means there is literally no more delay between the end of the upload and the video finishing encoding. In the screencast above this makes a 150x speed difference."
Surely the upper bound is 2x, assuming you could already transcode faster than the upload before.
Unless you are just measuring the time between the upload finishing and the transcode being done. But why would a user care about that metric rather than the total elapsed time?
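To make the 2x bound concrete, here's a quick sketch (my own illustration, not anything from the post) of total elapsed time with and without overlapping the encode with the upload:

```python
# Illustration of the 2x ceiling on *end-to-end* speedup: if encoding
# overlaps the upload and runs at least as fast, total time collapses
# to just the upload time.
def total_time(upload_s, encode_s, overlapped):
    if overlapped:
        # Encoding runs during the upload; only leftover encode work
        # (if encoding is slower than the upload) extends the total.
        return upload_s + max(0, encode_s - upload_s)
    # Sequential pipeline: encoding starts only after the upload ends.
    return upload_s + encode_s

# Best case for the old sequential pipeline: encode takes exactly as
# long as the upload (600 s each).
upload, encode = 600, 600
speedup = total_time(upload, encode, False) / total_time(upload, encode, True)
# speedup == 2.0, and it only shrinks as encoding gets faster than the upload
```

Any "150x" has to be measuring only the post-upload wait, not the time the user actually experiences.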
Setting aside video formats (you mention QuickTime) that put their metadata at the end, how about a real-time preview while the file is still uploading? Even a series of static thumbnails every 10 seconds would be an interesting new feature.
I demand you address the OP question of why you haven't fixed the physical constraints of crappy upload bandwidth. Seriously, wtf would we pay you for otherwise? </snark>
As funny as this sounds, fixing the upload bandwidth problem is also something we will work on. Getting a good route between your users and our servers can make a real difference.
So at some point in the future we'll offer upload servers in all major geographic areas.
It's interesting how difficult this can be. I was recently evaluating Linode locations for a new virtual server, and though I'm geographically much closer to California, the Texas location gave me almost 20x the download speed. To be precise, I was able to download from a California node at 300K/s, while I got close to 6MB/s from Texas.
Comcast used to route my traffic to California through Seattle, Washington, then down to San Jose and then Fremont, but now, it's going to Texas first, then across to San Diego, then up to Fremont, across a saturated link.
You're derping, but a browser plugin seems like it might actually be a good idea for this. Even if JS is too slow, someone who uploads lots of video through a site might well be willing to install a little plugin to make it faster, especially if it comes with other simple things like queuing a directory of videos, etc.
NaCl might be good for doing the transcoding on the client side before uploading. It would be especially helpful if the transcode is a downsampling (i.e. reducing the stream's size), and if the client machine is beefy enough to do it without hurting performance while still pushing bits out at the full rate of the upload connection.
However, it wouldn't be particularly useful for things like queuing uploads from a directory. The point of NaCl is running native machine code in a provably secure sandbox. It's about making possible web app features faster on the client side, rather than adding new capability, per se.
Most users who have uploaded videos are probably aware that they aren't available right away, but may not really understand that encoding is the reason. If that step is sped up by such a significant factor, they no longer have to worry about it. The 150x number is at worst misleading if you take Amdahl's law into consideration, but upload speeds really aren't under a site owner's control. Improving any part of the pipeline is going to be a big win, especially for video.
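For a sense of what Amdahl's law implies here, a rough sketch (the 80/20 split is my assumption, purely for illustration):

```python
# Amdahl's law applied to the upload + encode pipeline: speeding up only
# the encode stage by 150x barely moves end-to-end time if upload dominates.
def amdahl_speedup(accelerated_fraction, factor):
    # accelerated_fraction: share of total time spent in the sped-up stage
    # factor: how much faster that stage now runs
    return 1 / ((1 - accelerated_fraction) + accelerated_fraction / factor)

# Assumed split: upload is 80% of total elapsed time, encoding 20%.
overall = amdahl_speedup(0.2, 150)
# overall is roughly 1.25x end-to-end, despite the 150x encode speedup
```

So the headline number is real for the stage they control, but the user-visible win depends almost entirely on the upload.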