
I thought that most font rendering was still happening on CPU. Usually, characters are rather small, so the overhead of shipping the data to the GPU does not seem worth it.


Even with CPU rendering, there will be differences depending on the specific software libraries and system configuration, e.g. anti-aliasing settings. A patch to any of the pieces of software involved in rendering could create pixel-level differences in rendered fonts that could be used to fingerprint.


Still, "latest Chrome on Windows 10 with a 1080p laptop screen and Intel Graphics" should cover at least ... half of the users out there?

Yet CoverYourTracks says that even my up-to-date Chrome user agent is shared by only one in 223 browsers, which sounds hard to believe.


It's actually an interesting case study in how "identical" installs often have minor configuration variations that compound into chaotic end results. Staggered downstream distribution of software updates doesn't help, either.

I haven't tested it much myself, but I suspect there's a lot to unpack here.


Wait, isn't font rendering client-side? Why would the server have to know how fonts are rendered on my end?


They render it to a canvas (2D or WebGL), then read the pixel data back using JavaScript.
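A minimal sketch of what that looks like in practice (browser-only; the text, font, shape, and the choice of a 32-bit FNV-1a hash here are illustrative assumptions, not any specific tracker's code):

```javascript
// 32-bit FNV-1a hash over a byte array -- a cheap way to reduce a
// canvas pixel buffer to a compact fingerprint value.
function fnv1a(bytes) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (const b of bytes) {
    hash ^= b;
    hash = Math.imul(hash, 0x01000193) >>> 0; // multiply by FNV prime, mod 2^32
  }
  return hash >>> 0;
}

// Browser-only: render text (and a geometric shape) to an offscreen
// canvas, read the pixels back, and hash them. Small differences in
// font rasterization, hinting, or anti-aliasing change the hash.
function canvasFingerprint() {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");

  // Text: exact pixels depend on this machine's font rendering stack.
  ctx.font = "16px Arial";
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test", 4, 20);

  // Geometric shape: exercises the 2D rasterizer as well.
  ctx.beginPath();
  ctx.arc(120, 40, 15, 0, Math.PI * 1.5);
  ctx.strokeStyle = "rgba(200, 0, 0, 0.7)";
  ctx.stroke();

  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  return fnv1a(data);
}
```

The server never needs to know anything about the client's rendering stack directly; the script just ships this hash (or the full `toDataURL()` string) back, and identical hardware/software combinations produce identical values.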


That's correct; on its own it's not that reliable, which is why font rendering is usually paired with drawing some geometric shapes on the canvas as well.



