As someone who’s worked with many different OpenGL versions, the big things I noticed missing from WebGL 1.0 are the following (some were available as extensions, but they are now standard):
- uniform buffer objects. OpenGL limits how many uniforms you can pass to each shader via glUniform* calls, making techniques that need a lot of uniform data (such as skeletal animation or instancing) effectively a no-go. Uniform buffers let uniforms draw on much, much more memory.
- vertex array objects. Previously, you had to respecify vertex attributes, and point them at the right data, every frame. Vertex array objects capture all that state into one structure, usually built at the start of the program, so rendering different kinds of models is nothing more than a single pointer swap per frame. It can be faster or slower depending on the hardware and driver, but it’s at least faster for developers.
- depth textures. Previously, the depth buffer was pretty much write-only, making things like shadow mapping impossible without hacks like writing the z-coordinate to an extra color buffer off-screen.
- instancing. Now you can render many copies (‘instances’) of a model in just one draw call by passing a ‘count’ parameter, rather than making N draw calls. This is almost universally faster, where applicable.
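To make the uniform-buffer point concrete, here’s a sketch of uploading skeletal-animation bone matrices through a WebGL 2.0 uniform buffer instead of many glUniform* calls. The `Bones` block name, binding point, and matrix layout are assumptions for illustration:

```javascript
// GLSL side (for reference): layout(std140) uniform Bones { mat4 bones[128]; };
const BONE_BLOCK_BINDING = 0; // illustrative binding point, not from any API

function createBoneUBO(gl, program, boneMatrices) {
  // boneMatrices: Float32Array of N * 16 floats (std140-friendly mat4s)
  const ubo = gl.createBuffer();
  gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
  gl.bufferData(gl.UNIFORM_BUFFER, boneMatrices, gl.DYNAMIC_DRAW);

  // Associate the shader's "Bones" block with our chosen binding point.
  const blockIndex = gl.getUniformBlockIndex(program, 'Bones');
  gl.uniformBlockBinding(program, blockIndex, BONE_BLOCK_BINDING);
  gl.bindBufferBase(gl.UNIFORM_BUFFER, BONE_BLOCK_BINDING, ubo);
  return ubo;
}
```

Per frame you’d then just re-upload the matrices with gl.bufferSubData rather than issuing one glUniform call per bone.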
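The VAO point can be sketched the same way: record the attribute state once at load time, then switch models with a single bind per draw. The interleaved layout and attribute locations (0 = position, 1 = normal) are assumptions:

```javascript
// Build once at startup: the VAO records buffer bindings and attribute layout.
function createMeshVAO(gl, vertexBuffer) {
  const vao = gl.createVertexArray();
  gl.bindVertexArray(vao);

  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  // Interleaved layout: vec3 position, vec3 normal (24-byte stride).
  gl.enableVertexAttribArray(0);
  gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 24, 0);
  gl.enableVertexAttribArray(1);
  gl.vertexAttribPointer(1, 3, gl.FLOAT, false, 24, 12);

  gl.bindVertexArray(null); // done recording state
  return vao;
}

// Per frame: one state swap instead of respecifying every attribute.
function drawMesh(gl, vao, vertexCount) {
  gl.bindVertexArray(vao);
  gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
}
```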
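And instancing in one sketch: a per-instance offset buffer plus the ‘count’ parameter mentioned above. Attribute location 2 and the offset layout are assumptions for illustration:

```javascript
// Sketch: drawing instanceCount copies of a mesh in a single call.
function drawInstanced(gl, vao, vertexCount, offsetBuffer, instanceCount) {
  gl.bindVertexArray(vao);

  // Attribute 2 advances once per *instance*, not once per vertex.
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuffer);
  gl.enableVertexAttribArray(2);
  gl.vertexAttribPointer(2, 3, gl.FLOAT, false, 0, 0);
  gl.vertexAttribDivisor(2, 1);

  // One draw call instead of instanceCount separate ones.
  gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, instanceCount);
}
```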