Granted it's silly, but I'm fascinated by this thought experiment.
It's not easy to quantify the amount of useful content on the internet. The 2bn figure above seems to stem from registered domains, and depending on who you ask [1][2] around three quarters of them are "inactive" (e.g. a landing page for a parked domain).
At the other end of the spectrum, Google's index surpassed 130 trillion pages four years ago [3]. Point in favour of my opponent!
If everyone connected to the internet indexed one page a day over the course of their lifetime we might just about do it. And anyone creating a new page would need to [arrange to] index it themselves.
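The "one page a day" claim checks out as a back-of-envelope calculation. A minimal sketch, assuming ~130 trillion pages (the figure from [3]), roughly 4.5 billion internet users, and an 80-year indexing "lifetime" (all assumed round numbers, not from the sources above):

```python
# Back-of-envelope check of the "one page a day" estimate.
# Assumed inputs: ~130 trillion indexed pages (per [3]),
# ~4.5 billion internet users, an ~80-year indexing lifetime.
pages = 130e12          # pages to index
users = 4.5e9           # people doing the indexing
days = 80 * 365         # days in an 80-year "lifetime"

pages_per_person = pages / users          # ~29,000 pages each
pages_per_day = pages_per_person / days   # ~1 page per day

print(f"{pages_per_person:,.0f} pages per person")
print(f"{pages_per_day:.2f} pages per person per day")
```

With these assumptions it works out to just under one page per person per day, which is where the "we might just about do it" comes from.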
Wikipedia is one website. There are 2 billion of them. I don't believe Wikipedia's methods scale to the internet.
Also, any setup that allows everybody to be a moderator will be promptly gamed.
What you want is what Yahoo used to do - a hand-curated directory of the web. It worked when the internet was small, but it got buried under the eventual avalanche of websites.
We could say the exact same thing about all the technical debt we've accumulated in our code bases. But I've never seen that technical debt properly paid off :D
All I can say is, good luck with your human curation startup.