"Unless you specify otherwise when you post Content, you agree to license Content you contribute to the Platform under the Creative Commons Attribution Noncommercial license (CC BY-NC)." [0]
The specific analogy doesn't hold but the sentiment does.
Instead of Palantir, consider working at the FSF, the Linux Foundation, etc. It's not that they don't pay well, it's that the pay is often a fraction of what could be made at a comparable for-profit company.
I think the video game industry is an apt comparison: the pay is often not very good, with the motivation for many people being prestige-based in some form or another. I suspect there were analogous dynamics in publishing 50-100 years ago.
b) there is no breakdown into theoretical vs. experimental research, or by scientific field; I would expect theory to be over-represented at the younger end, especially as the discipline becomes "harder".
Overall, I would say it lends credence to the idea that physics is a young person's game at the very highest levels.
a) the inflection point is in the high 30s. Further, $\int_{40}^{50} f(x)\,dx > \int_{20}^{30} f(x)\,dx$.
b) True, there is no breakdown, but I would expect the exact opposite as fields get harder: more context requires more training and familiarity, which I would expect to push the age of discovery up.
My point is that I think there's a bias in the field towards the youth narrative but the majority of discovery, even in physics, happens at a later age.
I don't think there is a bias in the field towards a youth narrative. I think there is a bias in the media.
Nobody I've ever met would expect a breakthrough from a 20-something, no matter how much of a genius they are. Communicating a breakthrough requires time, effort, and credibility to begin with, none of which anybody has at that age.
Your 30s are when you can start to really do great things. Then, depending on the field, you can keep going as long as you have the energy for it. But lots of people begin to wear out in their 40s (for lots of different reasons).
As for great breakthroughs: if you haven't had your great idea by 40, it's increasingly unlikely that you'll have one later in life (though not impossible). Not everyone needs a paradigm-changing idea to have a successful career, though.
This is dogma, but I'm not sure it's possible or even ideal. Booms and busts seem endemic to any economy that targets inflation, and of course most entities (that don't understand the balance) want to encourage booms and limit busts. Meanwhile, there's another way to think of inflation (and also deflation):
Inflation obfuscates the value of money and therefore of goods, services, etc. In an environment where value is volatile, it makes sense to keep moving, keep trading, because you might come into possession of something that was undervalued before you owned it, or that you'll need in a future when it would otherwise be too expensive. The people who skim off the top of all of this activity love this environment.
Deflation, on the other hand, makes value readily and immediately apparent. What was speculative and risky goes to zero, and people hold onto things with intrinsic value. Those who skim profit off of economic activity hate the slowdown, obviously, but maybe you need periods of this to reset when valuations become too far removed from reality.
Is it a bad thing for people to buy what they need, when they need it, instead of being forced by inflation to anticipate their needs further and further out?
I keep feeling like there's a set of fundamental assumptions that can be optimized for, or relaxed and optimized for, in order to get at what a better method might be.
For example, stability of dithering under rotation and/or some type of shear or translation. What about stability under scaling?
There have been other methods that essentially create a dither texture on the surface itself, but, to me at least, this has a different quality than the "screen space" dithering that Obra Dinn employs.
Does anyone have any ideas on how to make this idea more rigorous? Or is the set of assumptions fundamentally contradictory?
One of the biggest issues with dithering stability in screen space is attributable to perspective projection: the division by depth is what screws everything up, because it introduces a nonlinearity. An orthographic or isometric projection is much more stable.
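A toy sketch of that nonlinearity (a hypothetical pinhole camera, not any particular engine's pipeline): a point moving at constant world-space speed produces shrinking, nonuniform screen-space steps under perspective, while an orthographic projection keeps them perfectly uniform.

```python
import numpy as np

f = 1.0  # focal length for the hypothetical perspective camera

def persp(p):
    # Perspective projection: the divide by depth z is the nonlinearity.
    x, y, z = p
    return np.array([f * x / z, f * y / z])

def ortho(p):
    # Orthographic projection: just drop z -- linear in world-space motion.
    x, y, z = p
    return np.array([x, y])

# A point sliding away from the camera at constant world-space speed.
zs = np.arange(1.0, 6.0)
pts = [np.array([1.0, 0.5, z]) for z in zs]

persp_steps = [np.linalg.norm(persp(b) - persp(a)) for a, b in zip(pts, pts[1:])]
ortho_steps = [np.linalg.norm(ortho(b) - ortho(a)) for a, b in zip(pts, pts[1:])]
# persp_steps strictly shrink: a fixed screen-space dither pattern crawls across
# the surface at a varying rate. ortho_steps are all zero here -- rock solid.
```

The shrinking steps are why a screen-anchored pattern "swims" over surfaces as depth changes under perspective.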
If sums of independent identically distributed random variables converge to a distribution, they converge to a Levy stable distribution [0]. Tails of a Levy stable distribution are power law (except in the Gaussian special case), which makes them non-Gaussian.
Yes, but really what our brains do is use a Gaussian mixture model to cut those distributions up into more granular bell curves, which we then call "normal". Because we find what we are tuned to find.
E.g., we find bell curves because we look for bell curves. And given infinite resolution we can find them at some granularity.
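To make that concrete, here's a minimal sketch (a toy 1-D EM fit, not anyone's production code): hand a plain Gaussian mixture model a sample from a single heavy-tailed Cauchy distribution, and it will dutifully report back k bell curves anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data from ONE heavy-tailed (Cauchy) distribution -- not a mixture of Gaussians.
data = rng.standard_cauchy(2000)
data = data[np.abs(data) < 20]  # clip extreme outliers to keep EM numerically tame

def fit_gmm_1d(x, k=2, iters=200):
    """Plain EM for a 1-D Gaussian mixture (illustrative sketch only)."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread the initial means
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        dens = np.maximum(dens, 1e-300)  # guard against underflow
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

w, mu, var = fit_gmm_1d(data)
# EM happily carves the single Cauchy into k "bell curves": look for Gaussians
# at some granularity and you will find them.
```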
The fact the article said that is a gross error. You've identified the issue head on.
The sum of independent identically distributed random variables, if it converges at all, converges to a Levy stable distribution, and outside the Gaussian special case these are fat-tailed (heavy-tailed, power law). In this sense, Levy stable distributions are more "normal" than the normal distribution. They also show up with regular frequency all over nature.
As you point out, infinite variance might be dismissed but, in practice, it just means one keeps getting larger and larger "outliers" as one keeps drawing from the distribution. Infinities are, in effect, a "verb", and so an infinite variance, in this context, just means the distribution spits out larger and larger numbers the more you sample from it.
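A quick numerical illustration of that "verb" (arbitrary seed and sample sizes, chosen for the demo): compare the largest value seen so far in a Cauchy sample (a Levy stable law with infinite variance) against a Gaussian sample as both grow.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Heavy-tailed vs. thin-tailed: standard Cauchy (alpha = 1 stable) vs. Gaussian.
cauchy = np.abs(rng.standard_cauchy(n))
normal = np.abs(rng.standard_normal(n))

for m in (100, 10_000, 100_000):
    print(f"n = {m:>7}: max |cauchy| = {cauchy[:m].max():10.1f}   "
          f"max |normal| = {normal[:m].max():5.2f}")
# The Gaussian maximum quickly saturates around 4-5 standard deviations; the
# Cauchy maximum keeps jumping by orders of magnitude as the sample grows --
# "infinite variance" showing up as ever-larger outliers, not as a number.
```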
[0] https://www.inaturalist.org/pages/terms