Exactly - the analogy fails if any of our assumptions about ourselves, or about what GPT-3 is actually "doing", turn out to be wrong. Until we hit some asymptotic limit on training these kinds of language models, I'm withholding judgement on what such a model will be capable of representing if/when that limit arrives.