
Exactly - the analogy fails if a few of the assumptions we hold about ourselves, or about what GPT-3 is actually "doing", turn out to be wrong. Until we hit some asymptotic limit on training these kinds of language models, I'm withholding judgement on what such a model will be capable of representing if/when that limit arrives.

