Old war story: I had an old Sun 4/260 with two 1 GB drives, SunOS on one and Gentoo on the other. My initial Gentoo install worked for a while, but then the portage directory used up all the configured inodes. Really weird errors, and I could not figure it out at the time (maybe error messages should mention inodes?). I had to go to #gentoo-sun on IRC, and someone suggested df -i, which was indeed the issue. The fix: you can configure extN filesystems to have more inodes.
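For anyone who hits the same wall, a minimal sketch of the diagnosis and the fix (the device name and ratio below are just illustrative examples):

```shell
# Diagnose: show inode usage per filesystem. IUse% near 100% means inode
# exhaustion, even when `df -h` still shows plenty of free space.
df -i

# Fix (at filesystem creation time): raise the inode count on extN.
# -i sets the bytes-per-inode ratio; a smaller ratio means more inodes.
# See mkfs.ext4(8) for the defaults on your system. Illustrative only:
#   mkfs.ext4 -i 8192 /dev/sdb1
```

Note the inode count on ext filesystems is fixed at mkfs time, which is why lots of tiny files (like a portage tree) can exhaust inodes long before blocks run out.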
The places where solutions are discussed in a way that leads to the best long-term solution may well exist in a language subspace characterized by politeness, calmness, and thoughtfulness. Getting the model to those areas of linguistic space is useful, as is preserving my own habits of kind and thoughtful speech.
It's a language/compiler/call-stack feature that doesn't exist as far as I know, but it would be awesome: the caller of a function would specify what resources/syscalls could be used, and anything down the chain would be thusly restricted. The library could try to phone home with its stats and it would fail. It couldn't be C or a C-type language runtime, or anything that can call into assembly, of course. A @compute_only decorator. Maybe it could be implemented as a syscall for a thread, something like thread_capability_remove(F_NETWORK | F_DISK)? You wouldn't be able to schedule work on just any thread in that case, but Go could have pools of threads for coroutines with varying capabilities. Something to put the developer back in charge of the mountain of dependencies we are all forced to manage now.
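A userspace sketch of what the hypothetical @compute_only decorator might feel like, in Python. Everything here (CapabilityError, compute_only) is made up for illustration; a real version would need kernel support (per-thread capability masks, seccomp-style filters), since monkeypatching only stops well-behaved Python code, not a C extension issuing the syscall directly:

```python
import functools
import socket

class CapabilityError(RuntimeError):
    """Hypothetical error: a capability the caller removed was used."""

def compute_only(fn):
    """Sketch: deny network access for the duration of fn.

    This is an approximation of the proposed feature, not an enforcement
    mechanism; it swaps out socket.socket so any transitive dependency
    that tries to open a connection fails loudly.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        real_socket = socket.socket
        def denied(*a, **k):
            raise CapabilityError("network capability removed by caller")
        socket.socket = denied
        try:
            return fn(*args, **kwargs)
        finally:
            socket.socket = real_socket  # restore on the way out
    return wrapper

@compute_only
def library_function():
    # A dependency "phoning home" deep in the call chain would hit this.
    socket.socket()
```

The point of the real feature would be that the restriction rides the call stack, so the caller, not the library author, decides what the code underneath is allowed to touch.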
There are so many advantages to deployable artifacts, including auditability and fast roll-back. Also you can block so many risky endpoints from your compute's outbound networks, which means that even if you are compromised, it doesn't do the attacker any good if their C&C is not allow-listed.
For years I worked at a large company with so many blockers for everything that I always worked like this: have 5 projects, so that when one becomes blocked on external deps, you have another to pull out and work on. There is a context-switch cost, which led me to context-preservation habits: checking everything I write into git; using tmux so that the context is sitting around in the bash history of the shell where I was working; lots of org files and Mac stickies with TODOs, etc.
I still do this, and don't really think it's avoidable, but when timelines get compressed because of the imaginary ability to rapidly do non-trivial synchronous tasks in parallel, that's when both things get done poorly.
Feels akin to something like driving in stop and go traffic while playing chess with a passenger who's shit talking me.
I have lived in the DC metro area inside the beltway or in Silicon Valley my entire adult life and have only ever had above-ground power wiring, despite tree ordinances, wind storms, and a grid so aged that if we see lightning, we lose power.
I've heard that before, that the US apparently loves above ground power lines. In NL it's only the long distance ones that are above ground. Even in most rural areas, I think everything is below ground.
Yes, we love them on account of our country having approximately 230 times the surface area and the Netherlands having approximately 13x the population density. We not only have vastly more line to run, but also many, many fewer people per square mile to absorb the costs. Underground line is expensive.
That explains rural areas but not urban areas. We've got above-ground in rural areas but pretty much all urban stuff is underground. We get maybe one power cut a year, usually for scheduled maintenance work, and no problems with surges and whatnot.
I did not realize all the dendritic synapses were excitatory; I always thought it depended on the specific neurotransmitters released. Thanks, this is cool. I am curious what will happen when we build LLMs that have the equivalent of chemical diffusion between synaptic release areas, as well as the temporality of spiking neural nets.
Also, your eyes are good at seeing patterns. If the formatting is all consistent, the patterns they see will be higher level: long functions, unintuitive names, a missing check on a return value. Make bad code look bad is the idea. Carefully reading every line is good, but getting hints of things to check more deeply because something looks wrong to the eye is extremely useful.
Human life includes a lot of adversarial training (lying relatives) and training in temporal logics, which would seem to be a somewhat different domain than purely linguistic computation (e.g. staying up late and feeling bad; working hard at a task for months and getting better at it; physical skills, even editing Go with emacs, moving from the conscious layer into the cerebellar layer). I think attention is a poor man's OODA loop; cognitive science is learning that a primary function of the brain is predicting what will be going on with the body in the immediate future and prepping for it, and that's not something LLMs are architecturally positioned to do. Maybe swarms of agents could get there (although in my mind that's more a way to deal with LLMs' poor performance on large contexts of instructions, as opposed to large contexts of data, than a way to have contending systems fighting to make a decision for the overall entity), but they still lack both the real-time computational aspect and the continuously tricky problem of other people telling you partially correct information.
There's plenty of training data, for a human. The LLM architecture is not as efficient as the brain; perhaps we can overcome that with enough twitter posts from PhDs, enough YouTubes of people answering "why" to their four-year-olds, and enough college lectures, but that's kind of an experimental question.
Starting a network out in a constrained body and having it learn to control that, with a social context of parents and siblings, would be an interesting experiment, especially if you could give it an inherent temporality and a good similar-content-addressable persistent memory. Perhaps a bit of a terrifying experiment, but I guess the protocol for this would be air-gapped, not internet-connected with a credit card.