Current LLMs struggle with compositional inference because they lack physical boundaries. CSCT implements a neurologically inspired multi-gate mechanism (Na⁺/θ/NMDA) to enforce L1 geometry and physical grounding. In my experiments (EX8/9), this architecture achieved 96.7% success on compositional inference within the convex hull, far outperforming unconstrained models.

Key features:

- Stream-based: no batching or static context; information is processed as a continuous flow.
- Neurological gating: a computational implementation of θ-γ coupling using Na⁺- and NMDA-inspired gates.
- Zero-shot reasoning: no "hallucination" for in-hull compositions.

Detailed technical write-up: https://dev.to/csctnail/-a-new-ai-architecture-without-prior...

I’m eager to hear your thoughts on this "Projected Dynamical System" approach to cognition.
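To make the "Projected Dynamical System" framing concrete, here is a minimal toy sketch, not the CSCT implementation itself: the gate forms, function names, and parameters below are simplified illustrations. The idea is that the three gates act as multiplicative masks on the state update, and an L1-ball projection (the standard sort-based method) pulls the state back inside the feasible region after every step.

```python
import numpy as np

def project_l1(v, radius=1.0):
    """Euclidean projection of v onto the L1 ball (standard sort-based method)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    tau = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def gated_step(state, x_t, t, W, theta_hz=8.0, dt=1e-3, radius=1.0):
    """One projected-dynamics update with three multiplicative gates (illustrative)."""
    drive = W @ x_t
    # theta gate: slow oscillation that opens and closes the update window
    theta_gate = 0.5 * (1.0 + np.cos(2.0 * np.pi * theta_hz * t * dt))
    # "Na+" gate: fast threshold nonlinearity on the input drive
    na_gate = (drive > 0.0).astype(float)
    # "NMDA" gate: coincidence detection between drive and current state
    nmda_gate = 1.0 / (1.0 + np.exp(-drive * state))
    # flow step, then project back onto the L1 ball (the "physical boundary")
    return project_l1(state + dt * theta_gate * na_gate * nmda_gate * drive, radius)
```

The projection step is what distinguishes this from an unconstrained recurrence: the state can never leave the L1 ball, which is the geometric analogue of the "physical boundary" mentioned above.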
One question: to what extent have you dug into or considered oversampling? One of the core hypotheses we've converged on is that nearly all models are optimized for source coding rather than channel coding. The implication is that the path to AGI likely involves oversampling to capture channel-coding gains, which would resolve phase errors, etc.
Random sampling naturally does this, albeit inefficiently. I'm curious whether you do anything more structured than random oversampling, especially partially overlapped samples (think supersaturated subspaces, subchannels, etc.).
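To clarify what I mean by structured vs. random, here is a toy sketch (the names and sizes are just illustrative): random subsets leave the pairwise overlap between samples uncontrolled, while a strided construction fixes the overlap between adjacent "subchannels" by design. That shared redundancy is where I'd expect the channel-coding gain to come from, e.g. cross-checking shared coordinates to resolve phase errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, subset_size, n_subsets, overlap = 64, 16, 8, 6

# Random oversampling: independent subsets, uncontrolled pairwise overlap.
random_subsets = [rng.choice(n_features, size=subset_size, replace=False)
                  for _ in range(n_subsets)]

# Structured oversampling: a sliding window with a fixed stride, so each
# adjacent pair of "subchannels" shares exactly `overlap` coordinates.
stride = subset_size - overlap
structured_subsets = [np.arange(i * stride, i * stride + subset_size) % n_features
                      for i in range(n_subsets)]

# The shared coordinates are the redundancy a decoder could cross-check.
shared = np.intersect1d(structured_subsets[0], structured_subsets[1])
assert len(shared) == overlap
```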