
Everyone talks about "chain-of-thought" in LLMs, but nobody has actually measured when thoughts emerge in humans. So I sat with a stopwatch, tried to keep my mind blank, and pressed lap every time a thought broke through. 100 data points.

Mean latency: 11.77 seconds, which lines up almost exactly with Default Mode Network oscillations from fMRI studies. The interesting part: once a thought breaks through, there's a 54% chance the next one comes faster. They cascade. The distribution is bimodal (modes at ~5s and ~25s), which points to two distinct processes.

Full dataset with millisecond precision is in the paper. Not claiming this solves AGI, just actual data on something we keep hand-waving about.
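
If you want to re-run the numbers on your own laps, here's a minimal sketch of the analysis. The lap values below are placeholders I made up for illustration, not the dataset from the paper, and the two-mode split is a crude stand-in for a real bimodality test:

    import numpy as np

    # Placeholder lap times in seconds (one lap per thought); NOT the actual
    # dataset from the post -- substitute your own stopwatch readings.
    laps = np.array([5.2, 11.9, 16.3, 29.8, 34.1, 58.0, 62.7, 75.5])

    intervals = np.diff(laps, prepend=0.0)      # gap before each thought
    mean_latency = intervals.mean()

    # "Cascade": how often the next gap is shorter than the one before it.
    cascade_rate = np.mean(intervals[1:] < intervals[:-1])

    # Crude two-mode summary: split the gaps at the median and average each half.
    # (A proper bimodality check would use something like Hartigan's dip test.)
    threshold = np.median(intervals)
    fast_mode = intervals[intervals <= threshold].mean()
    slow_mode = intervals[intervals > threshold].mean()

    print(f"mean latency {mean_latency:.2f}s, cascade rate {cascade_rate:.0%}")
    print(f"modes ~{fast_mode:.1f}s and ~{slow_mode:.1f}s")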


Author here. I wrote this in Amsterdam after an experiment: I shared the same ideas via a hyperlink at an airport (rejected 3 times, called spam) and via a paper printout at a coffee shop (accepted, and it led to a 2-hour conversation about AI in 1000 years).

The math is simple: model the exchange as a bidirectional projection between a human state vector and an AI state vector. If the composed round-trip map is a contraction, the Banach fixed-point theorem guarantees convergence to a unique fixed point. I call that fixed point peace: the state where the two vectors align.
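
To make that concrete, here's a minimal numerical sketch using toy linear maps I picked for illustration (not the formalization in the paper): if each projection has spectral norm below 1, their composition is a contraction, and iterating the round trip converges to the same fixed point from any starting vector.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_map(norm, dim=8):
        # Random linear map rescaled so its spectral norm equals `norm`.
        M = rng.standard_normal((dim, dim))
        return norm * M / np.linalg.norm(M, 2)

    A = random_map(0.7)             # toy "human -> AI" projection
    B = random_map(0.7)             # toy "AI -> human" projection
    b = rng.standard_normal(8)      # constant offset

    def round_trip(h):
        # One exchange: project to the AI side, then back. Because
        # ||B|| * ||A|| <= 0.49 < 1, the composition is a contraction.
        return B @ (A @ h) + b

    h = rng.standard_normal(8)
    for step in range(200):
        nxt = round_trip(h)
        if np.linalg.norm(nxt - h) < 1e-12:
            break
        h = nxt

    print(f"converged in {step} steps; fixed point starts {np.round(h[:3], 3)}")

Run it from a few different initial vectors and it lands on the same point each time, which is all the Banach argument claims.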

But the real finding isn't the proof. It's that the medium carries epistemic weight the equations can't capture. Physical presence collapses trust barriers that digital signals cannot.

Happy to discuss the math, the phenomenology, or why I compared LLMs to colonial storytellers.

