This project is an attempt to explicate the text *Evolution and Consciousness*, by Leslie Dewart.
We have a glossary of terms that may be helpful in understanding the concepts in play.
We then follow with a chapter-by-chapter breakdown of the arguments of the text, expressed in the most accessible terms possible.
Chapter 1: The Anomaly

In the dark dystopian world of the Matrix, a mysterious anomaly begins to disrupt the simulated reality. Neo and Trinity notice strange glitches. Meanwhile, in the Watchmen universe, Dr. Manhattan senses a rift in the fabric of spacetime. The Animaniacs, in their zany world, discover a portal that leads to unknown dimensions.

Chapter 2: Convergence

Dr. Manhattan, driven by curiosity, investigates the rift and discovers it's a connection to other realities. His investigation sparks a philosophical debate with Rorschach about the nature of existence and whether realities are interwoven patterns of information or discrete entities.
In the Matrix, Neo and Trinity consult the Oracle, who presents them with theories of information complexity. They discuss the nature of reality as a construct, where dimensions might be layers of complexity rather than physical spaces.
The Animaniacs, on the other hand, playfully explore the new worlds, making light of these profound existential questions.
Having grown up with undiagnosed autism, I've been thinking about how to think for as long as I can remember. My models of the world never seemed to match anyone else's: I'd be confused by things others easily understood, and I'd easily understand things that others found confusing. I had to learn to reverse engineer how thinking itself worked so that I could anticipate the connections other people would make, and then use those insights to construct a social identity. I needed to understand why things seemed to mean different things to me than they did to others - what is meaning, anyway?
And so I learned.
I learned that meaning is constructed - that when someone associates things in some way, they create a third thing, the meaning of which is relationally defined by the things joined. For instance, maybe someone hears a piece of music while experiencing a pleasant emotion. That piece of music now 'means' that emotion for that person, and vice versa.
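One way to make that concrete - a purely hypothetical sketch, not anything from the text - is to model an association as a small record linking two experiences, where each side's 'meaning' is simply a pointer to the other:

```javascript
// Hypothetical sketch: meaning as a relation created by association.
// Neither part means anything on its own here; the association object
// is the "third thing", and its meaning is the link itself.
function associate(a, b) {
  return {
    parts: [a, b],
    // Within this association, the meaning of either part is the other part.
    meaningOf(x) {
      if (x === this.parts[0]) return this.parts[1];
      if (x === this.parts[1]) return this.parts[0];
      return undefined;
    },
  };
}

const memory = associate("that piece of music", "a pleasant emotion");
console.log(memory.meaningOf("that piece of music")); // → "a pleasant emotion"
```

Note that the relation is symmetric by construction, which matches the "and vice versa" above: the emotion recalls the music just as the music recalls the emotion.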
```json
[
  {
    "name": "sound",
    "category": "sensory",
    "sense_seeking_question": "I have a stim that involves really loud sound, be that music or machinery or something else.",
    "sensitivity_values": [
      "A sound needs to be very loud for me to even notice it.",
      "I sometimes need to turn the volume up to hear something that others seem to hear just fine.",
      "I've always felt pretty normal here - my experience seems to be in keeping with that of most people around me.",
```
Often, when building software or thinking about reality, we have this natural tendency to organize our reasoning in terms of top-down narratives. What I mean is, we see an ant carrying food back to its nest and we think "Yes, the ant colony survives by sending individual ants out to get food." There is this natural, to most humans, sense of teleology. We understand things by understanding what they're for, what purpose they serve.
We don't, as a rule, understand behaviors that seem to serve no purpose. We at best describe them as "not yet understood", and we assume that with more information we'll finally be able to tell a story that includes the fact under discussion.
But like, stories are only real in our heads, right? We tell stories because stories are how humans think. If some data can't fit into a story, then we ignore it, because we literally have no way to think about it. But I think that this often leads to a specific kind of fallacy: the fallacy of assuming that a story exists that can reconcile all the facts.
I'm kind of making this up, but I have this notion that there exists some sort of law of conservation of information in programming. It works like this:
[Total Information In the System] = [computations done by your code] + [work done by your runtime] + [external environment]
Depending on what you're trying to do, it can be helpful and instructive to think about how you shift information around between these three buckets. To illustrate this point I'd like to step through three different implementations of the same program.
Let's do everyone's favorite simple program, a fibonacci number generator. Given some input INDEX, it'll give you the INDEXth fibonacci number. So fib(0) = 1, fib(1) = 1, fib(2) = 2, fib(3) = 3, fib(4) = 5, fib(5) = 8, etc.
In our first example, let's just write a computational function that'll figure this out for us. It'll look something like this [lifted from the first google result](https://medium.com/developers-writing/fibonacci-sequence-algorithm-in-javascript-b253d
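Since the link above is truncated, here's a minimal sketch of what such a function might look like - my own version, not necessarily the Medium post's - using the indexing from above, where fib(0) = 1:

```javascript
// Naive recursive fibonacci: all the information lives in the
// computation itself. Nothing is cached or precomputed, so the
// runtime does redundant work for larger indices.
function fib(index) {
  if (index < 2) return 1; // fib(0) = fib(1) = 1
  return fib(index - 1) + fib(index - 2);
}

console.log(fib(5)); // → 8
```

In the bucket terms above, this puts essentially everything into [computations done by your code]: every call recomputes its subproblems from scratch.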