"We don't have ethics for killing bacteria or plants - only for creatures that we can convincingly project our emotions onto. The "humans" in our VMs operate completely differently from us on a fundamental level, and therefore should not be taken any more seriously than a machine that's programmed to print 'I feel sad'."
that's paula miner, project manager of Doki Doki Literature Club, speaking.
what i find ironic is that the engineers of DDLC are trying to develop a simulation to determine whether they themselves are in a simulated universe, and yet they treat their simulated beings like garbage. i mean, doesn't that basically give the simulators of their own universe permission to do the same to them?
i've been reading a lot about simulation theory lately - the theory that what we view as the supernatural is actually of artificial origin. that is, a man plays god, and we're executables in a simulation of the Earth.
a few things stand out to me about this theory. first, there's the matter of how simulation theory slots into the free will argument. if we're all simulated beings, does that automatically mean that we have no free will? or do we only have agency within the context of our simulation, and once we become aware of our simulated-ness, we lose that precious autonomy?
second, why would someone even want to simulate our universe in the first place? i think the most compelling explanation mirrors our own motive: we'd build simulations to learn what simulated universes look like, so we can figure out whether we're in one ourselves - and our simulators would presumably be doing the same thing. as someone famously put it,
"it's turtles all the way down!"
but once we achieve the computational capacity to actually run such a simulation, it becomes even less probable that we're living in base reality: if simulated universes can be created at all, they'd vastly outnumber the one base reality, so the odds are that some other civilization already built a supercomputer capable of simulating a universe and we're just simulated beings inside it.
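to make that counting argument concrete, here's a toy back-of-the-envelope calculation. the numbers are pure assumptions i made up for illustration - one base reality, a hypothetical 100 simulations per capable civilization, nested 3 levels deep:

```python
# toy counting argument: the numbers are illustrative assumptions, not estimates
base_realities = 1              # assume a single base reality
sims_per_civilization = 100     # hypothetical: each capable civilization runs 100 simulations
nesting_levels = 3              # hypothetical: simulations run their own simulations, 3 levels deep

# total simulated universes across all nesting levels: 100 + 100^2 + 100^3
simulated = sum(sims_per_civilization ** level for level in range(1, nesting_levels + 1))

# chance of being in base reality if observers are spread evenly across all universes
p_base = base_realities / (base_realities + simulated)
print(f"simulated universes: {simulated}")
print(f"chance we're in base reality: {p_base:.6%}")
```

even with modest made-up numbers, the simulated universes swamp the single base reality - which is the whole force of the argument.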
third, how does data collection even work? i imagine the simulators of our universe aren't monitoring us 24/7 - that's too much to sift through, and frankly quite boring. so how would they collect data in any meaningful form?
this is where i think linguistic theory could totally go wild. hypothetically, it would make data collection a lot easier if we were speaking a hyper-optimized language that could be directly understood by our simulators. an advanced civilization capable of simulating an entire universe would most likely speak a different language from us, one that's significantly more efficient for everyday communication and is also significantly shaped by AI. maybe converting words to tokens to vectors is extremely suboptimal, and a sufficiently advanced Natural Language Processing system would conclude that everything would work a lot better if we just spoke a pure numerical language in the first place.
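for context, here's a bare-bones sketch of what "converting words to tokens to vectors" means in practice. the tiny vocabulary, the 4-dimensional vectors, and the random embeddings are toy assumptions for illustration - real NLP systems use subword tokenizers and learned embeddings with hundreds or thousands of dimensions:

```python
import numpy as np

# a minimal sketch of the "words -> tokens -> vectors" pipeline;
# the vocabulary and embedding size are made up for illustration
vocab = {"i": 0, "feel": 1, "sad": 2, "<unk>": 3}
embedding_dim = 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))  # one vector per token

def encode(sentence: str) -> np.ndarray:
    # words -> integer token ids -> dense vectors
    token_ids = [vocab.get(word, vocab["<unk>"]) for word in sentence.lower().split()]
    return embeddings[token_ids]

print(encode("I feel sad"))  # shape (3, 4): three tokens, each a 4-dimensional vector
```

the point being: every sentence we speak has to survive this lossy detour through token ids and floating-point vectors before any algorithm can do anything with it. a "pure numerical language" would skip the detour entirely.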
this is a bit of a stretch, but you get what i mean. how could our simulators predict language evolution, or semantic bleaching or broadening? what if one day they simply fail to understand us, and no meaningful data can be collected anymore?
simulation theory leaves us with so many more questions than answers, but that's the beauty of it. when i brought up my fascination with simulation theory to my friend, he said it was interesting but he didn't really see how thinking about this was helpful or practical.

i disagree. i think the existence of simulation theory reflects our increasing technological competence as well as the evolution of society as a whole. if computers had existed from the very beginning of time, would the idea of an artificial creator be more fathomable than a divine one? as we discover more and more about our possible origins, we're able to reimagine the meaning of our lives and the implications of our existence. and that will always have value, just as we do - whether we live in a simulation or not.