To be mobile and agile in a three-dimensional world, a creature needs to process information about its environment such as adjacency and direction - that is, the relationships of different parts of space, or things in space, to one another matter a lot to the organism. You could store each individual perceivable bit of spatial information randomly, but then to recover this structural information you'd need an elaborate mapping layer that allowed the brain to compute distance and direction. It's far simpler to model the structural information by mapping it onto an analogous spatial structure in the cortex itself, so that the actual spatial relationships between neurons carry the information. Simpler means less energy spent constructing and operating the facility, and that wins out in an evolutionary race.
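To make the "elaborate mapping layer" point concrete, here's a toy NumPy sketch (purely illustrative; the arrays and the permutation stand in for nerve wiring, nothing here is a claim about actual cortex). With a topographic layout, spatial neighbors are storage neighbors, so a local comparison comes for free; with a random layout, the same comparison has to go through an explicit index map first:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
field = rng.random((H, W))  # a patch of perceivable 2D space

# Topographic storage: spatial neighbors are storage neighbors, so a
# local computation (here, the difference between horizontally adjacent
# points) needs no extra machinery at all.
local_edges = field[:, 1:] - field[:, :-1]

# Random storage: the same values scattered across a flat array.
perm = rng.permutation(H * W)
scattered = field.ravel()[perm]

# Now every spatial relationship has to go through an explicit mapping
# layer (the inverse permutation) before it can be computed.
inverse = np.argsort(perm)

def value_at(y, x):
    return scattered[inverse[y * W + x]]

recovered = np.array([[value_at(y, x + 1) - value_at(y, x)
                       for x in range(W - 1)] for y in range(H)])
assert np.allclose(local_edges, recovered)  # same answer, more machinery
```

Both layouts can represent the same information; the difference is how much extra structure you have to build and consult to get the spatial relationships back out.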
This just seems like conjecture. There is already a mapping layer between specific nerve activations and spatial information. A random mapping wouldn't be any more or less efficient in that regard.
I could see something like going from one eye to two causing this. With one eye, you'd have a random nerve mapping. Two eyes are advantageous over one, but if the optical inputs for both eyes were randomly remapped, the evolutionary knowledge stored in the single-eye mapping would be lost. So it would be advantageous to map the optic nerves from the two eyes in a way that mostly fits the single-eye mapping, as the toy sketch below illustrates. Obviously, this is just a random theory without any evidence. I offer it only as an example of a logical argument for why the spatial orientation of an object would affect the spatial orientation of the nerve mapping.
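Here's that argument as a toy NumPy sketch (entirely illustrative: permutations stand in for nerve mappings, and an inverse permutation stands in for whatever the downstream circuitry has learned about the first eye's wiring):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
eye1_map = rng.permutation(N)        # the single eye's fixed nerve mapping

# Stand-in for "evolutionary knowledge": downstream circuitry that has
# learned to undo eye 1's mapping (here just the inverse permutation).
decode = np.argsort(eye1_map)

scene = np.arange(N, dtype=float)    # the true spatial layout

# Second eye wired to mostly fit the first eye's mapping: the existing
# decoder works on its input unchanged.
eye2_matched = scene[eye1_map]
print(np.mean(eye2_matched[decode] == scene))   # 1.0 -- knowledge reused

# Second eye wired under a fresh random mapping: the same decoder now
# scrambles the scene, so the stored knowledge is useless for it.
eye2_random = scene[rng.permutation(N)]
print(np.mean(eye2_random[decode] == scene))    # ~0 -- knowledge lost
```

The point is only the logical shape: whatever has already been learned against one mapping is only reusable if new inputs arrive under (roughly) the same mapping.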