I had one of these at my desk at work for a while. They were different enough from the final hardware that it was a huge relief when we finally got the first real dev kits. That was also when we started to realize how many problems the lack of out-of-order execution on the real CPU was going to cause.
It made many things slower than you would expect. There was a much bigger gap between the theoretical performance the hardware was capable of and the real-world performance of typical code people were used to writing or had in their existing codebases. There was also a much bigger gap between the performance of a debug build and a release build than people were used to, to the extent that debug builds could be completely unplayable.
Generally you paid a significantly higher penalty for branching and for cache misses than people were used to, and that meant changing the way you designed and wrote code. The cost of things like if statements, function calls, virtual function calls, and jumping around in memory was relatively much higher than people were used to. There were also some particular quirks of the hardware that compounded these problems and made certain things usually considered cheap, or even considered optimizations, very expensive: converting between floats and ints, shifting by non-constant amounts, or mixing SIMD code with traditional floating point.
Interesting, I never considered how different the CPUs were (despite implementing very similar ISAs). The G5 was 2x2.0 GHz with out-of-order execution; Xenon was 3x3.2 GHz with 2-way SMT and no OoO.