Forgive my ignorance, but why is it that it is Python-only?
Does Python have intrinsic qualities that other languages don't possess or is it that the huge initial investment in creating TensorFlow was based on Python and duplicating that effort somewhere else would require too much work?
Traditionally, most neural network architectures have been implemented in C/C++ - for performance reasons. But ML researchers are not hackers, for the most part, and Python has the lowest impedance mismatch for interfacing with C/C++ of all the major languages (a quick sketch below). Julia was popular for a bit, but now Python is dominant. Programs tend to be very small, and not modular - so static type checking is less important than it would be for catching errors in larger systems.
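To give a concrete (hypothetical) illustration of that low impedance mismatch, here's roughly how little it takes to call into a C library from Python using ctypes from the standard library. This assumes a Linux-like system with libm available, and it is not how TensorFlow itself is bound (that goes through generated C++ wrappers); it's just to show that the C boundary is cheap to cross:

    # Hypothetical sketch: calling a C function from Python via ctypes.
    import ctypes

    # Load the C math library (assumption: Linux with libm.so.6 present).
    libm = ctypes.CDLL("libm.so.6")

    # Declare the C signature of cos(double) -> double, then call it directly.
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0 -- no compile step, no wrapper code to maintain

No build system, no header parsing, no boilerplate glue - which is a big part of why numeric C/C++ kernels end up with Python front ends.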
It's not just the lowest impedance mismatch. It's also a framework coming out of Google, where Python and Java were really the only two language choices for a high-level interface, and of the two, Python is the clear winner in prototyping and scientific-community acceptance. I think that's because of the ease of experimentation and the expressiveness of the language.