It doesn't directly help UTF8, since all the bytes it uses for encoding non-ASCII have the high bit set.
It might directly help with UTF16, I'm not sure.
But the general idea of "block only a few specific characters (\0 and /) and allow all the rest" does help with UTF8. If the designers had said something like "only ASCII letters and numbers and dashes and underscores", that would have blocked UTF8, and we might have ended up with something like URL hostnames, where you use punycode to encode non-ASCII into ASCII.
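To make that concrete, here's a minimal sketch (Python, with a made-up filename): the bytes UTF-8 uses for non-ASCII characters all have the high bit set, so they can never collide with / (0x2F) or NUL (0x00), whereas hostnames have to squeeze non-ASCII into a small ASCII subset via punycode.

    name = "résumé.txt"              # made-up example filename
    data = name.encode("utf-8")
    print(data)                      # b'r\xc3\xa9sum\xc3\xa9.txt'

    # Every byte that encodes a non-ASCII character is >= 0x80,
    # so none of them can be mistaken for '/' (0x2F) or NUL (0x00).
    print([hex(b) for b in data if b >= 0x80])   # ['0xc3', '0xa9', '0xc3', '0xa9']

    # Hostnames went the other way: non-ASCII has to be squeezed into
    # ASCII via punycode/IDNA, because only a small character set is allowed.
    print("bücher.example".encode("idna"))       # b'xn--bcher-kva.example'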
The point is that unix behaviour is to treat filenames as byte strings, so no particular encoding is mandated by the kernel or by most tools. That made the transition to utf-8 fairly painless.
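A small POSIX-only sketch of what "filenames are byte strings" means in practice (the name here is deliberately not valid UTF-8, and the kernel doesn't care):

    import os, tempfile

    # The kernel only rejects NUL and '/', so these raw bytes are a
    # perfectly legal filename even though they aren't valid UTF-8.
    raw_name = b"caf\xe9.txt"            # Latin-1 'é'

    d = os.fsencode(tempfile.mkdtemp())
    with open(os.path.join(d, raw_name), "wb") as f:
        f.write(b"hello")

    print(os.listdir(d))                 # [b'caf\xe9.txt']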
Not filtering untrusted inputs, and not escaping or handling them correctly, is how you write insecure software. Guarantees about arbitrary input (unless they're very strict, in which case they're indirectly filtering the input anyway) don't change that.
Why does that make it easier to write insecure software? Which is easier: dealing with bytes, only two of which have special meanings (/ and \0), or dealing with a ton of different character classes, each of which you have to think about and code for? The second case is what happened with URLs, so there are all sorts of weird rules about how you can have a ? in this section but not that section, plus percent encoding and punycode and stuff like that.
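A rough illustration of the difference (Python; is_valid_filename_component is a hypothetical helper for this example, not a standard API):

    from urllib.parse import quote

    def is_valid_filename_component(name: bytes) -> bool:
        # A Unix filename component has only two special bytes: NUL and '/'.
        # ('.' and '..' are reserved names, not extra byte rules.)
        return b"\x00" not in name and b"/" not in name and name not in (b"", b".", b"..")

    print(is_valid_filename_component("snapshot 2024?.json".encode()))  # True

    # URLs need different escaping per component: '?' must be percent-encoded
    # to appear literally in a path segment, but can stay as-is in a query string.
    print(quote("a?b", safe=""))   # 'a%3Fb'  (path segment)
    print(quote("a?b", safe="?"))  # 'a?b'    (query string)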