Many years ago I worked on something that involved machine learning. It usually produced reasonable results, but all the users hated it nonetheless, because the machine learning made it completely opaque: the correct predictions were mostly acceptable, the incorrect ones were hilariously bad, and in both cases nobody could explain why it had produced those outputs.
Eventually we concluded that machine learning wasn't a good fit for our problem, and our users were very keen that we stick to that conclusion.
I'm thinking of a very complex logistics system I wrote that had to trace millions of possible paths to optimize a route. Even when the range of choices is too extensive to present to the user directly and you have to fall back on a list of the best candidates, it's indispensable to show how the system arrived at its conclusions and to offer ways of disabling portions of the deductive process. That's something machine learning simply isn't geared towards, because its reasoning doesn't rest on reproducible, hierarchical sets of rules.
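To make that concrete, here's a minimal sketch of the shape I mean, not the actual system: each candidate route is scored by named rules, every result carries a per-rule breakdown the user can read, and individual rules can be disabled. All the names, rules, and weights here are made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical types for the sketch: a route is a list of stop names,
# and a Rule is a named scoring function plus a flag the user can toggle.
@dataclass
class Rule:
    name: str
    score: Callable[[List[str]], float]  # penalty/bonus for a candidate route
    enabled: bool = True

def score_route(route: List[str], rules: List[Rule]) -> Tuple[float, List[str]]:
    """Score one candidate and record which rules contributed,
    so the ranking can be explained rule by rule."""
    total = 0.0
    explanation = []
    for rule in rules:
        if not rule.enabled:
            explanation.append(f"{rule.name}: disabled by user")
            continue
        contribution = rule.score(route)
        total += contribution
        explanation.append(f"{rule.name}: {contribution:+.1f}")
    return total, explanation

def rank_routes(candidates: List[List[str]], rules: List[Rule], top_n: int = 5):
    """Return the best few candidates together with their per-rule breakdowns."""
    scored = [(score_route(r, rules), r) for r in candidates]
    scored.sort(key=lambda item: item[0][0])  # lower total = better route
    return [(route, total, explanation)
            for (total, explanation), route in scored[:top_n]]

# Example rules; weights are arbitrary.
rules = [
    Rule("distance_per_leg", lambda r: 10.0 * (len(r) - 1)),
    Rule("avoid_depot_B", lambda r: 50.0 if "Depot B" in r else 0.0),
]

for route, total, why in rank_routes([["A", "Depot B", "C"], ["A", "C"]], rules):
    print(route, total, why)
```

The point isn't the scoring itself; it's that every number in the ranking traces back to a named rule the user can inspect or switch off, which is exactly what an opaque learned model couldn't give us.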