These two collide so often in American law because, historically, they overlap.
When a generation of Americans forces all the people of one race to live in "the bad part of town" and refuses to do business with them in any other context, that's obviously discrimination. If, a generation later, a bank looks at its numbers and decides borrowers from a particular zip code are higher risk (because historically their businesses were hit with periodic boycotts by the people who penned them in there, or big-money businesses simply refused to trade with them because they were the wrong skin color), draws a big red circle around their neighborhood on a map, and writes "Add 2 points to the cost" on that map... Discrimination or disparate impact? Those borrowers really are riskier according to the bank's numbers. But redlining is illegal, and if 80% of that zip code is also Hispanic... Uh oh. Now the bank has to prove it doesn't just refuse Hispanic business.
And the problem with relying on ML to make these decisions is that ML is a correlation engine, not a human being with an understanding of nuance and historical context. If it finds that correlation organically (but lacks the context that, for example, maybe people in that neighborhood repay loans less often because their businesses fold when the other races in the neighborhood boycott those businesses for being "not our kind of people") and starts implementing de facto redlining, courts aren't going to be sympathetic to the argument "But the machine told us to discriminate!"
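A toy sketch of that failure mode (entirely synthetic data, made-up zip codes, and an 80% proxy strength mirroring the figure above): the "model" never sees race as an input, yet because zip code is a strong proxy for race, a purely statistical risk cutoff produces sharply different approval rates by race anyway.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic applicants. Race is NEVER fed to the "model";
# zip code just happens to be an 80%-accurate proxy for it.
applicants = []
for _ in range(10_000):
    race = random.choice(["A", "B"])
    zip_code = "90210" if (race == "B") == (random.random() < 0.8) else "10001"
    # Historical default rates differ by neighborhood, for the
    # kinds of historical reasons described in the text.
    defaulted = random.random() < (0.30 if zip_code == "90210" else 0.10)
    applicants.append((race, zip_code, defaulted))

# "Training": the correlation engine memorizes default rate per zip code.
totals = defaultdict(lambda: [0, 0])
for race, zip_code, defaulted in applicants:
    totals[zip_code][0] += defaulted
    totals[zip_code][1] += 1
risk = {z: d / n for z, (d, n) in totals.items()}

# "Scoring": deny anyone whose zip's historical default rate exceeds 20%.
def approve(zip_code):
    return risk[zip_code] < 0.20

# Disparate impact falls out automatically: race was never an input,
# but approval rates by race diverge via the zip-code proxy.
by_race = defaultdict(lambda: [0, 0])
for race, zip_code, _ in applicants:
    by_race[race][0] += approve(zip_code)
    by_race[race][1] += 1
rates = {r: a / n for r, (a, n) in by_race.items()}
print(rates)
```

Running this, group A's approval rate lands near 80% and group B's near 20%, even though the decision rule is "pure statistics" with no race column anywhere in it. That is exactly the de facto redlining a court would be unimpressed by.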