I've just started my career in ML, and I already have the feeling that most of the work people around me are doing could be done by an automated pipeline (Python AutoML tools like TPOT or auto-sklearn) plus enough compute thrown at the problem. It's quite worrying.
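To make that concrete, here's roughly what I mean, using TPOT (one of the Python AutoML libraries; I'm assuming its classic API, and the settings below are illustrative, not tuned):

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from tpot import TPOTClassifier  # pip install tpot

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Evolutionary search over preprocessing steps, model choice and
    # hyperparameters; generations/population_size are placeholder values.
    automl = TPOTClassifier(generations=5, population_size=20,
                            random_state=0, verbosity=2)
    automl.fit(X_train, y_train)
    print(automl.score(X_test, y_test))
    automl.export("best_pipeline.py")  # dumps the winning pipeline as sklearn code

A loop like this picks the preprocessing, the model, and the hyperparameters on its own, which is exactly the part of the job that feels automatable.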
For instance, if you model insurance data in the EU, you cannot use gender as a factor in pricing (even though it's predictive).
In general, the modelling/ML pipeline is the easy bit; the hard parts are cleaning the data and figuring out how to translate a business problem into one that can be solved with data.
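To give a flavour of that first part, here's the kind of mess a raw extract tends to be and the unglamorous cleanup it needs (the data below is invented, but representative):

    import pandas as pd

    # Invented raw extract: the kind of mess that eats most of the project time.
    df = pd.DataFrame({
        "region": ["North", "north ", "NORTH", "South", None],
        "premium": ["1,200", "950", "n/a", "1100", "875"],
        "start_date": ["2023-01-05", "05/01/2023", "2023-13-01", "", "2023-02-10"],
    })

    # Normalise inconsistently cased/padded category labels.
    df["region"] = df["region"].str.strip().str.title()

    # Strip thousands separators and coerce non-numbers ("n/a") to NaN.
    df["premium"] = pd.to_numeric(df["premium"].str.replace(",", ""), errors="coerce")

    # Parse dates; anything unparseable (bad month, empty string) becomes NaT.
    df["start_date"] = pd.to_datetime(df["start_date"], errors="coerce")

    print(df.dtypes)

None of these steps is hard individually; the hard part is knowing, column by column, which fix is actually correct for the business, and that's not something a pipeline search can decide for you.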
tl;dr: as many people have said about Comp Sci over the years, learn the fundamentals (statistics and experimental design) and you'll be in a much better position.
AutoML is pretty horrible if you're doing anything complicated: even with Google-scale compute, SotA DNNs are hand-crafted, not auto-found. Feature engineering is also quite important for a lot of data types, and its search space is far too large to probe with AutoML.

At the same time, if your problems are mostly solved by logistic regression/random forests/etc. with simple features (i.e. you have well-defined categories and/or states, or your task is well solved in the literature), then ML is not your value proposition; it's more data/business analytics (and AutoML should enable you to deliver that value faster).
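To make that last point concrete, here's the kind of baseline I mean; a toy sketch where the dataset, column names, and hand-built features are all invented for illustration:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Invented transactions table; labels are random here, so the printed
    # score is only a placeholder, not a result.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "amount": rng.lognormal(3, 1, 1000),
        "hour": rng.integers(0, 24, 1000),
        "label": rng.integers(0, 2, 1000),
    })

    # Hand-crafted features: cheap domain knowledge that a blind AutoML
    # search over raw columns is unlikely to stumble on.
    X = pd.DataFrame({
        "log_amount": np.log1p(df["amount"]),               # tame the heavy tail
        "is_night": df["hour"].isin(range(6)).astype(int),  # odd-hours flag
    })
    y = df["label"]

    print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())

If a couple of domain-informed features plus logistic regression get you most of the way there, the value you're adding is the domain knowledge behind the features, not the model search.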