> - The product engineer: highly if not completely AI-driven. The human supervises it by writing specifications and making sure the outcome is correct. A domain expert fluent in AI guidance.
If AI continues to improve, what would be the reason a human is needed to verify the correct outcome? If you consider that these things will surpass our ability, then adding a human into the loop would lead to less "correct" outcomes.
> - The tech expert: Maintain and develop systems that can't legally be developed by AI. Will have to stay very sharp and master their craft. Adopting AI won't help them in this career path.
This one makes some sense to me, but I am not hopeful. Our current suite of models only exists because the creators ignored the law (copyright specifically). I can't imagine they will stop there unless we see significant government intervention.