The way humans communicate is inefficient. The most likely scenario is that the AGI will integrate with different systems to do the job. The AGI itself will be a distributed system that scales horizontally, so it will effectively be a single huge entity with lots of interfaces.
You're assuming that the AGI will communicate with the agents directly instead of through an LLM. If the agents are actually intelligent, the AGI may not be able to assume they aren't human, in which case it's safer for the AGI to use the LLM to define instructions for all tasks. And if that's the case, then being generally intelligent, it will want to do all the work itself.
The only reason human communication is inefficient is that it's slow. If an AI can read and write thousands of words per second, there's no reason it shouldn't use natural language to communicate.