I was responding to someone earlier saying a user agent should respect robots.txt. An LLM-powered user agent wouldn't follow links, invisible or not, because it's not crawling.
There's a fuzzy line between an agent analyzing the content of a single page I requested, and one making many page fetches on my behalf. I think it's fair to treat an agent that clicks an invisible link as a robot/crawler, since that agent generates more traffic than a regular user agent (a browser) would.
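For what it's worth, an agent that does decide to follow discovered links can check robots.txt first with the Python standard library's `urllib.robotparser`. This is just a sketch: the agent name and the sample policy below are hypothetical, and a real agent would fetch the site's actual robots.txt rather than parse inline rules.

```python
from urllib import robotparser

# Build a parser and feed it a sample policy directly.
# (A real agent would call rp.set_url("https://example.com/robots.txt")
# followed by rp.read() to fetch the live rules.)
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Before following a link found on a page, ask whether the policy allows it.
print(rp.can_fetch("MyLLMAgent", "https://example.com/public/page"))   # True
print(rp.can_fetch("MyLLMAgent", "https://example.com/private/data"))  # False
```

A single-page fetch made directly at the user's request wouldn't need this check; it only becomes relevant once the agent starts fetching pages it discovered on its own.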
Just trying to make the point that an LLM-powered user agent fetching a single page at my request isn't a robot.