The operator of a crawler doesn't need to sign an agreement for the prohibition to be enforceable. See eBay v. Bidder's Edge. This was 14 years ago, folks.
What is the point of having such a silly prohibition? It's silly because anyone can crawl the site if they want to. Facebook can block an abusive crawler the same way it would block a DDoS attack, so why would they bother putting up a sign they know is useless?
It's probably just their way of explaining how those user-agents that do not get the catchall Disallow: / treatment got into that robots.txt file. Also, including some lawyerisms might be quite effective at reminding upstart scrapers that faking the googlebot UA would be even less cool than simply ignoring robots.txt.
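To make the mechanics concrete: a robots.txt groups rules by User-agent, with a catchall `*` group for everyone not named. Here's a minimal sketch using Python's stdlib parser; the rules below are a made-up illustration of that pattern, not Facebook's actual file.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: one named crawler gets special treatment,
# everyone else falls into the "Disallow: /" catchall group.
rules = """\
User-agent: Googlebot
Disallow: /ajax/
Allow: /

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The whitelisted UA gets in; an unlisted one hits the catchall.
print(rp.can_fetch("Googlebot", "https://www.facebook.com/zuck"))   # True
print(rp.can_fetch("SomeNewBot", "https://www.facebook.com/zuck"))  # False
```

Which is exactly why a scraper faking the googlebot UA string sails right past the file — the prohibition only binds crawlers that honestly identify themselves.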
I frequently wonder: is Facebook allowed to say, "Bing, you can crawl us. NewCompetitor, you cannot."
I feel like once a company allows public access by posting stuff on the web, they can specify terms, but not include/exclude specific groups. (In a legal sense; I understand blocking systems that hammer your servers, and well-behaved crawlers will respect robots.txt anyway. IME Bing is the worst offender: it hammers my sites and sends no traffic, but it will stop if I say so in robots.txt.)
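For what it's worth, "say so in robots.txt" can mean either throttling or a full block. A sketch of both, assuming the crawler in question honors the (non-standard but Bing-supported) Crawl-delay directive:

```
# Hypothetical robots.txt fragment.
# Crawl-delay asks bingbot to wait N seconds between requests;
# Disallow shuts it out of a path entirely.
User-agent: bingbot
Crawl-delay: 10
Disallow: /search/
```

Crawl-delay is not part of the original robots exclusion standard (Google ignores it, for instance), so whether it helps depends entirely on which crawler is doing the hammering.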
Does anyone have an opinion about "once public, I can crawl"?
I can think of no reason why there would be any such restriction.
Suppose Facebook is getting paid by Bing, and won't offer crawling to those that aren't paying it? Suppose Facebook considers Baidu's crawler to be evil and chooses to prohibit it for that reason? Suppose Facebook just kind of likes the guys at Bing and decides to allow them special access? If you agree in the first place that Facebook should have the right to put ANY sort of restrictions on who can crawl their site, then why should ANY of these be prohibited? This is not a "common carrier" kind of situation.
That's because it isn't. Or at least not in the traditional sense. It's not just some old bullshit, scripted in PHP, running on an array of scrappy LAMP boxen.
For one, yeah, it executes PHP code, sure. But even there things are already different: the reality is that they've written a substantial code base in C/C++ (HipHop/HHVM, the machinery that actually compiles and runs their PHP, is itself C++).
And, two, I'm sure they retain some serious business proprietary trade secrets about their server infrastructure, meaning that while the web front-end might render out HTML like a souped-up CDN, behind the scenes, there is a shit ton of other stuff going down.
Honestly, I think they just leave the file name extensions in the URL for the sake of nostalgia.
https://www.facebook.com/robots.txt