In a sense, this is true. The view that all bugs are static, determinable, and caused by a lack of care and attention is old-fashioned and demonstrably wrong.
In mature code bases (e.g. the core Linux kernel), (almost) all the low-hanging bugs have been picked clean. What remain are faults triggered by untested configurations, unconsidered usages, and ordering non-determinism. The last one is particularly chaotic - you can have a system that stays up for years, then falls over due to an extremely unlikely combination of events.
Think about it this way: any statically compiled program with an unbounded loop, which calls malloc() with runtime-determined sizes and later issues balanced but differently ordered free() calls, may crash. This is possible even though nothing leaks and the worst-case working set fits in memory. The problem is that under some combinations of allocation sizes and orderings of malloc vs. free, the heap will fragment and effectively "leak", eventually exhausting resources. Whether this will happen to a given program is generally not determinable. They all survive at the whim of probability, although the odds are usually very much in their favor.
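To make that concrete, here's a minimal C sketch of the pattern (the sizes and counts are made up for illustration): it interleaves small and large allocations, frees only the large ones, and then asks for one block bigger than any single remaining hole. Whether the final request fails depends entirely on the allocator - glibc, for instance, will typically satisfy a request this large via mmap, while a simple first-fit heap can refuse it - which is exactly the kind of non-determinism described above.

```c
/* Illustrative fragmentation sketch; sizes and counts are arbitrary. */
#include <stdio.h>
#include <stdlib.h>

#define SLOTS 4096

int main(void) {
    char *small[SLOTS];
    char *big[SLOTS];

    /* Interleave small and big allocations so each big block ends up
     * sandwiched between small ones. */
    for (int i = 0; i < SLOTS; i++) {
        small[i] = malloc(64);
        big[i]   = malloc(4096);
        if (!small[i] || !big[i]) { perror("malloc"); return 1; }
    }

    /* Free every big block: SLOTS * 4096 bytes are now free in total,
     * but scattered into SLOTS separate holes, pinned apart by the
     * small blocks that are still live. */
    for (int i = 0; i < SLOTS; i++)
        free(big[i]);

    /* A single request larger than any one hole may now fail (or force
     * the heap to grow) even though plenty of memory is "free" overall.
     * On many modern allocators this succeeds anyway via mmap - the
     * outcome is allocator-dependent, which is the point. */
    char *huge = malloc((size_t)SLOTS * 4096);
    printf("huge allocation %s\n", huge ? "succeeded" : "failed");

    free(huge);  /* free(NULL) is a no-op, so this is safe either way */
    for (int i = 0; i < SLOTS; i++)
        free(small[i]);
    return 0;
}
```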
And yet the meat and potatoes of guys like Eliot and his adversaries are things like Heartbleed or goto fail (missing bounds check, improper braces plus copy-paste). The bugs you speak of are IMO mostly unexploitable, don't make the news, and are about as existentially terrifying as the possibility of being hit by lightning.
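For reference, goto fail really was that mundane. The sketch below reproduces the control-flow bug from Apple's SecureTransport (CVE-2014-1266) using made-up stand-in check functions rather than the real ones: a single duplicated line skips the signature check and still reports success.

```c
#include <stdio.h>

static int check_hash(void)      { return 0; }  /* stand-in check that passes */
static int check_signature(void) { return -1; } /* stand-in check that should fail */

static int verify(void)
{
    int err;
    if ((err = check_hash()) != 0)
        goto fail;
        goto fail;  /* duplicated line: always taken, with err still 0 */
    if ((err = check_signature()) != 0)  /* never reached */
        goto fail;
fail:
    return err;  /* returns 0 ("valid") even though the signature was never checked */
}

int main(void)
{
    printf("verify() = %d (0 means the peer is accepted)\n", verify());
    return 0;
}
```

The indentation makes the second goto fail look conditional, but without braces it runs unconditionally - exactly the "improper braces + copy-paste" failure mode mentioned above.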
If I were searching for some deeper meaning in bugs, I'd rather think about how the most elaborate and secure digital structures are still vulnerable to a moment of distraction or laziness in just one person, and how software seems resistant to the sorts of engineering best practices that make aerospace as safe and consistent as it is.
> (almost) all the low hanging bugs have been picked clean
This has never ever been true. Every new framework has to design mass-assignment protection (hopefully). Every new language needs to handle file access safely. Every new authentication library has to rediscover lessons that fill reams of textbooks and whitepapers.