W3C was way too optimistic about XML namespaces leading to the creation of infinitely extensible vocabularies (XHTML2 was DOA, and even XHTML1 couldn't break past the tagsoup-compatible minimum).
This was the alternative – simpler, focused, fully IE-compatible.
W3C tried the proper Semantic Web again with RDF, RDFa, and JSON-LD. HTML5 tried Microdata, a compromise between the extensibility of RDF and the simplicity of Microformats, but nothing really took off.
Eventually HTML5 gave up on it, and took the position that invisible metadata should be avoided. Page authors (outside the bunch who have Valid XHTML buttons on their pages) tend to implement and maintain only the minimum needed for human visitors, so on the Web invisible markup has a systemic disadvantage. It rarely exists at all, and when it does it can be invalid, out of date, or most often SEO spam.
Schema.org metadata (using microdata, RDFa or JSON-LD) is quite common actually, search engines rely on it for "rich" SERP features. With LLMs being able to sanity-check the metadata for basic consistency with the page contents, SEO spam will ultimately be at a disadvantage. It just becomes easier and cheaper to penalize/ignore spam while still rewarding sites that include accurate data.
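To illustrate, here's a minimal sketch of what that schema.org markup looks like in its JSON-LD flavor (the headline, date, and author values are made-up placeholders):

```html
<!-- Dropped into the page's <head> or <body>; it doesn't affect
     rendering, but search engines parse it for rich results. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "datePublished": "2024-03-01",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```

The JSON-LD form is notable because it sidesteps the "invisible attributes sprinkled through the markup" problem that microdata and RDFa have: the metadata lives in one self-contained block, which makes it easier to generate and to check against the visible page content.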
The schema.org vocab is being actively maintained, the latest major version came out last March w/ the latest minor release in September.