I had the same question; it seems silly to build a collection of copyrighted content and apply your own copyright to it.
I guess the argument is the same one all the AI people are relying on: I built this collection of fair use material and I am applying my copyright to the product of my work. I wouldn't want to argue that one in court.
IANAL: As far as I know, making the compilation earns you a copyright. But for a reader to make a copy, they need licenses from you and from the copyright holders of all the images. So in this case maybe the release notice that you quoted only covers the compilation, and licenses for all the individual image copyrights would still need to be obtained.
Fair use is a justification for why copyright restrictions may not apply in a given scenario, not a license to apply new legal restrictions to work you do not own.
A webpage that uses numeric identifiers for external references, which can only be found by scrolling to the very bottom of the page, where they show their URLs as plain text. Now that is a train wreck.
Hyperlinks are the cornerstone of the web. Don't be afraid of using them!
Hyperlinks would be convenient, but something about the raw text / ascii art vibe makes me happy every time I read a blog post from j3s, even if it doesn't have the conveniences of the modern web.
You mean footnotes? As they have been used for centuries in print?
The difference between them and a simple hyperlink is that they can and often will provide some additional context that is outside the scope of the original text. Ideally, on a website meant for computer screens, you wouldn't have them at the end but in the margins, next to the information; for short stuff, though, it is okay to put them at the end of the chapter – bonus points if the reference numbers can be clicked and take you to the footnote, extra bonus points if there is an arrow taking you back up again.
But this is scientific literature style writing, not everything needs footnotes.
Also, using a monospaced font for both the written text and the command line output is certainly a choice. I get that it is often an aesthetic choice, but given that a blog post is written to be read, I don't think it is a particularly good one. Although the last time I made a remark about that on HN, it became clear to me that a lot of people don't see the issue. Even though there are decades' worth of research (at this point) making it clear that a sans serif font (or even a serif font on modern displays) works better for readability. ¯\_(ツ)_/¯
It is clear that the author is very explicitly going for the aesthetics of a terminal, given that all formatting of the text is ASCII based, down to the line length being hard-coded as if we were dealing with a hard column limit.
Rad.FM maker here. Thanks for at least trying it. A) the web app is still in alpha. B) Rad needs your location to make what it says to the listener relevant to where you are and your current time. Also, news & weather need your location.
Thanks for the feedback though, I'll delay the location request to later in the flow + add a popover which explains why it's needed when we get to beta.
The subject is in a vertical orientation, so it is perfect and desirable that the original video has all its resolution dedicated to capturing the phenomenon in the best quality possible. A horizontal video would mean fewer pixels on the subject matter.
Honestly, I got similar feedback when I got this reviewed internally. At this point I am not sure how to write so that it doesn't seem LLM generated.
It would be helpful if you could share why you thought this was LLM generated. The suggestions I have gotten so far have been to remove bullet points and sections, which I feel breaks readability.
I don't think it's so bad, but if I had to guess, it's from the division / breakdown of sections and lists, which reads a lot like the formulaic approach you get from an LLM (which is not necessarily bad, just common in the output). E.g. "Docker and Docker Compose can simplify the process of installing and managing services. They allow you to:" etc etc. This may sound like an LLM covering all its bases rather than a human explaining subject matter.
That's just my take, again I don't think it's that bad. The article would be a useful breakdown for beginners.
(Also, I'm sure you know, LLM content sounds that way because the LLM was trained on content just like this, so it's not really surprising that a guide generated by an LLM would sound like the kind of guide that was used to train an LLM...)
Not parent commenter, but I've been trying to verbalize why it feels LLM-like.
- h2 titles feel as basic as possible, just "what self-hosting, who self-hosting, why self-hosting, ..."
- SEO spam often overuses keywords; on this page, it feels like "self-hosting" is used a bit too often, even if it's well-intentioned
- the text ends in a classic LLM warning "remember to be careful"
- predictable sentence patterns
Some of these things are good for readability. I guess this article feels a bit too plain? I think tech company blog posts add a unique style and voice these days, because otherwise they'll blend in with the average SEO/LLM content.
Also editing nits:
> self hosing
> Self Hosting
> atleast
Good self-hosting tips, though. Thanks for sharing.
The point about the overuse of "Self Hosting" is fair. Better H2 titles would have made it less frequent. I will be more thoughtful about this next time.
The unique style and voice is what I am struggling with. I have always been instructed to write in a plain tone and simple English so that it's easier to read through.
I tried reading the article with the GP's comment in mind. For most of the sections it didn't feel like there was anything that would flag it as LLM generated for me.
But when I got to "How to Start Self-Hosting?", which is the section I was most interested in, I got a strong sense of déjà vu.
Reading this section felt exactly like hitting a bad prompt on ChatGPT: I'm being given a huge dump of keywords but nothing that lets me make any progress. I felt the same frustration I do with ChatGPT when I have to prompt it again with "Can you elaborate on bullet point 6" to get anything useful out of it.
With ChatGPT the reason is usually a prompt that was either too broad/open-ended or a difficult topic for ChatGPT to answer. And it has a tight limit on how long the answer can be, which is understandable. For an article though it feels a bit jarring and there is no immediate way to ask for details.
I think the rest of the article is fine, really. Sure, the word of caution is exactly what LLMs do, but unlike LLMs, which usually state the obvious, it has a lot of useful information.
This does illustrate a problem when talking about complex topics or mechanisms: the need for specificity. Using short, simple sentences comes at the risk of making things seem overly vague and hand-wavy, or worse, misrepresenting the concept.
In continental philosophy or mathematical papers this becomes all too apparent, as a lot of arguments hinge on very fine differences and nuances that need to be specified, or else people get the wrong idea.
> The prompt comprehension is incredible! #auraflow
> "a cat that is half orange tabby and half black, split down the middle. Holding a martini glass with a ball of yarn in it. He has a monocle on his left eye, and a blue top hat, art nouveau style "
Plus an image that somewhat resembles that prompt. The cat has a human-like hand with a chopped-off thumb and six fingers in total, differently colored eyes, and a branch in front of its face, and the ball of yarn is somehow floating in mid-air.
These are somewhat valid issues. But given the currently available open models, this is a massive improvement. The human-like hand and the change of style between the two sides of the head aren't even bad in themselves - those are valid artistic choices you'd see in similar illustrations - they're just badly executed here.
How can this be legal? All imagery is taken from (usually) non-free movie trailers.