Hacker News | mdeck_'s comments

Sounds like a classic case of F.A.F.O.


The argument in this piece is a classic example of slippery slope-ism. The question of fair use, or of balancing between copyright maximalism and copyright minimalism, is a so-called line-drawing problem (given a spectrum of cases, deciding where the line of “ok” vs “not ok” should be). The article says that ANY decision in favor of the photographer/against Warhol would basically ruin fair use doctrine.

This proposition has barely a hint of truth in it. Especially considering how clever lawyers, SCOTUS justices included, can be in distinguishing one case from the next.

I’d hardly be worried.


I’m not sure how it could possibly be useful to anyone to hear that they shouldn’t be in the place where a nuclear explosion occurs.


> Between late 1978 and early 1981, drivers in the U.S. saw the price at the pump nearly double from 63 cents to $1.31 a gallon.

Not sure how that change would amount to NEARLY doubling...


I don’t think inflation was on holiday during those years… Also, “nearly” doesn’t have to mean “almost.”


Next perhaps they can get Wolf Blitzer to stop shouting literally every sentence that comes out of his mouth as if it is all “BREAKING NEWS.”


I just hope they take Don "pandering propaganda" Lemon off the air. Hard to take them seriously as a news source when you see that show.


what if we all just agreed that "teevee news presenter personalities" were a dumb cultural relic of the pre-internet era & as a society moved on from accepting the validity of their entire schtick?


It won't happen. There's a lot of demand for media personalities, particularly from lonely people who like to form parasocial relationships with these celebrities. Pithily put: cable news personalities are to lonely boomers what twitch streamers are to lonely zoomers.


I guess us Millennials are stuck feigning conversations with our morning avocado toast.


> I think if you have the luxury of assuming every token is a dictionary word, you can do much better by simply encoding each word as its index in the dictionary.

Then you have to store the dictionary.


It's the standard pool/index tradeoff: instead of storing an array of possibly-duplicate objects, you store each unique object once in a pool and store the array as an array of references into that pool.

You win if the original array of duplicate objects was so long or so duplicated that replacing it with references into the pool is a worthwhile reduction. The objectSize/indexSize ratio also plays some role.
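A minimal sketch of the pool/index idea in Python (names are illustrative; a real dictionary coder would also handle case, punctuation, and out-of-vocabulary words):

```python
def encode(words):
    """Replace a list of words with a pool of unique words
    plus a list of indices into that pool."""
    pool = []    # unique words, in first-seen order
    index = {}   # word -> position in pool
    out = []
    for w in words:
        if w not in index:
            index[w] = len(pool)
            pool.append(w)
        out.append(index[w])
    return pool, out

def decode(pool, indices):
    """Rebuild the original word list from the pool and indices."""
    return [pool[i] for i in indices]

text = "the cat sat on the mat the cat".split()
pool, encoded = encode(text)
# 8 tokens in the message, but only 5 unique words in the pool;
# the savings grow with repetition and with object size.
assert decode(pool, encoded) == text
```

The win shows up exactly as the comment says: the more duplicated (or the longer) the original objects, the more the index array plus one pool copy beats storing every occurrence in full.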


The entire English dictionary (being very generous on what a "word" is) is around 4MB - nothing nowadays.

Not to mention, of course, that your computer probably already has 50 copies of it somewhere if you really don't want to bundle it.


The vast majority of text objects anybody generates won't exceed 4MB, though. 4MB is 4 million ASCII/UTF-8 characters; if we assume a typical English word is 4 characters for simplicity, that's 1 million words without spaces. A quick Google search for "novel that is 1 million words" suggests that's twice the word count of The Lord of the Rings, or roughly the word count of the first 3 A Song of Ice and Fire (Game of Thrones) books. Accounting for whitespace, longer common words, and inefficient encoding schemes would bring that overestimate down to, what, 150K words? A 300-page or 600-page book (depending on the spacing) according to Google. Still massive.

I see it only working where there's massive pooling, like you say. An OS or a tool provides a known dictionary service: you call into it with a text string and get back an array of indices that you can then decode with another call. That amortizes the dictionary cost across all possible uses of the service, which is a lot if it's a standard, well-known service. Another scenario is a database/cloud storage/social media service: any single text object might be small, but the total text stored overall is massive.


Except that the corpus doesn't need to be stored as part of the compressed message, and can be considered part of the compression algorithm. It increases the size of the decoder by ~4MB, but doesn't increase the size of each message.
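A toy sketch of this point, assuming a fixed dictionary that encoder and decoder both already know (the tiny eight-word dictionary here is purely hypothetical): the transmitted message carries only indices, so the dictionary's size is paid once in the decoder, not per message.

```python
# Shared between encoder and decoder ahead of time; it is part of
# the "algorithm", never part of any transmitted message.
SHARED_DICT = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
WORD_TO_ID = {w: i for i, w in enumerate(SHARED_DICT)}

def compress(text):
    # One byte per word works because the dictionary has <= 256 entries.
    return bytes(WORD_TO_ID[w] for w in text.split())

def decompress(data):
    return " ".join(SHARED_DICT[b] for b in data)

msg = "the quick brown fox jumps over the lazy dog"
packed = compress(msg)
assert decompress(packed) == msg
assert len(packed) == 9   # 9 words -> 9 bytes on the wire
```

With a real ~4MB English dictionary the principle is the same: each message shrinks to index-sized units, and the dictionary inflates only the decoder, once.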


The autocorrect method also uses a dictionary, and then some.


The ultimate significance of the patents contained within the portfolio (how they will empower business plans, why they may be quite valuable) will not necessarily be apparent from simply looking at the technology described. And a potential buyer would always want to keep these hidden things (essentially its own business plans) secret.


Comparing ALL software developers to builders or veterinarians isn’t fair. Veterinarians are continually holding animals’ lives in their hands. Builders almost invariably build things that could fall on someone’s head and kill them—so a license is needed. If you just want to build a doll house, though, no one is going to require a license.

Software developers sometimes work on critical infrastructure—and licensing there does make sense. But requiring a license for me to build a silly web app seems like a classic example of what people complain about when they complain about unnecessary government regulation.


Sounds like a modified/more complex version of the well-known RICE framework.

https://www.intercom.com/blog/rice-simple-prioritization-for...


Admittedly it seems less common, but... there certainly are some Smeds in Sweden (and the rest of Scandinavia). The names are not ALL sons/sens and geographic features... https://forebears.io/surnames/smed


Less common is an understatement. Fewer than 300 people in Sweden have Smed(h) as their surname. We also don't know whether the name originated in Sweden or is a Swedification of a foreign surname.

