
They could steal a smaller cannon first and use that on any chains or locks.


Join the rest of us in post-1950s complexity theory. We can use register machines now. Or you can bust out the big bucks and buy a RASP.


LE by default successfully ran 2 months prior. 2 months and 5 years are two completely different worlds in terms of bit rot. That, and there are many generic tutorials, scripts, and knowledgeable devs around for configuring LE fresh.


It's all just theories. But yes, the article is basically summarizing a new paper/model. Most likely the model is incorrect in some ways, but maybe useful in others.


Do you mean hypothesis? A theory is a well-substantiated explanation based on evidence...and has undergone some kind of rigorous testing/verification. A hypothesis, on the other hand, is "just" a proposed explanation/prediction based on little (or no?) evidence...which is then tested through experiments, etc.


A hypothesis is a proposition. A theory is a model. The two are different concepts but neither implies rigor or lack of rigor. It’s harder to build a complete model from nothing but it happens all the time.


Theories about the universe as a whole (or anything outside the solar system at this point) can't really be tested in the normal sense. You can make a theory and then try to look for more data/make observations about the universe and see if they match, but there's no control and you might never get the data you need to say one theory is better than another. You can get lucky of course, but it's not like you can recruit 40_000 universes for a double blinded placebo controlled clinical trial.


That's poppycock. RCTs aren't necessary for high-quality evidence, in either physics or nutrition.


You can at least test some parts of quantum physics and general relativity on and around Earth, with satellites in orbit with very precise clocks and double slit experiments and that sort of thing. For everything outside the solar system you can just observe and hope new data arrives that you are not already unblinded to.


Try to convince a member of science's fan base (which includes many actual scientists) of this during an object level discussion about a particular point of contention and see how well it goes over.


I mean, yeah, that's basically what high-compression solutions like PAQ have done: depending on the compression level desired, apply increasingly speculative and computationally intensive models to the block and pick whichever one works best.
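A toy sketch of that pick-the-winner shape, with stdlib codecs standing in for PAQ's actual context models:

    import bz2
    import lzma
    import zlib

    def compress_best(block: bytes):
        # Try several "models" on the block and keep whichever
        # output is smallest.
        candidates = {
            "zlib": zlib.compress(block, level=9),
            "bz2": bz2.compress(block, compresslevel=9),
            "lzma": lzma.compress(block, preset=9),
        }
        winner = min(candidates, key=lambda name: len(candidates[name]))
        return winner, candidates[winner]

    print(compress_best(b"abracadabra" * 1000)[0])

PAQ proper mixes model predictions bit by bit rather than racing whole codecs, but the spend-more-compute, keep-the-smallest principle is the same.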


And then, when nobody wants to implement all the compression algorithms in a compressor or decompressor, we end up with files out in the wild that only pick one of them anyway.


But what if they stole it for their 7-year-old, cancer-ridden son, who only wants to be happy once before he dies?


[citation needed]


Think about it -- a vinyl record can hold ~22 minutes of music per side, and a DVD-R can hold ~120 minutes of video per side, in a much smaller area.
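Rough numbers (ignoring hubs and lead-in areas, so purely illustrative):

    from math import pi

    lp_radius_cm, lp_minutes = 15.0, 22.0    # 12" LP side
    dvd_radius_cm, dvd_minutes = 6.0, 120.0  # single-layer DVD-R side

    lp = lp_minutes / (pi * lp_radius_cm ** 2)
    dvd = dvd_minutes / (pi * dvd_radius_cm ** 2)
    print(f"LP: {lp:.3f} min/cm^2, DVD: {dvd:.3f} min/cm^2")
    # -> roughly 34x the playing time per unit area, and video at that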


Just a rough calculation:

Each pit on the DVD represents 1 bit, but needs at least 1 pixel to be stored. You need 8 bits to store a pixel. So you have a write amplification of at least 8 when taking a picture of a DVD.
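Spelling that out (1 pixel per bit and 8-bit grayscale pixels are the simplifying assumptions here):

    dvd_bytes = 4.7e9           # single-layer DVD-R capacity
    dvd_bits = dvd_bytes * 8    # ~3.76e10 pits/bits to resolve
    image_bytes = dvd_bits      # one 8-bit pixel per bit
    print(image_bytes / dvd_bytes)  # 8.0 -- a ~37.6 GB photo of a 4.7 GB disc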


That's why I suggested using a couple of light sources - you could take an image that's lower resolution than 1 pixel per bit and still resolve the image data to the DVD data based on the color. E.g. 10 would be a different color from 01.
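A toy sketch of what I mean (the color assignments are made up, purely illustrative):

    # With two light sources, each *pair* of bits maps to one of four
    # distinguishable colors, so one camera pixel carries two bits.
    PAIR_TO_COLOR = {(0, 0): "black", (0, 1): "red",
                     (1, 0): "green", (1, 1): "yellow"}
    COLOR_TO_PAIR = {v: k for k, v in PAIR_TO_COLOR.items()}

    bits = [1, 0, 0, 1, 1, 1]
    pixels = [PAIR_TO_COLOR[pair] for pair in zip(bits[0::2], bits[1::2])]
    decoded = [b for px in pixels for b in COLOR_TO_PAIR[px]]
    assert decoded == bits  # half the pixels, same data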

It's not even remotely practical but I thought it was an interesting idea. Judging by the way the votes are yo-yoing up and down, about half of HN readers agree. :)


I'd assumed some optical magnification would be required in combination with multiple photos per disc.

Then software would have to sort out the mess.


Come on, you can calculate that without a calculator.


You don't even need to do the maths. Just looking at a vinyl record would give the game away since you can see the grooves with your naked eye (as a DJ in an earlier life, I used that as a visual cue for when to cue up the next record, swap the basslines, etc).

Not to mention how albums would often be spread over two records, whereas the same album could fit on a single audio CD (let alone DVD).

I'm guessing the GP has never owned any vinyl. I could forgive someone questioning the density of records if he's not familiar with the tech.


> I'm guessing the GP has never owned any vinyl. I could forgive someone questioning the density of records if he's not familiar with the tech.

I'm old enough to know about vinyl. That's not really the point though - the principle is the same, just with much higher information density. With a suitably high resolution camera, or set of cameras, or video, and some serious magnification it should be possible to photograph a DVD and play back the data from the image. I find that quite interesting, or at least entertaining, to think about. I guess the HN readers who're downvoting the post don't.

Something to consider - technically we can already do it - it's how DVD players work, albeit with a single coherent light source and only reading one bit at a time.


> I'm old enough to know about vinyl. That's not really the point though

I was talking about user U2EF1 (the "citation needed" comment). I didn't have any particular issue with your comment aside from its impracticality with current consumer hardware (which I'll address below). But as a concept it's an interesting point.

> the principle is the same, just with much higher information density.

The problem is in the detail. Even with gramophone records, you'd get a very low success rate with a consumer camera. Plus you'd need multiple macro pictures and a precise way to stitch them together. At which point it would be quicker to play the record while recording it in Audacity (or similar) in real time. So you're talking less than 1x record speed and lower success rates to boot; and that's just low-density vinyl. When talking about DVDs you'd need to improve the operation by several orders of magnitude in terms of accuracy and resolution.
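For a sense of scale, a rough sketch using commonly quoted single-layer DVD geometry (~0.4 µm minimum pit length, data zone between ~24 mm and ~58 mm radius -- ballpark figures), sampled at 2 pixels per minimum feature:

    from math import pi

    pixel_um = 0.4 / 2                            # 0.2 um per pixel
    data_area_um2 = pi * (58_000**2 - 24_000**2)  # data zone radii in um
    pixels = data_area_um2 / pixel_um ** 2
    print(f"~{pixels / 1e9:.0f} gigapixels")      # ~219 gigapixels

Against the tens of megapixels in a consumer sensor, that's nearly four orders of magnitude short, before you even worry about optics and alignment.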

So I think your comment was interesting from a conceptual point, but we're a long, long, long way off from having that level of detail in consumer photographic devices.

> Something to consider - technically we can already do it - it's how DVD players work, albeit with a single coherent light source and only reading one bit at a time.

I think it's a little disingenuous to compare a laser reading binary reflections sequentially with a digital sensor detecting literally billions of analogue reflections (because a camera isn't just detecting the existence or absence of light) in parallel and then forming a precise sequence of digital bits from that. The technologies are completely different, the scales are completely different, the concept is completely different. They're just not equatable.

> I find that quite interesting, or at least entertaining, to think about. I guess the HN readers who're downvoting the post don't.

It wasn't me who downvoted you but if it's any consolation, I've gotten downvoted for factually accurate comments before (let alone impractical suggestions). You just have to remind yourself that it happens occasionally and usually the positive outweighs the negative. :)


So he really means they were "waiting for IO" bound.


C has some of the best static analysis and debugging tools of any language, but all of that's worthless if you don't use them. Doubly so if you specifically handicap those tools and write your own obfuscated allocation scheme. Heartbleed was less an indictment against C and more an indictment against shitty code.


I don't see how one statement excludes the other. C is not the best tool to write secure software, but at the same time people have figured out how to use it securely despite its deficiencies. Heartbleed was a failure of both the tool and how it was used.


Heartbleed wasn't the fault of C... It would have been caught by Valgrind and other tools if OpenSSL hadn't implemented their own allocator. Seriously, have you ever used Valgrind? Not a difficult tool to use.

What he (erik) is really saying is: if someone hires me to build a skyscraper and I show up with fatigued, rusty scrap iron... when the skyscraper fails, I should blame iron, and we should all have a talk about how terrible iron is and why no one should be using it. And whenever someone points out that I used rusty scrap, I'll just say, well, titanium wouldn't have been rusty! Why aren't we all using titanium?


You could run something like TAILS in a VM or Docker, I guess.

