JunkDNA's comments

I have fond memories of playing the original Falcon on my Amiga 500. It felt like magic after years of playing F15 Strike Eagle on the Apple IIc. Hearing real sound effects, kicking in the afterburner, and getting too close to the ground ("Pull Up! Pull Up!") were all so satisfying.

I remember being so excited when Falcon 3.0 came out. But it just felt like a letdown. The graphics were amazing for the time and it seemed so realistic, but for me the realism is what killed all the fun. As a kid, I didn't actually want to BE an expert F16 pilot. I just wanted to feel like I was. I didn't want to have to learn all the systems and controls.


I felt a similar way when I moved from Chuck Yeager's Air Combat to (I think) F15 Strike Eagle 3. Yeager was the perfect mix for me. I remember rarely even seeing the enemy planes I was firing at in newer games.


That and F-18 as well.

I think that magic is now gone. Back then, playing games was a bit like reading a book: we had to use our imagination to compensate for the lousy graphics, especially when we were able to visit arcades and see how much better games could look.

Then suddenly having computers at home with similar, arcade-like graphics felt like the future.

Now, in many AAA games, we get real-time rendering without useful gameplay.


Having been the kid who loved to geek out over flight sims, and then the adult who was fortunate enough to have flown military jets, I find the trend for uber-realistic military sims like DCS and such kind of sad in a way.

I mean, the software itself is impressive. But the idea of grown adults geeking out over old versions of the NATOPS and trying to develop tactics and such is frankly cringe. You're never going to get it right, because the actual thing is classified. And from the outside looking in, it's like watching a kid put on Dad or Mom's suit jacket to play "office."


>I find the trend for uber-realistic military sims like DCS and such kind of sad in a way.

But things like Ace Combat, "H.A.W.X", War Thunder, Project Wingman, Nuclear Option, all those games are incredibly popular. The arcade combat genre is alive and well.

I do wish games like VTOL VR, DCS, and other more serious sims had a "maybe don't make me read 700 pages of manual to lock and fire a missile" option. Even something as simple as VTOL VR telling me exactly what machine my targeting pod is locked onto would be nice. Just give me a name and basic specs. It's a damn game, I shouldn't need to memorize target silhouettes unless I want to.

IL-2 Sturmovik: Great Battles actually does this well, with an entire page of "simplify things please" options.


> You're never going to get it right, because the actual thing is classified.

Except for the stuff leaked on the War Thunder forum.


Or, often Russian, FTP servers. Aaah, good times.


Long-time Navy jet jock finds it "cringe" when people try to get a little break from the stresses of their life by attempting in a very small way to emulate what he achieved.

I get your point but come on man, ease up. At least remember that some of those DCS-playing wage slaves helped fund your adventures.


I keep seeing this charge that AI companies have an "Uber problem," meaning the business is heavily subsidized by VC money. Is there any analysis that has been done that explains how this breaks down (training vs. inference, and what current pricing is)? At least with Uber you had a cab fare as a benchmark. But what should, for example, ChatGPT actually cost me per month without the VC subsidy? How far off are we?


It depends on how far behind you believe the openly available LLMs are. Say I can buy $10k worth of hardware and run a sufficiently equivalent LLM at home for the cost of that plus electricity. Amortize the hardware over, say, 5 years and that's $2k/yr; use it 40 hours a week for 50 weeks a year, i.e. 2,000 hours, and you're at $1/hr plus electricity. The electrical cost will vary by location, but let's just handwave another $1/hr (which should be high). So roughly $2/hr, versus ChatGPT's $0.11/hr if you pay $20/month and use it 174 hours per month.
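If it helps, here's the same back-of-the-envelope math as a tiny script, so it's easy to swap in your own assumptions (every number below is an assumption, not a measurement):

    # Back-of-the-envelope: self-hosting vs. ChatGPT cost per hour.
    # All inputs are assumptions; tweak them to challenge the estimate.
    hardware_cost = 10_000          # assumed up-front hardware spend, USD
    amortization_years = 5
    hours_per_year = 40 * 50        # 40 h/week, 50 weeks/year
    electricity_per_hour = 1.00     # deliberately high handwave, USD/h

    self_hosted_per_hour = (hardware_cost / (amortization_years * hours_per_year)
                            + electricity_per_hour)

    chatgpt_monthly_fee = 20.00
    chatgpt_hours_per_month = 174   # ~40 h/week
    chatgpt_per_hour = chatgpt_monthly_fee / chatgpt_hours_per_month

    print(f"self-hosted: ${self_hosted_per_hour:.2f}/hour")  # ~$2.00/hour
    print(f"ChatGPT:     ${chatgpt_per_hour:.2f}/hour")      # ~$0.11/hour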

Feel free to challenge these numbers, but it's a starting place. What's not accounted for is the cost of training (compute time, but also employees and everything else), which needs to be amortized over the length of time a model is used, so ChatGPT's costs rise significantly; but they do have the advantage that hardware is shared across multiple users.


These estimates are way off. With the right serving infrastructure, concurrent requests are nearly free, so the cost per token on a fully saturated node is roughly 1/100 to 1/1000 of that estimate.
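As a rough sketch of what saturation does to the math (all numbers made up for illustration, not benchmarks):

    # Why saturation matters: cost per token falls roughly in proportion
    # to how many concurrent streams a node can batch together.
    # All numbers below are assumptions, not measurements.
    node_cost_per_hour = 2.00     # assumed hourly cost of the node
    tokens_per_sec_single = 30    # assumed decode speed serving one user
    concurrent_streams = 200      # assumed batch size at saturation

    def usd_per_million_tokens(tokens_per_sec):
        return node_cost_per_hour / (tokens_per_sec * 3600) * 1_000_000

    print(f"one stream:     ${usd_per_million_tokens(tokens_per_sec_single):.2f} per million tokens")
    print(f"saturated node: ${usd_per_million_tokens(tokens_per_sec_single * concurrent_streams):.2f} per million tokens")
    # With these made-up numbers the saturated node is ~200x cheaper per
    # token; batch sizes in that ballpark give the 1/100-1/1000 range.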



This article isn’t particularly helpful. It focuses on a ton of specific OpenAI business decisions that aren’t necessarily generalizable to the rest of the industry. OpenAI itself might be out over its skis, but what I’m asking about is the meta-accusation that AI in general is heavily subsidized. When the music stops, what does the price of AI look like? The going rate for chat bots like ChatGPT is $20/month. Does that go to $40 a month? $400? $4,000?


OK, how about another article that mentions the other big players, including Anthropic, Microsoft, and Google. https://www.wheresyoured.at/reality-check/


How much would OpenAI be burning per month if each monthly active user cost them $40? $400? $4000?

The numbers would bankrupt them within weeks.
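To put rough, purely illustrative numbers on it (the 500 million monthly-active-user figure below is my own assumption, not something reported in the thread):

    # Hypothetical burn rate if every monthly active user cost real money
    # to serve. The MAU figure is an assumption for illustration only.
    assumed_monthly_active_users = 500_000_000

    for cost_per_user in (40, 400, 4000):
        burn = assumed_monthly_active_users * cost_per_user
        print(f"${cost_per_user}/user/month -> ${burn / 1e9:,.0f}B/month")
    # $40/user/month alone would already mean ~$20B of spend per month.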


Exactly. If you are an L7 and making an average L7 salary, nobody should have to squint and tie themselves in knots to figure out the connection between your contribution and your employer’s revenue. You are a Ferrari that is purchased new each year.

There’s a place where you can work on things that don’t generate revenue but are morally/technically interesting: it’s called “academia”.


What is a professor's job other than generating revenue? Instead of selling a product, they have to beg for money. Alternatively, they sell their research. Publish or perish. Raise enough money for the school or you're out.


> You are a Ferrari that is purchased new each year.

That's a pretty cheap way of looking at yourself.


They’re not using mRNA. They are using a viral vector that contains DNA.


I’m not sure if it’s a false memory or not but I believe there was a pretty cool demo that used this as the background music as well.


Bingo. I wish I could upvote this comment more. All the geeks get distracted by words like “cloud” or “virtual” and forget that all this stuff we depend on has a physical presence at some point in the real world. That physical presence necessitates humans interacting with other humans. Humans interacting with humans falls squarely in the “things governments poke their noses into” bucket. It’s like the early days of Napster when people were all hot for “peer to peer”, as if that tech was some magic that was going to make record labels and governments throw up their hands over copyrights.


Don't worry, I'll fix all this by creating a unique JavaScript framework that will change the world.


Maybe we could make this framework future-proof by using blockchains? Somehow? Maybe it can use blockchains, or it can be stored on a blockchain, or maybe both at the same time. Surely that will help society in some nonspecific, ambiguous manner.


Remember the people who, decades after the invention of the Internet, kept on insisting that it was useless and only for porn addicts?

Remember the people who, after the invention of the phone, insisted that it was a nice trick but probably only useful for a few businessmen with dictation needs?

Yeah, they all had to change their tone at some point, under the shame of having been wrong for so long.


No, we just need another anonymous distributed networking/storage/social media/coffeemaker protocol to save us.


I highly recommend the piece by Derek Lowe down thread, but the tldr is basically that researchers have believed amyloid plaques cause the symptoms of Alzheimer's and so the theory is that if you eliminate them, you can treat the disease. This drug gets rid of them. But the gold standard is whether or not the drug actually helps people, not whether it meets a technical definition of "working".

This is the drug equivalent of an engineer following a requirements document and saying to a product manager, "Hey, you said the form has to be submitted through the website. You can see here when I hit submit, it submits! The website doesn't save the data anywhere because that wasn't in the requirements".


I was on one of the teams that refuted the claims of horizontal gene transfer in the original human genome paper. The bar for establishing a true case of horizontal transfer in vertebrates is high. It’s really improbable given the required sequence of events laid out in the article. It’s one thing for some DNA to get picked up by random cells in the organism (happens with viral infection all the time). Getting to the germline cells and becoming inherited is a whole other story given that vertebrates have evolved mechanisms to guard against this specific scenario.


When "vertebrates have evolved mechanisms to guard against this specific scenario", it hardly sounds "improbable."


Well, not when the protection is against any form of DNA contamination rather than specifically against foreign DNA intrusion.

The fact that random large mutations typically lead to an inviable zygote should be enough evolutionary pressure; it doesn't need to be specific protection against the entry of external DNA.


The sequences we're discussing aren't really random, though. Presumably the chance of viability with such a sequence incorporated, though still low, is much higher than if it were a truly random sequence.


Are you sure?

A foreign sequence that has specific features (ones that kept the originating organism viable) could have a chance of never being compatible with the target organism.

A completely random sequence, by definition, has some chance of being compatible.

The question is which scenario has a higher chance of success.


This is a case I could see going either way. Random mutations are probably much smaller and closer to the original, and therefore potentially more viable. Yes, it's random, but most of the time it won't have a major effect on the proteins the DNA generates. On the other hand, if we are talking about transferring segments, there's the potential of that DNA to create actively harmful proteins.


I just read the PLOS ONE paper. The arguments they brought forth were strong. If this had been my paper, I would have been livid had it been rejected. However, given the fragmented and buggy state of bioinformatics tooling and databases at the time, I can easily imagine how their extraordinary claims did not cross the "beyond reasonable doubt" threshold. From a reviewer's perspective, a couple of matching disulfide bridges and a negative Southern alone might not have convinced me either. Glad it worked out for her in the end, though.


The issue with the evidence in that paper is that they used primers to amplify the specific genes of interest. That introduces a strong assumption at the start of their analysis: specifically, that these genes appeared in the genomes by some HGT process instead of independently being duplicated internally in each genome from another gene shared among the species. Whole genome sequences were not available for these species at the time. A modern, more complete analysis would look into homologs across whole genomes and try to reject that hypothesis, which is much less extraordinary than animal germline HGT.

That's precisely why the authors published the new Cell paper https://www.cell.com/action/showPdf?pii=S0168-9525%2821%2900... with stronger evidence from whole-genome sequences to support the HGT hypothesis. I'm still trying to wrap my head around Figure 2 there, so I'm on the fence.
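As an aside, the whole-genome homolog check described above is the sort of thing that's easy to script nowadays. A rough sketch, assuming the NCBI BLAST+ tools are installed and using placeholder file names and thresholds:

    # Sketch: search whole genome assemblies for homologs of the candidate
    # HGT genes rather than relying on primer-amplified sequences.
    # Requires NCBI BLAST+ on PATH; file names below are placeholders.
    import subprocess

    genomes = ["herring_genome.fasta", "smelt_genome.fasta"]  # hypothetical inputs
    query = "afp_candidates.fasta"                            # hypothetical AFP gene set

    for genome in genomes:
        db = genome.replace(".fasta", "_db")
        subprocess.run(["makeblastdb", "-in", genome, "-dbtype", "nucl", "-out", db],
                       check=True)
        subprocess.run(["blastn", "-query", query, "-db", db,
                        "-evalue", "1e-20", "-outfmt", "6",
                        "-out", f"{db}_hits.tsv"], check=True)
    # The hit tables can then feed a phylogenetic analysis to test whether
    # the gene trees look like vertical descent plus duplication, or HGT.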


The Trends in Genetics (not Cell) paper seems plausible. I don't study fish genetics or evolution. As I remember, fish genomes tend to have more genome-wide duplications and losses in comparison to other vertebrates. One possibility is that some fish lose AFPs because they don't need them – i.e. the observation could be caused by loss of function instead of gain of function due to HGT. I have to admit that the chance of gene losses across multiple fish lineages is pretty tiny but it is at least associated with a known mechanism.

Anyways, an interesting article.


My understanding is that inherited HGT in vertebrates is now an established mainstream position and that it was mainly the low quality of the original sequences that prevented people from refuting this point (specifically in humans). A lot of the stuff published in 2001 about human genomes was later shown to be of dubious quality, massively overstating the value of the data to make strong conclusions.


So did you know about the paper in question and if so how convinced are you of the claims/evidence in this specific case?


In business, many times the lack of any decision (good or bad) wastes valuable time. Especially for leaders, unblocking teams to move forward has real value beyond whether or not the actual decision is optimal. A great many decisions are reversible. If your decision is reversible (even at some expense), it may be better to just decide and move on. In many cases, you don't have perfect information anyway, so trying to make the right decision causes you to delay the very experiments necessary to get you to an optimum outcome.


I completely agree. I try to position 'not taking a decision' as a decision option itself. Often it clarifies how poor that choice is or whether you are micro-optimizing alternatives.

And sometimes the better decision is to postpone the decision to a later moment, when you have more information.


USB remains the only connector I use where I routinely get the orientation wrong 3 times.

