Hacker News | d4vlx's comments

I think he is referring to the F-35 only here. On military discussion forums the consensus is that the F-35 is superior to everything else out there, with the only exception being that the F-22 has superior air-to-air combat capabilities.


I have seen interviews with Norwegian pilots responsible for integrating the F-35 into Norway’s air force. Reading between the lines, it seemed “complex”. I’m not sure if it is buggy, overly complicated, or what.


I heard a story the other day where kids tried to steal either a Bentley or a Porsche, and could not figure out how to get it into drive.

Is that a feature or a bug?


It's a sign of task saturation, present even when not in combat.

Also, why get all proud and defensive? It's a dead program now. Europe and other regions will buy homespun.


Dollar for dollar, is the F-35 or a drone superior?


“A drone” could mean literally anything from a twenty dollar quadcopter to the next generation $300M NGAD system.


There is currently no drone that can replace everything the F-35 does. There might be one in the future, and it will likely be the most expensive aircraft ever made (see the two NGAD programs' unmanned components).


No drone can take out a C-RAM (essentially a radar-guided aimbot shooting bullets into the sky).

Meanwhile, a helicopter with an anti-radiation missile can take out a C-RAM, let alone a stealthy F-35. The F-35 (and F-16) are the next step after helicopters: you send the F-35 when enemy anti-air is good enough to threaten helis.


Couldn't you just fit your drone with an anti-radiation missile?


ARMs generally require a supporting electronic warfare suite to be effectively employed. EW equipment draws a lot of electrical power, so you need a large aircraft with large engines. This is against current drone trends, though it is likely such drones will eventually be deployed.

Another challenge is that ARMs are designed to fight air defense systems that could potentially include equipment to jam drones. Concepts for future drones include a "drone commander" manned aircraft nearby (as opposed to an easier to jam remote operator) and onboard AI to make autonomous combat decisions.

Overall the drone likely won't be much cheaper than an F-35. But it can be sent on suicidal missions without risking a human crew.


None of the drones people are talking about today are the Predator drones we were using 20 years ago in Afghanistan.

But yes. Large drones (like Predator) can launch missiles. But the news is about smaller drones that are being used to cheaply send small grenades over large distances.


Dollar for dollar, a $10 net takes out a drone.


There is some contiguity, but they are mostly spread all over the city. Take a look at the red properties on this map (zoomable):

https://whydontweownthis.com/2014/mi/wayne/detroit#15/42.347...


"but they are mostly spread all over the city"

6000 homes spread out also means you can create a mini company town by essentially luring people (for a new company) to the city and giving them a choice of different properties, renovated and ready to go, with efficiencies gained by having so many to deal with. (Similar to when, say, Levittown was built: all in one area, of course, but the same concept in terms of construction.)


Many if not most of the houses are beyond renovation. A fair number probably have fire damage (arson in Detroit is legendary). Sitting for years with broken windows, holes in the roof, etc. leaves them rotted shells.


The idea would be to bulldoze them and then just sit on the land until you need it for some reason. Carrying costs are obviously much lower that way, and you can't vandalize an empty lot (although you can dump on it, of course).


From what I remember, this is exactly what Detroit is trying to avoid. They have these auctions with the expectation that they will be lived in either by the owner or a lessee.


This was my idea as well. I also wondered if the owner of the land would attempt to join adjacent lots to create larger plots of land, which could then be combined further.

I'm not a city planner and am not very well versed in municipal zoning laws in Detroit, so I wonder if it's even a possibility.


I worked on online casino software for several years and we had a number of bugs like this. They would usually involve someone writing a client to hit our API directly and fiddling with the xml messages. The operators would usually catch the issues within hours, turn off the game and often deny the players their winnings, if it was a legitimate win. Several people wound up in jail for exploits.

In the early days of the company it was a fairly significant problem with incidents pretty frequently. The lack of QA and a release process really bit them a few times as well. A game was put into production with the result hard coded to a win. Goes to show that just because a company makes software that deals with money and is financially successful it does not mean they are at all competent. It took 9 years for them to turn things around and transform the company into a highly effective dev shop, at least by industry standards. I am proud I was part of that. Unfortunately the industry had some major problems in 2009-2010 and they ended up having to find a buyer who even more unfortunately does not appreciate or understand developers or software development.

We did integrations with many other gambling software companies and not a single organization was what I consider competent. I would love to see a new company with serious technical chops break into the world of online gambling and school the crusty behemoths. Hard to do however because of the network effects and importance of reputation and deal making.


You could buy a slower Xeon for around the same price if you really needed more than 64 gigs of memory.

http://ark.intel.com/products/75269/Intel-Xeon-Processor-E5-...

And it supports ECC.


The Xeon isn't overclockable, which is a big part of the niche this processor sits in.

If you read my post again, I'm not saying that 64gb is too little right now. It's probably the right match for the processor for most workloads, today. 32gb would seem weak with 8c/16t (I have that much in my 4770 system), and 128gb could be excessive.

But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing.

(Barring an Intel microcode revision, as is being speculated by the sibling commenters. But I'm not holding my breath, as Intel Ark is pretty definitive.)


Idk... I'm struggling to see why an average user in the overclocking/high-end PC market would run into the 64 gig limit, assuming the high-end market has a relatively short part lifetime. I mean, if you're in it for video editing then the sky is the limit, but for an average user? A user could RAM-cache 4 hard drives with a 4GB buffer each, power up the entire Adobe suite including Illustrator and Photoshop, start a browser session with 100 tabs and 10 video streams, plus a torrent client, email client, backup client, VPN, a couple of modest FTP and web servers, a transcoding media streaming server AND Crysis 3, and still likely have 10-20 gigs to play with. I think if you need much more than that running concurrently you should probably be starting to think about server hardware.
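Roughly the arithmetic I have in mind, with every per-app figure being a guess on my part just to show the tally (quick Python sketch):

    # Back-of-envelope tally; all figures are guesses, not measurements (GB).
    workload = {
        "RAM cache for 4 drives (4 GB buffer each)": 16,
        "Adobe suite (Illustrator, Photoshop, ...)": 8,
        "Browser: 100 tabs + 10 video streams": 10,
        "Torrent, email, backup, VPN clients": 4,
        "A couple of modest FTP and web servers": 2,
        "Transcoding media streaming server": 4,
        "Crysis 3": 6,
        "OS and everything else": 4,
    }
    used = sum(workload.values())
    print(f"Estimated use: {used} GB; headroom on 64 GB: {64 - used} GB")
    # -> Estimated use: 54 GB; headroom on 64 GB: 10 GB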

If you think 64gb will be an easy limit for an average user to hit in the near future I would love to hear your envisioned use case.


I think it's going to start becoming reasonable to package up applications in VMs and distribute the VMs as "appliances" to run, instead of installing software directly in the OS. I think this is going to start happening regularly in the consumer space sooner rather than later (and already has in some cases, like with XP Mode). This is pretty much the modus operandi in the service space today.

There's lots of really good reasons to do this (sandboxing, ease of installation, compatibility, snapshots/state saving, etc.) and VM tech at the consumer level is good enough for most applications. Doing so also enables you to distribute the same application for different host architectures relatively easily (swap out the virtualization core with an emulation core).

VM technology basically will allow consumer software vendors to start treating your computer like a set-spec videogame console instead of worrying about millions or billions of possible complications from how your computer is set up. Once VMs in the consumer space get good enough to really run high-end games, imagine everybody just writes to some Valve defined Linux spec that just happens to match some Steam Box, but you can install the VM for that game on your Mac or Windows or whatever and get to gaming.

If this happens, VMs will chew through RAM faster than just about anything out there.

So instead of installing and running Adobe Suite, you start up the Adobe Suite VM and boom, 8GB of your RAM vaporizes. Fire up your web browser VM and boom, there goes another 4GB. Your e-mail client annihilates 4GB more and now we've eaten up 16GB of RAM to run a handful of applications. Open up an MS-Office component and there goes another 8-16GB. Run a non-virtualized legacy app? Why those all just get sandboxed into an automatic "old shit" VM so the virii keep out.

This isn't inconceivable and I wouldn't be at all surprised if this wasn't already on the drawing boards somewhere.
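
To picture the "set-spec" part, here is a minimal sketch of what launching such an appliance could look like; I'm assuming plain QEMU/KVM as the hypervisor, and the image name and resource numbers are invented:

    import subprocess

    # Hypothetical packaged appliance image; specs are fixed, like a game console's.
    APPLIANCE_IMAGE = "adobe-suite-appliance.qcow2"

    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",                                   # hardware virtualization
        "-m", "8192",                                    # the appliance always gets 8 GB
        "-smp", "4",                                     # and 4 vCPUs
        "-drive", f"file={APPLIANCE_IMAGE},format=qcow2",
    ], check=True)

The vendor would test against exactly that spec and nothing else, which is the console-like guarantee I mean.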


Containerization could offer close to the same level of isolation as VMs without the insane memory bloat. Plus, VMs might be able to share common memory pages if it becomes necessary.
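
For the page-sharing part, Linux already has a mechanism: KSM (kernel samepage merging), which KVM hosts use to dedupe identical guest pages. A rough sketch of turning it on and reading the savings, assuming a Linux host and root access:

    from pathlib import Path

    KSM = Path("/sys/kernel/mm/ksm")  # standard KSM sysfs interface

    # Start the KSM scanner (needs root).
    (KSM / "run").write_text("1")

    # pages_sharing counts extra page references deduplicated onto shared pages,
    # so it approximates the savings; assumes 4 KiB pages.
    pages_sharing = int((KSM / "pages_sharing").read_text())
    print(f"~{pages_sharing * 4096 / 1024**2:.1f} MB saved by page sharing")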


Funny, this occurred to me more than ten years ago (as a result of seeing Knoppix, actually) but it still hasn't come to pass. Given the increasing importance of mobile, I doubt many users will sacrifice battery life or laptop comfort for the dubious benefit of having their applications partitioned into VMs.

Using VMs for apps does make sense for some pro apps, especially those with idiotic system requirements and/or copy protection. And obviously for testing.


I can see it being spun pretty hard as an anti-virus initiative at some point, or a "guarantee your software investment into the future" kind of thing.

Nobody (consumers, that is) really cares that it makes things easier for app developers, or about most of the other benefits, but consumers can be scared into all kinds of weirdness.

Bonus for PC makers, it would give a huge boost to kick off the upgrade cycle again for home PCs. More cores, more RAM, more disk space needed for all these dozens of VMs (each with their own multi-GB OS and software install).

Heck, I know of at least half a dozen people who do a variant of this right now in order to run a single Windows only application on their Macs.


If this gets popular, I can see them stripping the OS and other cruft down so that the application almost runs on bare (virtual) metal. A complete desktop OS with user software and drivers for all the unused hardware sounds unlikely.


This is basically what OSv is. It's a stripped-down virtualization environment meant to run only a single application on bare (virtual) metal.


Proof of concept viruses are already out for this architecture, so it just becomes a bigger management headache.


The primary reason you are correct about this assumption is that the going trend is to package up applications and run them as SaaS. Those 'appliances' you are talking about will be web applications running on a much more decentralized hosting model, occasionally hosted on the user's computer and more frequently on a neighborhood-wide deployment. This newer model of hosting will likely resemble true cloud compute more than what we consider it today: 8-9 data centers running 80% of the public cloud in an offering called AWS.


>I think it's going to start becoming reasonable to package up applications in VMs and distributing the VMs "appliances" to run instead of installing software directly in the OS.

Full VMs for regular desktop apps? I don't think so. We already have sandboxes.

And in any case, this won't happen within the lifespan of this processor being relevant (say, 5 years), so it can't be an issue that necessitates more than 64GB.


Isn't the OP referring to what happens after the 'lifespan' of this processor?

I'm still happily running a Mac Pro I maxed out in 2008 and expect a couple more years out of it at the least.

It would be nice if this kind of machine could last a similar 6-8 years, instead of entering (and I think that was the OP's point) 'engineered' obsolescence in 4-5 years.


>Isn't the OP referring to what happens after the 'lifespan' of this processor?

No, he's referring to what will happen in "2 years". I quote: "But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing".

And there's just no way that normal users will run their desktop apps in VMs in two years -- which is what was said as a justification for needing > 64gb.


I feel, though, that consumer hardware is going to be a lot more standard from now on. The wild-west hardware age may be coming to an end, so VMs-everywhere would be trying to solve a problem of the past rather than being a solution for the future.


This was what was revolutionary about Quake III, no? It ran inside some id VM...


He is talking about using VMs for real architectures (OS + apps).


Um, no.


Your use case would not use more than 64gb of memory, no; but it would also run on the CPU side just fine with a $339 4c/8t 4770k. A user with that workload wouldn't need 128gb, but they'd also not need an 8 core CPU.

Put it this way: a $120, dual core Core i3-2100 released in 2011 has support for 32gb of ram. But a $1000 eight core processor, released more than three years later, for nearly ten times the price, supports just twice as much memory.

I believe this is imbalanced. And expecting to be able to upgrade a workstation tomorrow, when purchased today for likely well over $2000, is not unreasonable.
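
As straight arithmetic, using the list prices and limits quoted above:

    parts = {
        "Core i3-2100 (2011, ~$120)":       {"price": 120,  "cores": 2, "max_ram_gb": 32},
        "8c/16t HEDT part (2014, ~$1000)":  {"price": 1000, "cores": 8, "max_ram_gb": 64},
    }
    for name, p in parts.items():
        print(f"{name}: {p['max_ram_gb'] / p['price']:.2f} GB of addressable RAM per dollar, "
              f"{p['max_ram_gb'] // p['cores']} GB per core")
    # i3: 0.27 GB/$ and 16 GB/core; the new part: 0.06 GB/$ and 8 GB/core.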


I doubt many people pair an i3 with 32gb RAM.


Adobe products (looking at you After Effects) consume way more memory than you are giving them credit for. 64GB is not enough memory to support the computation this part is capable of. Next year, this chip will support at least 128GB.


Many people today still use 8GB, pro users might use up to 32GB, but it's not like RAM usage has been rising rapidly in the last 4-5 years. In 2009, 8GB was pretty standard for high-end desktops, like 16GB is today. This is a desktop CPU, mind you; workstation and server CPUs obviously support much more RAM.


If someone needs more than 64 gigabytes, buying a new processor should not be a huge deal.


No overclocking support though, which is a huge deal if you're interested in top-end performance right now for cheap. If it's anything like the 6-cores from the previous generation, it can go up to 4.5GHz reliably and possibly higher. Having only a 3.4GHz turbo also leaves you with pretty poor single-threaded performance in general.


I agree there is a lot of fraud and greed in the cryptocoin community, which makes it risky for people new to it or the naive. For people who have done some reading or hung around for a while, it's usually pretty easy to spot most scams. In cases like Monero it was probably something the developers missed. The important thing to note is that the community fixed that oversight within a month or two.


Premining and instamining (where they just start the coin with x coins in the devs' wallet) are fairly common practices among new cryptocoins. They are usually frowned upon by the community, but not always, as the coin developers will sometimes keep 1-2% for use in promoting and developing the coin, which most people consider fair. An 80% premine like Bytecoin's is ridiculous.

It is very easy to tell if a coin has been premined by checking the state of the block chain for the number of outstanding coins. The coins with large premines are usually outed within an hour of their release.
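
As a sketch of what that check looks like in practice: ask a node for its UTXO-set totals and compare against what the published emission schedule says should exist at that height. I'm assuming a Bitcoin-Core-compatible JSON-RPC node and a Bitcoin-style halving schedule purely for illustration; every coin publishes its own parameters.

    import requests

    RPC_URL = "http://127.0.0.1:8332"       # local node; credentials are placeholders
    RPC_AUTH = ("rpcuser", "rpcpassword")

    def rpc(method, params=None):
        payload = {"jsonrpc": "1.0", "id": "premine-check",
                   "method": method, "params": params or []}
        return requests.post(RPC_URL, json=payload, auth=RPC_AUTH).json()["result"]

    def expected_supply(height, initial_reward=50.0, halving_interval=210_000):
        """Coins honest mining should have produced by `height` (Bitcoin-style schedule)."""
        total, reward = 0.0, initial_reward
        while height > 0 and reward > 1e-8:
            blocks = min(height, halving_interval)
            total += blocks * reward
            height -= blocks
            reward /= 2
        return total

    info = rpc("gettxoutsetinfo")            # actual coins in existence per the chain
    actual, height = info["total_amount"], info["height"]
    expected = expected_supply(height)
    if actual > expected * 1.01:
        print(f"Suspicious: {actual:,.0f} coins exist, ~{expected:,.0f} expected at height {height}")
    else:
        print(f"Supply is consistent with the emission schedule ({actual:,.0f} coins)")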


I realize that, which is why I didn't believe the Bytecoin claims to have been used for years. (That and I spend a lot of time on Tor so I was skeptical I'd've never heard of it if there really was an active community using it.) But that didn't explain where the huge apparently-PoW-intensive blockchain came from.


Love the story, reminds me of the early days of Primecoin where I had my first exposure to cryptocurrency code. I spent the first two months of it fighting to stay ahead of the curve as well and had a blast. Not nearly as successfully as Dave however and my wife wasn't too happy about me spending every waking hour when home from my full time job on it.


It looks like they may have added some growing, possibly linear, function to the unemployment rate instead of a constant, starting in 2009. All the better to push their fear-mongering agenda/sales tactic, I guess.


She doesn't mention what methodology and practices she used to reach her conclusions. Did she study the same type of bugs in the same type of environment and climate in an area that was not close to Chernobyl or a nuclear plant? Did she take steps to avoid confirmation bias, such as double blind sampling? How did she gather and analyse her statistics? Without reasonable explanations for those and more her data is not sound.
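
For example, the bare minimum would be a like-for-like comparison against a control site far from any plant, something along these lines (all counts invented purely to show the method):

    from scipy.stats import chi2_contingency

    #                 [deformed, normal]  -- invented illustrative counts
    near_chernobyl = [40, 960]    # 1000 insects collected near the exclusion zone
    control_site   = [22, 978]    # 1000 insects from a matched-climate reference area

    chi2, p, dof, expected = chi2_contingency([near_chernobyl, control_site])
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    # A small p-value suggests the difference isn't just sampling noise; it still
    # says nothing about blinding or collection bias, which need their own controls.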


The issue is that by providing housing subsidies more money would be put into the demand side of the rental market. This would most likely cause rents to rise as there is now more money to spend.

In this situation the only way out that makes sense to me is to add more supply by removing some artificial barriers to building.
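
A toy supply-and-demand model of that mechanism, with every number invented, just to show the direction of the effect:

    # Linear demand: quantity = 100 - (rent - subsidy); the subsidy shifts willingness to pay up.
    # Linear supply: quantity = slope * rent; a small slope = tight building restrictions.
    def equilibrium_rent(subsidy=0.0, supply_slope=0.5):
        # Solve 100 - (rent - subsidy) = supply_slope * rent for rent:
        return (100 + subsidy) / (1 + supply_slope)

    print(f"no subsidy, restricted supply:   {equilibrium_rent():.0f}")
    print(f"$20 subsidy, restricted supply:  {equilibrium_rent(subsidy=20):.0f}")
    print(f"$20 subsidy, looser building:    {equilibrium_rent(subsidy=20, supply_slope=2.0):.0f}")
    # -> 67, 80, 40: the subsidy bids rents up when supply can't respond,
    #    while added supply elasticity pulls them back down.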

