The 2 smaller chiplets are the core complexes; the memory controller is on the IO die (the other, bigger one) that these 2 CCXs share. So there is still one single memory controller, and hence, not NUMA.
Imagine if human beings could be vulnerable to such attacks. Someone sends you a video link, you watch it, you see weird shapes appearing and disappearing for a few minutes, then the next thing you know, you wake up in a bathtub full of ice-cubes with one of your kidneys stolen.
But if you look at the launch prices, the 2700 was $300 and the 2700X was $330. I think the difference is so small that it drives people who don't want to manually overclock to buy the X version and skip the hassle.
Not sure about VC++, but in gcc you can use -march=native to let the compiler build the code with all the instruction sets available on your CPU. I think there is a VC++ equivalent.
As for an already-compiled binary, depending on how it was compiled it may or may not work on a different CPU. Also, the compiler doesn't do the runtime checks.
I worked on both GAs and SA, and I think it really depends. There are some effective evolution strategies that can speed up the evolution; combine that with a parallel implementation and a GA can be really fast. But yeah, it can take time to figure out a good evolution strategy, and SA can solve some problems just fine.
Yep, I recently worked on an engineering project, where GAs were used to evolve new designs for large steel structures, with the aim of reducing weight (and, ergo, cost).
There were a lot of constraints, and several applications were used at different points (e.g. specialised 3D CAD) - a single generation took around 1 hour, so we had to let it run for days at a time on a cluster to be useful.
I wasn't in the AI team (I was the architect for the cloud infrastructure and backend), but my understanding was it was pretty much a "textbook" implementation (I dabbled with GAs, basic neural networks and swarm optimisation several years ago).
I was actually kind of surprised, because I didn't realise people still used GAs, let alone such a standard implementation.
In the specific area I worked on, minutes are borderline tolerable, so I didn't think twice before posting. But now that you've said it, I totally see how it can go on for days.
I worked in oil and gas (Halliburton) for 12 years.
The amount of stuff we got in "for review", "for test", "preview", etc. was simply amazing. Even pre-production gear a lot of the time. I found a pair of Tesla cards just sitting in a box in an office I cleaned out one day... and I know we got a system with some Phi cards in it when they came out.
The most interesting thing I ran into was when cleaning out a facility after a move, we found a Dell Itanium-1 box that not only did Dell not want back, they wouldn't even admit to making it in the first place... It ended up going home with one of our devs...
Nice thing about being a sysadmin was that we would get video cards and such from our developers who had just upgraded to the latest and greatest - and the stuff they were throwing out was only one or two years old, so our own desktop workstations built with cast-off parts were pretty nice.
It wasn't really the sysadmins that got free stuff - it was department managers / tech leads, etc, that would get gear in for review to see if it fit with our workflow, processes, etc.
Us sysadmins just had to install/maintain it, and occasionally would "profit" when it was retired and the company/vendor didn't want it back.
Managed to build an entire multi-node NetApp cluster out of spare and retired parts one day when we were bored. Our NetApp rep said "I didn't see this, I don't know it's here, I don't know it exists, as far as I care it's a bunch of spare parts you just happened to put in a rack..." :D
Unfortunately, employment opportunities don't help in this case. They have already been offered jobs/internships but cannot legally start working (anywhere) because they have to wait for their work permit (OPT) to be approved.