Hacker News | ucm_edge's comments

I set my kids up with Faraday.dev on their M2 Airs and it does just fine. They will have chatbots up for hours to help them with schoolwork or when playing Minecraft to give them ideas.

The big thing is just don’t lock the models into memory on lower RAM systems. That gets you into trouble when some of the unified RAM is needed for something else, and you can soft-lock the system. The heat is not an issue though in my experience.
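To make the idea concrete, here's a minimal sketch of the kind of check you'd want before loading a model without pinning it. The function name and the 1 GiB headroom figure are my own assumptions; on macOS you could get the available figure from a tool like vm_stat.

```python
# Sketch: decide whether it's safe to load a model into unified memory.
# The 1 GiB headroom is an arbitrary assumption, not a recommendation.
GIB = 1024 ** 3

def safe_to_load(model_bytes: int, available_bytes: int,
                 headroom_bytes: int = 1 * GIB) -> bool:
    """True only if loading the model still leaves some headroom free."""
    return available_bytes - model_bytes >= headroom_bytes

# A 4 GiB model on a machine with 8 GiB free is fine; a 7.5 GiB one is not.
print(safe_to_load(4 * GIB, 8 * GIB))  # → True
```

The key point from the comment survives: never mlock on a low-RAM machine, and leave the OS room to evict pages when something else needs the memory.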


Couple questions on the data here. The graphs show people who are distinctly of one political position.

But first, how distinct is distinct? Are we talking extreme activists or just the pretty committed?

Second, how much of the potential mate pool remains closer to the political center? Seems like 30 to 35% potentially, given the graph.

Because if distinct is not that extreme and you have a decently sized center pool, then your, say, slightly left of center males can marry the distinctly left females and have reasonable overlap in values. At least to the point that political views are not a negative and shared values in other areas can be the foundation of the relationship.

The lack of a centrist pool, or unwillingness of the distinct individuals to shop for mates in that pool, would signal a problem, but I don’t see the article building a case for either condition.


Yeah the Pro feels like a miss. I figured it would have the same specs as the Studio for its SoC but would come with a bunch of options for coprocessor cards akin to the Afterburner card. So a Studio you can kit out for additional specialized performance.

Maybe those cards will trickle out over time, but not having them ready at launch makes the Pro feel like an afterthought right now.


It might have been a good idea to put the M2 itself on a replaceable board. Other than adding some professional video out or capture cards, which you could probably just do via Thunderbolt, I'm just baffled by this product.


I would love if Apple used something like COM-HPC or even the new daughtercard approach Intel and Nvidia are using in their high end server CPU+GPU chips.


I hate the meetings where marketing and product spend 30 minutes talking themselves into that kind of pushy behavior on the grounds “our customers will want to be informed.” Or our CMO who likes to say “our paying customers like being marketed to” with total sincerity.

Not at Spotify; it happens in too many places.


On one hand people definitely will shoot themselves in the foot by oversharing on social media.

Another trend I've been seeing (speaking in a general sense; I have no knowledge of this particular case) is that people who have been laid off will pull a stunt like this to try to 'go viral' and get a job. I have no idea if it works or the wisdom of it, but I've seen about a half dozen people in my network who I know were laid off in a general and impersonal layoff post sob stories about being fired by an unreasonable and mean boss.

We also fired a guy who posted about being laid off from our company, despite our company not doing layoffs. Legal had to send him a little reminder about the terms he signed when we offered severance, and that the company feels falsely claiming it is doing layoffs is damaging to the company.

Overall I feel like I have become aware of more people trying to position their termination in ways that aren't truthful but they feel are more advantageous for generating interest from future employers.


> they feel are more advantageous for generating interest from future employers

I'd love to see data on this. Old advice guides against hiring such people, since they're liable to return the favor on you. But maybe the cost-benefit has changed.


Think about the positive public sentiment gained from turning a bad situation into a good one by coming to someone's aid in their time of need.

>gag


This is making me feel so out of touch as someone not actively using social media. Is this person actually right that doing this can _help_ their job search? I would absolutely refuse to hire anyone like this but I'm also out of touch so not sure if that's widespread opinion


I notice the article only has base power draw in it. I’d want to know max TDP before anything gets crowned. This AMD is 15 watt base, the M2 is about 20 watts.

When the M1 came out, Intel announced chips that would crush it, and crush it they did, while drawing significantly more power. Intel announced a 45 watt base TDP, which didn’t look too far off the M1 Max, but the 12th gen Intels ended up peaking at 115 watts for the 12900HK.

It basically came down to if you were looking for something portable but would be plugged in when in use, the Intel offering was superior. If you wanted battery life, the Apple product remained superior.

Competition is good and hopefully AMD remains power competitive, but given a 7800 runs around 88 watts in normal use conditions and has 120 watt max budget, I would expect that performance AMD is touting to come with higher power draw.


The 6850U had a peak of 35W (https://www.phoronix.com/review/amd-ryzen7-6850u/8). Around the same as the M2 Max apparently.


Sadly, at about half the Geekbench 5 score. M2 is single core 2061, multicore 15281. 6850U has single 1469 and multi 7365 scores. M2 is also ahead in Cinebench R23 and splits the single and multi in R20 with the 6850U.
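The "about half" claim is easy to check from the quoted numbers; a quick sanity calculation using only the scores given above:

```python
# Multicore ratio from the Geekbench 5 scores quoted in the comment.
m2_multi = 15281
r6850u_multi = 7365

ratio = m2_multi / r6850u_multi
print(round(ratio, 2))  # → 2.07, i.e. the 6850U lands at roughly half
```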

To go from that to 70% ahead on multicore would be amazing because it would force Intel and Apple to respond.

The cynic in me, having watched Intel and AMD fight for the desktop crown post Ryzen launch, though, is skeptical that the scores are some combo of pushing power limits to rather undesirable levels and extremely selective benchmarking (it's 70% better at this one thing).


Don’t forget they just announced the Z1 for Steam Deck-like machines (and Aerith for the Steam Deck). They’re certainly in the game for low TDP. AMD’s little GPUs are also much more capable at gaming because they get the decades of AMD experience and software in that space. The M1 can maybe run a few games.


I confess to not really paying attention to the Apple ecosystem. Does anyone know enough about these benchmarks and chips to really know what is going on here? Is this straight up a difference in conventional superscalar/multicore throughput and efficiency?

Or is there an aspect of Apple to non-Apple comparison here with the compiler, OS, and library ecosystem? I.e. do any of the benchmark calculations get executed off CPU in the GPU or other "matrix" coprocessing units in the Apple SoC? I am only interested in comparing CPU to CPU or GPU to GPU, and would want to exclude software-based differences like different compilers or support libraries.


The M1 has a lot of ALUs, an 8-wide instruction decoder, and a huge reorder buffer. It can decode and execute a lot of instructions on every clock and has plenty of space to execute them in an advantageous order. Ignoring power draw, Intel and AMD beat the Mx when they hit higher clock speeds (the Mx top out around 3GHz) and/or pack in more P-cores.
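As a rough illustration of the width-versus-clock trade-off: the 8-wide/3 GHz figures are from the comment above, while the 6-wide/5 GHz pairing is my own assumption for a recent x86 P-core.

```python
# Theoretical instruction-throughput ceiling: decode width x clock.
# Real-world IPC is far lower and workload-dependent; this only shows
# why a wide 3 GHz core can keep pace with a narrower 5 GHz one.
def peak_inst_per_sec(decode_width: int, ghz: float) -> float:
    return decode_width * ghz * 1e9

wide_low_clock = peak_inst_per_sec(8, 3.0)     # Mx-style ceiling: 24e9 inst/s
narrow_high_clock = peak_inst_per_sec(6, 5.0)  # assumed x86 P-core: 30e9 inst/s
```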


But AMD doesn't package everything in one big die.


Is that directly relevant to the consumer? The M1 architecture seems to be fairly sustainable. Intel, not so much.


It is, because I can buy a framework laptop w/ 64GB ram, 2 TB SSD and these latest Ryzen chips for half the cost of a macbook pro with similar addons.


That 64GB RAM isn’t also VRAM though. Not exactly a direct comparison.


My understanding is that the only reason dGPUs generally don’t use system RAM isn’t that they don’t have access, but that the supporting software doesn’t use it out of choice because of the speed hit (iGPUs, of course, do generally use system RAM as VRAM.)


> My understanding is that the only reason dGPUs generally don’t use system RAM isn’t that they don’t have access, but that the supporting software doesn’t use it out of choice because of the speed hit

It looks like one of the advantages of Apple's put-everything-on-the-package strategy is that they can have a very wide bus to RAM, which makes using system RAM for the GPU much more palatable.


Isn't the main difference that Apple is building something like 4-16 channel memory controllers in a mobile device while you normally don't get more than 2-4 channels even on a desktop? That's a lot of transistors and (potentially) power usage but if you can get on the latest node and have a market willing to pay for giant chips it lets you get impressive amounts of bandwidth.

I don't think you need the RAM to be on the same package for that, it just makes the timings easier.
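The arithmetic behind that is straightforward; a sketch using LPDDR5-6400 figures (the 512-bit bus width is the commonly reported figure for the Max-class parts, used here as an assumption, alongside a typical 128-bit dual-channel setup):

```python
# Peak memory bandwidth = bus width (in bytes per transfer) x transfer rate.
def bandwidth_gb_s(bus_bits: int, mega_transfers_s: int) -> float:
    return bus_bits / 8 * mega_transfers_s / 1000  # bytes/transfer * MT/s -> GB/s

wide_on_package = bandwidth_gb_s(512, 6400)      # ~409.6 GB/s
dual_channel_desktop = bandwidth_gb_s(128, 6400) # ~102.4 GB/s
```

Which is why "more channels" is the whole story here: same memory technology, roughly 4x the bandwidth.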


dGPUs do have access to system RAM. It goes over the PCIe bus and because of that is too slow for direct usage.
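For scale, a rough PCIe budget (the ~2 GB/s-per-lane figure for PCIe 4.0 after encoding overhead is an approximation):

```python
# Approximate PCIe 4.0 throughput: ~2 GB/s per lane post-encoding.
PCIE4_GB_PER_LANE = 2.0

def pcie_bandwidth_gb_s(lanes: int) -> float:
    return lanes * PCIE4_GB_PER_LANE

x16_slot = pcie_bandwidth_gb_s(16)  # ~32 GB/s, an order of magnitude below local VRAM
```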


It is in a Framework or any other low-power system that doesn't use a discrete GPU.


half? not likely


The "Apple tax" isn't really a thing for midrange specs any more, but they will absolutely charge you out the nose to max things out. The top-end macs are targeted at people who won't even notice if the laptop's price is doubled.


We notice… but grumble and pay it because we’ve justified it to ourselves. There are of course outliers with heaps of money that don’t care about the cost, but few businesses are so cavalier with cashflow as to not notice and check why someone is buying a $7k laptop or $25k desktop


They charge you out the nose even to do small upgrades. $200 for bumping the SSD from 256GB to 512GB and $200 to go to 16GB RAM. You can easily find brand name good PCIe Gen 4 2TB NVMe drives for less than $200 now.
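Put in price-per-gigabyte terms, using only the figures from the comment above:

```python
# Dollars per gigabyte for the quoted upgrades.
apple_ssd_bump = 200 / 256    # $200 for an extra 256 GB -> ~$0.78/GB
retail_2tb_nvme = 200 / 2000  # $200 for a 2 TB drive    -> $0.10/GB

print(round(apple_ssd_bump / retail_2tb_nvme, 1))  # → 7.8, nearly 8x the retail rate
```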


Yes, I am not exaggerating. The Framework came to around $1.8k and the Macbook Pro came to ~ $3.9k. The only compromise with the Framework is that it is a 13" screen and the Macbook pro is a 14". But that's because Apple only offers up to 24GB ram on the 13".


I think we can’t make claims about “fairly sustainable” until it’s been around for more than two releases.


It is relevant to the benchmarks which are used in ads


There's no way AMD is going to make up that much ground in such a short amount of time. You will end up being correct in that AMD will be faster in one or two cherry-picked benchmarks. I will add these benchmarks will have zero relevance to anything regarding actual CPU power, and AMD will be greatly lacking in real performance.


> There's no way AMD is going to make up that much ground

What facts do you base this assertion on? Do you have insider access to the benchmark results? Maybe wait for the benchmark results before coming to Apple's rescue. Remember that it wasn't so long ago that AMD schooled Intel so hard that their CEO fell off.


Benchmarks of the 7840U are out. You can see for yourself it doesn't beat the M2.


I can't seem to find them; all I can find are references to AMD's promise. Could you provide your source?



The AMD comes ahead in all benchmarks available on that site: https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-.... I'm not sure where it would fit in terms of perf-per-joule, but there seems to be a very limited amount of benchmarks available (e.g. there seems to be no benchmark for Blender for the AMD, which would be one of the more telling results). The jury is still out for this.


There's already Geekbench 5 scores of the 8 core 7840U up from devices like the Aokzoe A1 Pro with scores around 1900 in single core and 10K in multicore. The M2 scores mentioned above are for the 12 core M2 Max. The Aokzoe is a Steam Deck like handheld too so a larger chassis might allow for a little more out of the chip.


I clearly need to invest in a new NUC. I have some anemic Celeron build that supposedly draws 10-20 W, but the performance is laughably bad. If these "real" chips have such low max draws, they should be able to fit within my informal <20W idle envelope, but still provide performance when needed.


Check out these systems based on AMD laptop chips - https://www.servethehome.com/beelink-ser6-pro-7735hs-edition...

Idle at 6-7W, but a full-powered multi-core CPU when you need it. Plus USB4 and 32 or 64GB RAM.


Even max TDP doesn't correlate well to energy efficiency in normal usage. Like, this chip could never use more than 15W and still be less energy efficient, if lightly threaded workloads clock up to drain the full 15W.

Which a boost clock of 5.1GHz strongly suggests might happen.
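The point is simple energy-equals-power-times-time arithmetic. The numbers below are purely illustrative, not measurements of any real chip:

```python
# A lower-wattage chip that runs longer can burn MORE energy overall.
def energy_joules(watts: float, seconds: float) -> float:
    return watts * seconds

capped_15w = energy_joules(15, 100)  # finishes in 100 s -> 1500 J
burst_30w = energy_joules(30, 40)    # finishes in 40 s  -> 1200 J
print(capped_15w > burst_30w)  # → True: the "efficient" chip used more energy
```

This is the race-to-idle argument: a chip that bursts hard, finishes, and sleeps can beat a strictly capped one on total energy.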


For us, we did the main-only pre-commit thing and also a CI check that any MR marked as ready for review has to pass linting.

So you can go wild in your own branches and draft MRs but anything you want someone else to review has to pass.
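A minimal sketch of that gating rule (the function name is hypothetical; the real check would live in the CI config):

```python
# Draft MRs may skip lint; anything marked ready for review must pass.
def may_enter_review(is_draft: bool, lint_passed: bool) -> bool:
    if is_draft:
        return True       # go wild in drafts
    return lint_passed    # ready-for-review is gated on lint
```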


You don’t have to use a single bank for this. You can have numerous banks and sweep the money around to control exposure to any one bank. A single bank is not the norm for this kind of setup. Sweep accounts are a thing for this very reason.
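As an illustration of the idea, a toy greedy allocator: the $250,000 cap is the FDIC insurance limit per depositor per bank; everything else here (names, the greedy strategy) is my own simplification of what real sweep programs do.

```python
FDIC_LIMIT = 250_000  # FDIC insurance limit per depositor, per bank

def sweep_allocation(total, banks, cap=FDIC_LIMIT):
    """Greedy sweep: fill each bank up to `cap`; raise if funds remain."""
    alloc = {}
    remaining = total
    for bank in banks:
        if remaining == 0:
            break
        amount = min(remaining, cap)
        alloc[bank] = amount
        remaining -= amount
    if remaining > 0:
        raise ValueError("not enough banks to stay under the cap")
    return alloc

print(sweep_allocation(600_000, ["Bank A", "Bank B", "Bank C"]))
# → {'Bank A': 250000, 'Bank B': 250000, 'Bank C': 100000}
```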


Using sweep accounts is considered an investment (of client funds), which must be clearly stated in TOS. This also pulls tons of regulations for the company. There is also a higher chance that one of the sweep account investments goes badly than a bank collapses where you hold client funds.


Yeah my first thought was I am absolutely shocked that they used SVB exclusively until 9 months ago. Now they use JPM, but what if JPM has an issue? You would think a payroll company would integrate with at least 2-3 of the bulge bracket banks…


For long term holding, sure. For funds that move in and out in 48 hours? I doubt any bank wants money bouncing in and out like that. They probably pay SVB, and now JPM, to be allowed to do that.


At my company they still initially failed to make payroll. Eventually around 6 pm Pacific on the 15th the direct deposits started landing for most people. Although not everyone.

I think they did it via some kind of one-off process, since I didn’t get an email and the Rippling web app didn’t show me as paid until midday on the 16th. Normally those occur at the same time I see the direct deposit hit.

Very unhappy with Rippling here and will advocate my company change processors. They initially acted as if they would cut over to JPMorgan and be fully up on Monday, but as time goes on it becomes clear they had to do a lot more than just change the account they staged payroll through.

They also failed to have a secondary banking relationship in place, credit revolvers or other contingencies that let them handle this in house, etc. In 30 years of getting paid and working through times like 2008 when major disruption to banking and credit availability occurred, to me Rippling stands alone as the only company that failed to run payroll on time and it took a single point of failure to make them fall.


I mean, the bank failed on Friday. They managed to get payroll out, delayed a few days. It was a fast moving situation for everyone involved.

Being told that JPMorgan is taking over the deposits, and knowing your payroll payment automations are going to work, are different things. I imagine they had to test, and likely rewrite, them. Wiring $300M of paychecks as a “we’ll do it live” scenario seems unwise.

You’ve every right to be upset. But how long do you think it would take another payroll provider to recover after losing all funds due to their bank going under? 2-3 days seems pretty impressive.


If it's the 16th that's 4 business days. But I agree, going through all the trouble of switching payment processors because you hope if the bank they're using fails they'll get back up and running in less than 4 days is a horrible ROI.


Most payroll processors have relationships with multiple banks. They have emergency credit revolvers in place. They are resilient to a single point of failure. Rippling has shown it is not.


The bank died with payroll in flight. If they had alternates it would have been faster, but like Monday or Tuesday. 1-2 days instead of 4.

No payroll processor could have made payroll on time in this situation. If you overnight a letter from SF to NYC and the plane crashes on landing, the replacement letter is definitely going to be late.


Well, everyone supposedly has a plan until they get punched in the nose. I'm skeptical that any payroll provider would be fully resilient to their primary bank failing. They'd need to have a 100% reserve in a secondary account and have instant failover logic in place to update payment flows for all customers. Doesn't sound likely.


Used to work for a big payroll provider - eventually they bought their own bank.


Advocating that a company change payroll processors over this single incident seems like a drastic over-reaction.


I am advocating this because Rippling was not transparent in their communication and was too complacent to have a Plan B in place despite having 300 million in play. It is not the single incident; it is the exposure of poor practices and dishonesty during their disaster recovery.

To put it another way, I have been paid via direct deposit for over 30 years and multiple fiscal crises. There is only one company that ever failed to get full payroll out on time.


This isn’t necessarily a useful anecdote though. If your previous payment processors weren’t using a bank that failed during that time, then you getting paid on time isn’t a sign of redundancy; they were just fortunate!

It might be fair to say that Rippling failed by choosing the wrong bank, but that’s a very different argument than saying that Rippling didn’t have a good backup plan. If all you’re going by is that you’ve always gotten paid (vs say knowing who your payment processor banked with and what their Plan B in case of bank failure was), for all you know your previous payment processor had no backup plan either!


Yeah I think ADP has one account at the East Piscataway Chemical Savings and Loan. If that bank failed the same thing would have happened to ADP. Rippling definitely didn’t expose themselves to excessive risk for no reason other than their inexperience.


What you're advocating for could cause unintended consequences, a case of the scar tissue potentially being worse than the initial wound. Rippling does a lot more things than just process payroll. Unbundling it would probably be a full time undertaking for your company's HR team depending on how large you are, and they may end up with a benefit and payroll stack that works a lot more poorly.

With that said, I am sure a lot of executive teams are reconsidering usage of payroll processors who aren't ADP for the same reason a lot of folks have moved funds to JPM. Flight to security and all that. It's not an illogical reaction to have from your end.


Also the Scorpion wreck has been surveyed twice and imaging shows the nuclear weapons are still there in the wreck. Wreck is ~3k meters deep, so no one is going down there to get themselves a 1960s vintage nuke whose only value is the fissile material.

An entity capable of executing that recovery operation is capable of making their own, buying one from a less savory nation state, etc.


I would hope even small nation states have greater resources than James Cameron.


If you think about it, the actual disposable wealth of a lot of small nations is probably less than what Cameron himself could marshal and spend if he wanted to. A nation in the best case needs to get the buy in of its people and administrators in order to spend a large sum of money on an expensive project. The budget for that project has to compete with all of the other expenses and ambitions that the nation has.


The difference is that James Cameron's projects could rely on buying existing expertise from contractors. A small nation has to build that themselves in secret.


One would, but well he is James Cameron.


Any determined nation could take that.

In fact, I wouldn't be surprised if someone has already taken it and left a dummy weapon in its place.

Sure, deep sea exploration by humans is hard... But unmanned craft are mostly impervious to pressure.


They could be nice honeypot locations for the US to continuously monitor, to see who turns up with robotic submersibles. An early warning system for identifying actors with nuclear aspirations.


Why bother? You could get uranium much more easily from other sources.


It's already enriched and comes with a working design.

For a non-nuclear nation, that would be all they'd need to become a nuclear nation.

Then blow it up in a test explosion to show the world, and then they could work slowly on making another bomb while bluffing that they have lots ready to go.

