Hacker News | quirkot's comments

>> Building 29 separate settings with confusing and overlapping effects is less work than making a single setting of: [Local Only]?

Yes, absolutely. 29 separate overlapping settings likely match up precisely to arguments in the various APIs being used. On the other hand, what does local only even mean? No wifi? No hardwired connection? LAN only? Connection to the internet for system updates but not the marketplace? Something else? Each with a specified outcome that requires a different implementation depending on hardware version and needs to be tweaked every time dependencies change.


Having a separate setting for unconditionally disabling all wireless communication would be helpful. The other stuff you mention can be separate settings if it is useful to have them. (A setting to unconditionally disable all wired connections is less important, since you can just avoid connecting one.)

>>what does local only even mean?

Let's start with this: Design the architecture so the core system works fine locally. Features requiring an Internet connection live in separate modules, so they can be easily turned on/off, and are designed so they still work primarily locally.

E.g., store all current status locally, and have a separate module send it to the cloud on request, instead of going cloud-first.

E.g. 2, install updates by pulling all resources first and then applying the update, instead of requiring continuous communication.

Allow user control with options to completely shut off, whitelist, blacklist, etc.

Simple design decisions up front to make a software package that meets the user's local needs first, THEN allows controlled access to the internet, under the USERS' control, instead of designing every feature to contact your servers first and compromising both usability and control at every step.
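
A minimal sketch of what that split could look like (the module names and the sync endpoint here are hypothetical, purely to illustrate the local-first idea):

  import json
  from pathlib import Path

  class LocalStatusStore:
      """Core module: all state lives on disk; no network code at all."""
      def __init__(self, path="status.json"):
          self.path = Path(path)

      def save(self, status: dict) -> None:
          self.path.write_text(json.dumps(status))

      def load(self) -> dict:
          return json.loads(self.path.read_text()) if self.path.exists() else {}

  class CloudSync:
      """Optional module: only pushes data when explicitly enabled and asked."""
      def __init__(self, store: LocalStatusStore, endpoint: str, enabled: bool = False):
          self.store = store
          self.endpoint = endpoint  # hypothetical endpoint, user-configurable
          self.enabled = enabled    # off by default; the user opts in

      def push(self) -> None:
          if not self.enabled:
              return  # core system keeps working; nothing leaves the machine
          payload = self.store.load()
          # send payload to self.endpoint here (e.g. via urllib)

  # The device is fully functional with just the local store:
  store = LocalStatusStore()
  store.save({"temp": 21.5, "mode": "eco"})
  CloudSync(store, "https://example.invalid/api").push()  # no-op until enabled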


I think he's trying to differentiate himself from all the people who are not AI engineers.


Problem is, he isn't even remotely an AI engineer™ himself.

The entire article is a complete joke and is ragebait.

Flagged.


It took me a minute to parse that sentence also. They are saying that healthy tissue makes up < 100% of the body, but that microplastics can be found in a full 100% of the body (healthy and unhealthy). Therefore microplastic prevalence > healthy tissue prevalence. It's saying that there is no part of the body that isn't affected.


I think it's just saying there are more microplastics in unhealthy tissue than in healthy tissue. Your interpretation is technically possible, but phrasing it that way would be so misleading as to basically be lying.

The reason that more microplastics in unhealthy tissue doesn't necessarily mean microplastics cause the unhealthy tissue is that unhealthy tissue would be worse at clearing out substances, regardless of whether those substances cause the harm.


I'm not trying to throw shade, but the approach as described is very short-sighted.

> That tiny spark of joy reminded me how much I love to build

Well, yes. That's the fun part. The hard part is the "bug fixes on that community library site you built 8 years ago, which has 2k loyal users and will never make you a dime."

This approach, as liberating as it feels, only makes the "who fixes the toilets for the utopia" problem harder.


The "G" part of AGI implies it should be able to hit all the arbitrary yard sticks


That is stupid. It would be possible to be infinitely arbitrary to the point of “AGI” never being reachable by some yard sticks while still performing most viable labor.


>It would be possible to be infinitely arbitrary to the point of “AGI” never being reachable by some yard sticks while still performing most viable labor.

"Most viable labor" involves getting things from one place to another, and that's not even the hard part of it.

In any case, any sane definition of general AI would entail things that people can generally do.

Like driving.

>That is stupid

That's just, like, your opinion, man.


I had a friend who had his Tesla drive him from his driveway in another city 3+ hrs away to my driveway with no intervention.

I feel like everyone’s opinion on how self-driving is going is still rooted in 2018 or something and no one has updated.


Rest assured, your friend's driving was the same quality as the average drunk grandma's if they were exclusively using Tesla's "FSD" with no intervention for hours. It drives so piss-poorly that I have to intervene frequently even on the latest beta software. If I lived in a shoot-happy state like Texas, I'm sure a road rager would have put a bullet hole somewhere in my Tesla by now if I kept driving like that.

There's a difference between "I survived" and "I drive anywhere close to as well as the average American" - a low bar, and one that Tesla FSD still doesn't meet.


Yeah, and let's not forget that "I drive like a mildly blind idiot" is only a viable (literally) choice when everyone else doesn't do that and compensates for your idiocy.


>I had a friend who had his Tesla drive him from his driveway in another city 3+ hrs away to my driveway with no intervention.

I had anecdata that was data, and it said that full self-driving is wishful thinking.

We cool now?


Good luck on your journey. I think the world is going to surprise you, and you’d be better for opening your eyes a little wider.


You're absolutely right.

The world never ceases to surprise me with its stupidity.

Thanks for your contribution.


ok but have you asked your Tesla to write you a mobile app? AGI would be able to do both. (the self-driving thing is just an example of something AGI would be able to do but an LLM can't)


So why are your arbitrary yardsticks more valid than someone else's?

Probably the biggest problem, as others have stated, is that we can't really define intelligence more precisely than that it is something most humans have and all rocks don't. So how could any definition of AGI be any more precise?


Where did I say my yardsticks are better? I don't even think I set out any of mine.

I said having to satisfy "all" the yardsticks is stupid, because one could conceive of a truly infinite number of arbitrary yardsticks.


Is driving infinitely arbitrary?

It's one skill almost everyone on the planet can learn exceptionally easily - one which Waymo is on pace to master, but which a generalized LLM by itself is still very far from.


OP said all yardsticks and I said that was infinitely arbitrary… because it literally is infinitely arbitrary. You can conjure up an infinite number of yardsticks.

As far as driving itself goes as a yardstick, I just don't find it interesting, because we literally have Waymos orbiting major cities and Teslas driving on the roads already, right now.

If that’s the yardstick you want to use, go for it. It just doesn’t seem particularly smart to hang your hat on that one as your Final Boss.

It also doesn’t seem particularly useful for defining intelligence itself in an academic sort of way because even humans struggle to drive well in many scenarios.

But hey, if that's what you wanna use, don't let me stop you, sure, go for it. I have a feeling you'll need new goalposts relatively soon if you do, though.


Humans are the benchmark for AGI, and yet a lot of people are outright dumb:

> Said one park ranger, “There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”

[1] https://www.schneier.com/blog/archives/2006/08/security_is_a...


And using humans as 'the benchmark' is risky in itself, as it can leave us with blind spots on AI behavior. For example, we might find humans aren't as general as we expected, or run into the "we made the Terminator and it's exterminating mankind, but it's not AGI because it doesn't have feelings" issue.


Not to be rude, but... uh... did your stepmom steal your identity and use it for stuff? Minors are huge targets for that sort of thing because generally no one is checking a 10-year-old's credit.


10-year-olds cannot legally do a lot of things. Other things they can do, but the law gets weird. Not that you are wrong - kids are a target, but there are a lot of protections.

Though if the stepmom shares your name (not unlikely if OP is a girl with a common name), it isn't a surprise that they will mix you up.


Anyone with a Bloomberg terminal able to run the numbers on how much free cash the top X companies in the Fortune 500 generate? The amount of capital OpenAI is describing has to be a material % of all cash generated each year over the next 5 years.


Total FCF for the top 500 by market cap is $2.56T.


A rough but useful estimate is about $1tn. But you start to wonder if another $100bn of tokens is worth more than another $100bn of Amazon warehouses or Saudi Aramco refineries. Or if the demand for $100bn of tokens can even exist without something like another 10 nuclear reactors being constructed in the US.


I always wonder... if there was an AGI and its chipset gave the wrong answer, how would it ever know?


The neural networks we use today have really terrible numerical precision, and we tend to make it worse, not better, since having more neurons matters more than having more precision per weight. Human brains are also a mess, but somehow they work, and we are usually able to correct our own mistakes.
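
A back-of-the-envelope illustration of that tradeoff (the memory budget is an arbitrary assumption, not a figure from any particular model):

  # For a fixed memory budget, lower precision per weight buys more weights.
  budget_bytes = 16 * 1024**3  # assume a 16 GiB weight budget (illustrative)

  bytes_per_weight = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

  for fmt, size in bytes_per_weight.items():
      params = budget_bytes / size
      print(f"{fmt}: ~{params / 1e9:.0f}B parameters fit in the same memory")

  # fp32 fits ~4B parameters, int4 fits ~34B: 8x more weights at "worse"
  # precision, which in practice tends to win.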

Since by AGI we usually mean human-like, that system should be able to self-correct the same way we do.


How do humans know? Usually someone corrects someone else. We have repeatability in physics, or we wait 30 years and quash convictions, etc.


I'd presume it could reason around the wrong answer, at least to realize something was off. Current LLMs will sometimes hallucinate that this has happened when they're "thinking".


I'd never buy anything as overt as an advertisement in an AI tool. I just want to buy influence. Just coincidentally use my product as the example. Just suggest my preferred technology a few % more often than my competitors' when asked. I'd never want someone to observe me pulling the strings.


> In principle, good looks, oratory eloquence, a charming personality, well-connectedness, and personal wealth are not particularly useful to creating and executing government policy.

This ignores the fact that "getting people to agree to the policy" is, in fact, extremely important and highly dependent on charisma, eloquence, and the ability to identify and form influential connections. This position imagines human politics devoid of politics and humans.


You're conflating the creation and the execution, and overstating the role of salesmanship in the latter. Which is actually a huge part of the issue with contemporary politics. Instead of coming up with policy that a majority agrees on, there's quite an emphasis on finding the right Stepford Smiler to sell whatever those who have influential connections want. In what will likely become an evergreen case study, see the recent NYC mayoral primary (though, in this case, they could barely get Cuomo to smile).

Suffice it to say, I don't want my phone jockeys taking on engineering duties.

