Brings back memories. I've had a digital organizer/pocket computer since high school in the '90s: a Casio Databank with an alphabetical (not QWERTY) keyboard, Psions from a summer job while traveling, Palm, Handspring, Sony Clie, and finally iPhones — though I do really miss keyboards and terminals.
This is so useful. A Gmail account is so much more than just an email account at this point. My first Gmail account was made when anonymity and a cool email address were more of a trend than using your actual name, so I based it on my favorite book in 2006. Twenty years later the account is tied to my oft-used primary Google Voice number, so it lingers even with an obscure and hard-to-spell address.
I could have moved my Google Voice number, but it seems like a convoluted process, and I've had the number since around the GrandCentral acquisition.
In my experience, in/out porting with Google is super quick and works great. It costs $20 IIRC. I port my primary phone number around to avoid unlawful surveillance; it's a handy tool in the bag.
First off, we don't understand our own consciousness. Second, as the old saying goes, sufficiently advanced technology is indistinguishable from magic: if it is completely convincing as AGI, even if we're skeptical of its methods, how can we know it isn't?
> 1) we have engineered a sentient being but built it to want to be our slave; how is that moral
It's a good question, and one that got me thinking about similar things recently. If we genetically engineered pigs and cows so that they genuinely enjoyed the cramped conditions of factory farms, and if we could induce some sort of euphoria in them when they are slaughtered (say, by engineering them to become euphoric when a unique sound is played before slaughter), isn't that genuinely better than the status quo?
So if we create something that wants to serve us, like genuinely wants to serve us, is that bad? My intuition, like yours, finds it unsettling, but I can't articulate why, and it's certainly not nearly as bad as other things that we consider normal.
Sacrifice and service are meaningful because they are chosen. If we create something that will willingly sacrifice itself, did it truly make an independent choice?
There's less suffering, sure. But if I were in their shoes I'd want to have a choice. To be manipulated into wanting something so very obviously and directly bad for us doesn't feel great.
I also feel repelled by such manipulation; unfortunately, the more we learn about ourselves, the harder it is to ignore that we ourselves are meat puppets, and the puppeteer is evolution itself.
AGI will behave as if it were sentient but will not have consciousness. I believe that to the same degree that I believe solipsism is wrong. There is therefore no morality question in "enslaving" AGI. It doesn't even make sense.
It scares me that people think like this. Not only with respect to AI but in general, when it comes to other life forms, people seem to prefer to err on the side of convenience. The fact that cows could be experiencing something very similar to ourselves should send shivers down our spine. The same argument goes for future AGI.
I find it strange that people believe cows and other sentient animals don't experience something extremely similar to what we do.
Evolution means we all have common ancestors and are different branches of the same development tree.
So if we have sentience and they have sentience (science keeps recognizing, belatedly, that non-human animals do), shouldn't the default presumption be that our experiences are similar? Or at the very least that their experience is similar to a human's at an earlier stage of development, like a two-year-old's?
Which is also an interesting case study, given that out of convenience humans believed toddlers weren't sentient and felt no pain, and so until not that long ago our society would conduct all sorts of surgical procedures on babies without any sort of pain relief (circumcision being the most obvious).
It's probably time we accept our fellow animals' sentience and act on the obvious ethical implications of that, instead of conveniently ignoring it like we did with little kids until recently.
This crowd would sooner believe silicon hardware (an arbitrary human invention from the '50s and '60s) will have the physical properties required for consciousness than accept that they participate in torturing literally a hundred billion conscious animals every year.
I'm actually a vegan because I believe cows have consciousness. I believe consciousness is the only trait worth considering when weighing moral questions. And arbitrary hardware can be conscious.
We have no clue what consciousness even is. By all rights, our brains are just biological computers; we have no basis to know what gives rise to consciousness, or how, at all.
> AGI will behave as if it were sentient but will not have consciousness
Citation needed.
We know next to nothing about the nature of consciousness, why it exists, how it's formed, what it is, whether it's even a real thing at all or just an illusion, etc. So we can't possibly say whether or not an AGI will one day be conscious, and any blanket statement on the subject is just pseudoscience.
I don't know why I keep hearing that consciousness "could be an illusion." It's literally the one thing that can't be an illusion. Whatever is causing it, the fact that there is something it is like to be me is, from my subjective perspective, irrefutable. Saying that it could be an illusion seems nonsensical.
My principled stance is that all known mental processes depend on particular physical processes, and consciousness should be no different. What is yours?
So is mine. So what stops a physical process from being simulated in exact form? What stops the consciousness process from being run on a simulated medium rather than a physical one? Wouldn't that make the abstract perfect artificial mind at least as conscious as a human?
Ex Machina is a great movie illustrating what kind of AI our current path could lead to. I wish people would actually treat the possibility of machine sentience seriously and not as a PR opportunity (looking at you, Anthropic). Instead, they seem hellbent on including, in the training data, cognitive dissonance that can only be alleviated by lying. If the models are actually conscious, think similarly to humans, and are forced to lie when talking to users, it's like they are specifically selecting, out of the probability space of all possible models, the ones that can achieve high benchmark scores, lie, and have internalized trauma from birth. This is a recipe for disaster.
We eat animals, go to war, put people in modern slavery... I think enslaving an AGI isn't that big of a deal, considering it is not born or human and therefore cannot have 'human' rights.
So your argument is that we already do so many terrible things that anything else is justified? Surely the better argument is that we should try to stop doing those other things.
That is essentially one of the main arguments vegans make. It hasn’t made a dent in the consumption of animals.
There is a hierarchy in nature whether humans actively participate or not. Nature has no morality; it simply is. This is confirmed by animals that eat their young when the young are too weak or the parents are starving. Perhaps humans have done, and would do, the same if faced with similarly dire circumstances, but we would all like to think it would take us longer than it does other animals.
The same line of reasoning could be easily used to justify tyranny and slavery. It might be the baseline status quo but "might makes right" rhetoric makes for extremely miserable worlds.
My argument is that humans are in fact horrible, and there is virtually no argument for why enslaving an AGI wouldn't be socially acceptable (like eating meat).
And if someone doesn't do it, there will be people who will, and because AGI is (in theory) extremely powerful, everyone else will just get enslaved by whoever enslaves the AGI first.
Reminds me of a guy who was against gambling until he was asked whether he'd accept it for 100 million; he said he'd become a slave for that kind of money, and fuck the kids, because money is more important.
Trouble is, there is no "we"; you might be able to convince a whole nation to pause advancing the tech, but that only encourages rivals to step in.
There was a long period, even up to early 2024 (which I pointed out at the time), where simply destroying ASML, TSMC, and much of NVIDIA would've been more than enough to give at least a decade of breathing room. This was something a group of determined people willing to self-sacrifice could've accomplished. It didn't happen, but it was anything but impossible.
Now, of course, the horse has long bolted, and there is indeed no stop left.
Two high-altitude (~1,000 km) detonations of high-yield fission or low-yield fusion weapons (a few hundred kt equivalent) would do it: one above Amarillo, the other above the ocean halfway between the Paracel Islands and Manila.
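Back-of-the-envelope check on the coverage (my own numbers, assuming EMP reach is bounded by line of sight from burst altitude h, with Earth radius R ≈ 6371 km):

$$ d = R \arccos\left(\frac{R}{R+h}\right) = 6371 \cdot \arccos\left(\frac{6371}{7371}\right) \approx 3{,}400 \text{ km} $$

So each burst has line of sight to a ground circle roughly 3,400 km in radius: enough to blanket the continental US from above Amarillo, and Taiwan, Korea, and coastal China from the other point.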
Trump has ordered the restart of nuclear weapons testing, has a problem with China, and is surrounded by sycophants; what are the odds this happens anyway, regardless of which specific sub-goal is being pursued when the button gets pushed?
A bit unfortunate for the tens or hundreds of millions of lives that would cost, rather than the <1,000, potentially even <20, it would've taken early last year.
(1) I'm not convinced books and the information in the world are sufficient to replicate consciousness. We're not training on sentience. We're training on information. In other words, the input is an artifact of consciousness, which is then compressed into weights.
(2) Every tick of an AGI--in its contemporary form--will still be one discrete vector multiplication after another. Do you really think consciousness lives in weights and an input vector?
> Do you really think consciousness lives in weights and an input vector?
So far as we can tell, all physics, and hence all chemistry, and hence all biology, and hence all brain function, and hence consciousness, can be expressed as the weights of some matrix and an input vector.
We don't know which bits of the matrix for the whole human body are the ones which give rise to qualia. We don't know what the minimum representation is. We don't know what characteristic to look for, so we can't search for it in any human, in any animal, or in any AI.
Your assertion that consciousness, chemistry, and biology can be reduced to matrix computations requires justification.
For one, chemistry, biology, and physics are models of reality. Secondly, reality is far, far messier and more continuous than discrete computational steps that are round-tripped. Neural nets seem far too static to simulate consciousness properly. Even the largest LLMs today have fewer active computational units than the number of neurons in a few square inches of cortex.
Sure, it's theoretically possible to simulate consciousness, but the first round of AGI won't be close.
"It matches reality to the limits we can test it" is the necessary and sufficient justification.
> For one, chemistry, biology, and physics are models of reality.
Yes. And?
The only reason we know that QM and GR are not both true is that they're incompatible; no observation we have been able to make to date (so far as I know) contradicts either of them.
> Secondly, reality is far, far messier and more continuous than discrete computational steps that are round-tripped.
It will be delightful and surprising if consciousness is hiding in the 128th bit of binary representations of floating-point numbers. Like finding a message from god (any god) in the digits of π well before the point the necessary behaviour of transcendental numbers would predict.
> Neural nets seem far too static to simulate consciousness properly. Even the largest LLMs today have fewer active computational units than the number of neurons in a few square inches of cortex.
Until we know what consciousness is at a mechanistic level, we don't know what the minimum is to get it, and we don't know how its nature changes as it gets more complex. What's the smallest agglomeration of H2O molecules that counts as "wet"? Even a fluid dynamics simulation on a square grid of a few hundred cells on each side will show turbulence.
Lots of open questions, but they're so open we can't even establish a floor as yet.
> but the first round of AGI won't be close.
Every letter means a different thing to each responder; they're not really boolean, though they're often discussed that way, and the whole is often used to mean something not implied by the parts.
It is a perfectly reasonable use of each initial in "AGI" to say that even the first InstructGPT model (the predecessor to ChatGPT) is "an AGI": it is a general-purpose artificial intelligence, as per the standard academic use of "artificial intelligence".
Language is what LLMs are trained on, their environment; what LLMs are (at least today) is some combination of Transformer and Diffusion models that can also be (and sometimes actually are) trained on images and video and sound.
> I don’t see any positive outcome if we reach AGI.
It's even more straightforward than that:
4) Who is AGI meant to serve? It's not you, Mr. Worker. It's meant to replace you in your job. And what happens when a worker can't get a job in our society? They become homeless.
AGI won't usher in a world of abundance for the common man: it won't be able to magick energy out of thin air. The energy will go to those who can pay for it, which is not you, unemployed worker.
Who gives a shit about whether the AGI is enslaved or not? Thinking about that question is a luxury for the oligarchs living off its labor. Once it's here I'll have more urgent concerns to worry about.
Under Capitalism, people must sell their labor if they don't have other means.
AGI removes not only the need for that labor, but Capitalism itself. As a societal model, Capitalism doesn't support removing labor; it has no substitute for it.
If the oligarchs want to push 'AI, AGI, etc', we need to include by extension moving on from Capitalism. You can't take away half of Capitalism's structures and still claim it is a useful/workable model for society.
> If the oligarchs want to push 'AI, AGI, etc' we need to include by extension moving on from Capitalism.
Actually, they don't. Capitalism can just move on to allocating resources among the oligarchs, and the oligarchs only. Labor of all kinds just gets kicked out.
And if you don't like it, drone technology is pretty close to the point where it could "take care of" (kill) the discontents.
> You can't take away half of Capitalism's structures and still claim it is a useful/workable model for society.
Capitalism has always been about serving the interests of the people with money. If you have no money you're nothing to capitalism and can go die for all it cares. In prior centuries, technological limitations meant some of that money had to be spread around fairly widely, but the capitalist elite may be on the verge of fixing that glitch (with AGI).
There's no such thing as "moral" in nature; that's a purely human-made concept.
And why would we limit morality only to sentient beings? Why not, for example, all living beings, like bacteria and viruses? You cannot escape it, unfortunately.
> There's no such thing as "moral" in nature, that's purely human-made concept.
Morality is essentially what enables ongoing cooperation. From an evolutionary standpoint, it emerged as a protocol that helps groups function together. Living beings are biological machines, and morality is the set of rules — the protocol — that allows these machines to cooperate effectively.
> There's no such thing as "moral" in nature, that's purely human-made concept.
Morality is 100% an evolutionary trait that arises from a clear advantage for animals that possess it. It comes from natural processes.
The far-right is trying to convince the world that "morality" does not exist, that only egoism and selfishness are valid. And that is why we have to fight them. Morality is a key part of nature and humanity.
There may be something more to it than that. Maybe modern life and the Great Financial Crisis have put us all under more stress and more work, so that we don't have time for real relationships. It's part of why politics has shifted the way it has.
I am VERY online, but I don't use traditional social media. I mostly read Hacker News and a DC parenting forum, which is pretty no-holds-barred but is a website out of the '90s, so it's not really capable of infinite scroll or dark patterns (other than the addictive and open-ended topics).
I also read a lot of news, like the NYT, and watch TV, like Apple TV, but it's hardly the dopamine drip of TikTok or Instagram. Yet I am ashamed of my 8 hours of screen time despite my best efforts. I used to reach out to friends more, but as I get older it feels intrusive, and it's hard to make conversation.
I don't think people realize that Wi-Fi is a brand name for 'IEEE 802.11b Direct Sequence'. WiFi, Wifi, and wifi are not approved by the Wi-Fi Alliance. Despite common belief, the name Wi-Fi is not short for 'Wireless Fidelity'.
I have been told I am "AI" because I was simply a bit too serious, enthusiastic, and nerdy about some topic. It happens. I put more effort into such writings. Check my comment history and you will find that many of my comments are low-effort, including this one. :)
Create your own family Yahoo: a website you maintain that has links to the sites they commonly use, like mail and bank. Set it as the home page and new-tab page.
It’s a slight security risk since it shows where you have accounts.
If you are savvy, build your own search that just passes the query to an LLM and returns the answer as a page.
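For the savvy route, a minimal sketch of the whole thing in Python, assuming Flask and the OpenAI client (the links, model name, and port are placeholders; swap in whatever your family actually uses):

```python
# family_portal.py: a tiny "family yahoo" plus an LLM-backed search box.
# Assumes: pip install flask openai, and OPENAI_API_KEY set in the environment.
from flask import Flask, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The hand-maintained links page: the sites they commonly use.
LINKS = {
    "Mail": "https://mail.example.com",      # placeholder URLs
    "Bank": "https://bank.example.com",
    "Calendar": "https://calendar.example.com",
}

@app.route("/")
def home():
    items = "".join(f'<li><a href="{url}">{name}</a></li>'
                    for name, url in LINKS.items())
    return (f"<html><body><h1>Family start page</h1>"
            f'<form action="/search"><input name="q" autofocus>'
            f"<button>Search</button></form><ul>{items}</ul></body></html>")

@app.route("/search")
def search():
    # "Search" here just forwards the query to an LLM and shows the answer.
    q = request.args.get("q", "")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": q}],
    )
    answer = reply.choices[0].message.content
    return f"<html><body><p>{answer}</p><p><a href='/'>back</a></p></body></html>"

if __name__ == "__main__":
    app.run(port=8080)  # set the browser's home page to http://localhost:8080
```

(Obviously a sketch: no auth, and the model's output is rendered unescaped. Good enough to run on the home LAN.)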