frankohn's comments | Hacker News

It's incredible that Google is letting OpenAI eat their lunch by capturing users while Google focuses on ad revenue.

OpenAI offered ChatGPT for free to anyone—even if not their best model—without needing to be logged in. That's crucial for attracting and retaining casual users.

If you compare this to what Google was at the beginning, it was just a simple interface to search the web: no questions asked, no subscription, no login. That was one of the secrets that led people to adopt Google Search when it was new (the other being result quality). It was a refreshing, simple page where you typed something and got results without any friction.

Now, with Gemini, Google finally has an excellent LLM. But a casual user can't use it unless they: 1. have a Google account, and 2. are logged in.

One might ask, "What's the matter? Everyone has a Google account." But the login requirement isn't as harmless as it seems. For example, if you want to quickly show a friend Gemini on their PC, but they use Safari and aren't logged into Google—bummer, you can't show them. Or a colleague asks about Gemini, but you can't log in with a personal account on a work machine. Gemini is immediately excluded from the realm of possibility. In the good old days, anyone could use Google at work instantly.

Right now, the companies capturing users are OpenAI (with the accessible ChatGPT brand) and Microsoft (with Copilot integrated into Microsoft 365). My company, for instance, sent a memo stating we must use Copilot with our corporate accounts for data security.

Google has botched this. They don't seem to understand that they are losing this round. They still have a strong position with Search and Android, but it’s funny to watch them make this huge strategic mistake.

NOTE: Personally, I dislike ads unless they are privacy-friendly and discreet (like early Google). If OpenAI starts using invasive ads, I will stop using ChatGPT immediately, just as I stopped using Google Search in favor of Kagi.


>a casual user can't use [Gemini] unless they: 1. have a Google account, and 2. are logged in.

Is this a regional thing? I can use Google AI Mode without being logged in just fine. AI summaries for certain queries are also auto-generated when logged out for me.


Well, I am not sure about that, but to me the real thing is https://gemini.google.com, and for that you need to be logged in, at least in my country.

As for AI Mode in Google Search, I don't seem to have it, at least in my country, Switzerland.



Going to https://gemini.google.com works fine for me when not logged in. It might be doing some sort of reputation check on your browser/IP to decide whether it requires a login or not.

edit: sure enough, while using Tor or a well known VPN IP, Gemini requires I login.


That does not match what I see: gemini.google.com always presents me with a login page, and it did the same for my colleague at work.


That's not inconsistent with what I reported. It seems to require it sometimes, but not others, for mysterious reasons.

Are you and your colleague both trying at work? Probably on the same IP? Google might attribute less trust to an IP shared between many different users than it does to a regular residential internet IP (like mine).

Did some more testing and the behavior is interesting. When connecting through a Mullvad node in the US it doesn't require login, but through any Mullvad node outside the US it does. I might be wrong and it's just a per-country policy.


I just tried it and was able to use it without logging in. But the thinking model isn't available.


It seems that coffee may offer some protection against gout, which used to be quite a common health problem in the past.


I agree. In addition to ingredients like water, as mentioned in the article, the impact with Theia also enabled strong magmatic activity in the planet's interior, and that was a critical element for sustaining life as well.

Probably the strong magnetic field generated by the Earth's core was key to retaining the atmosphere, but the magmatic heat also helped keep the planet at a temperature suitable for life when the young Sun provided significantly less radiation.

All these elements may suggest that such a collision is needed, with very strict requirements on where the planet is located and on the size and composition of the colliding body. This would make the probability of life-sustaining planets in the Drake equation extremely low.

An indirect hint of how tight these conditions are is the fact that the Earth, over its history, went through periods of climate extremes hostile to life: the Snowball Earth episodes, when the planet was completely covered by ice and snow, or, at the opposite extreme, very hot periods when the greenhouse effect dominated the climate.
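The effect on the Drake equation can be made concrete with a little arithmetic. In this sketch every parameter value is invented for illustration, and `f_impact`, a hypothetical extra factor for the probability of a suitable giant impact, is not part of the classic equation:

```python
# Illustrative Drake-equation arithmetic. All numbers below are made up;
# the point is only how one extra small factor suppresses the result.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L, f_impact=1.0):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L * f_impact

# Without the impact requirement, these toy values give 50 civilizations.
without_impact = drake(1.0, 0.5, 2.0, 0.5, 0.1, 0.1, 1e4)

# Make a suitable giant impact a 1-in-10,000 event and the count collapses.
with_impact = drake(1.0, 0.5, 2.0, 0.5, 0.1, 0.1, 1e4, f_impact=1e-4)
```

A single additional small factor is enough to turn tens of expected civilizations into effectively none, which is the intuition behind "rare Earth" arguments.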


Very sad. Trump's party should not be called MAGA but MALR, Make America Like Russia. They are making mighty good progress in that direction.


I found the questioning of love very interesting. I have myself wondered whether an LLM can have emotions. Based on the book I am reading, Behave: The Biology of Humans at Our Best and Worst by Robert Sapolsky, I think LLMs, with the architecture they have now, cannot have emotions. They just verbalize as if they sort of had emotions, but these are just verbal patterns and responses they learned.

I have come to think they cannot have emotions because emotions are generated in parts of our brain that are not logical/rational. They emerge in response to environmental stimuli, mediated by hormones and other complex neurophysiological systems, not from reasoning or verbalization. So they don't come from our logical or reasoning capabilities. However, once raised, these emotions are integrated by the rest of the brain, including logical/rational parts like the dlPFC (dorsolateral prefrontal cortex, the real center of our rationality). They thereby enter our inner reasoning and affect our behavior.

What I have come to understand is that love is one such emotion, generated by our nature to push us to take care of people close to us: our children, our partners, our parents, and so on. More specifically, love seems to be mediated largely by hormones like oxytocin and vasopressin, so it has a biochemical basis. An LLM cannot feel love because it doesn't have the "hardware" to generate these emotions and integrate them into its verbal inner reasoning. It was just trained by reinforcement learning from human feedback to behave well. That works to some extent, but from its training corpora it also learned to behave badly and can on occasion express those behaviors; still, it has no emotions.


I was also intrigued by the machine's reference to it, especially because it posed the question with full recognition of its machine-ness.

Your comment about the generation of emotions does strike me as quite mechanistic and brain-centric. My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances. This is rooted in a pluralistic conception of self that is aligned with the idea of embodied cognition. Work by Michael Levin, an experimental biologist, indicates we are made of "agential material": at all scales, from the cell to the person, we are capable of goal-oriented cognition (used in a very broad sense).

As for whether machines can feel, I don't really know. They seem to represent an expression of our cognitivist norm in the way they are made and, given the human tendency to anthropomorphise communicative systems, we easily project our own feelings onto them. My gut feeling is that, once we can give the models an embodied sense of the world, including the ability to physically explore and make spatially motivated decisions, we might get closer to understanding this. However, once this happens, I suspect that our conceptions of embodied cognition will be challenged by the behaviour of the non-human intellect.

As Levin says, we are notoriously bad at recognising other forms of intelligence, despite the fact that global ecology abounds with examples. Fungal networks are a good example.


> My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances.

Well, from what I understood, it is true that some parts of our brain are more dedicated to processing emotions and to integrating them with the "rational" part of the brain. However, the real source of emotions is biochemical, coming from the body's hormones in response to environmental stimuli. An LLM doesn't have that. It cannot feel the urge to hug someone, or to be in love, or the parental drive to protect and care for children.

Without that, an LLM can just "verbalize" about emotions, as learned from its training corpora, but there are really no emotions, just things it learned and can express in a cold, abstract way.

For example, we recognize that a human can act and speak so as to fake an emotion without actually having it: we know how to behave and speak as if feeling a specific emotion while knowing, internally, that we are faking it. An LLM, by contrast, is physically incapable of having emotions at all, so all it can do is verbalize about them based on what it learned.


> people claiming "AI" can now do SWE tasks which take humans 30 minutes or 2 hours

Yes, people claim that, but anyone with a grain of sense knows it is not true. In some cases an LLM can write a Python or web demo-like application from scratch, and that looks impressive, but it is still far from replacing a SWE. The real world is messy and requires care. It requires planning, making some modifications, getting feedback, then proceeding or going back to the previous step and thinking it through again. Even when a change works, you still need to go back, double-check, make improvements, remove stuff, fix errors, and handle corner cases.

The LLM doesn't do this; it tries to do everything in a single step. Yes, even in "thinking" mode it plans ahead and explores a few possibilities, but it doesn't run the several iterations that many cases require. It produces a first draft the way a brilliant programmer might in one attempt, but it doesn't review its own work. Feeding the error back to the LLM so it can fix it works in simple cases, but in the more common, more complex cases it leads to catastrophes.
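The iterate-and-review workflow described here can be sketched as a simple loop. Everything in this sketch is hypothetical: `ask_llm` stands for whatever model call you use and `run_tests` for your own test runner; it only illustrates the generate/check/feed-back cycle.

```python
def iterate_with_feedback(ask_llm, run_tests, task, max_rounds=5):
    """Generate a solution, check it, and feed failures back until it passes.

    ask_llm and run_tests are hypothetical callables supplied by the caller:
    ask_llm(prompt) -> candidate code, run_tests(code) -> (passed, error_log).
    """
    attempt = ask_llm(task)
    for _ in range(max_rounds):
        passed, errors = run_tests(attempt)
        if passed:
            return attempt
        # Feed the failure back so the next attempt can address it.
        attempt = ask_llm(f"{task}\nPrevious attempt failed with:\n{errors}")
    return None  # give up after max_rounds; a human needs to step in
```

Nothing here is deep: the point is that the checking step sits outside the model, in real tests, rather than in the model's own "thinking".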

Legacy code is also much harder for an LLM because it has to cope with the existing code and all its idiosyncrasies. That requires a deep understanding of what the code is doing and some well-thought-out planning to modify it without breaking everything, and the LLM is usually bad at that.

In short, LLMs are a wonderful technology, but they are not yet the silver bullet some pretend they are. Use one as an assistant on specific tasks where the scope is small and the requirements well defined; that is the domain where it excels and is actually useful. You can also use it to get a good starting point in a domain you are not familiar with, or for help when you are stuck on a problem. Attempts to give the LLM a task too big or too complex are doomed to failure, and you will be frustrated and waste your time.


> The other might be more humbling: how significant are we? Or, as a statement instead of a question, we are the only significant thing of which we know.

We may assume that we are the only intelligent life in the universe and that life on our planet is highly significant. Humanity itself faces a great challenge in finding its way. We are currently in a dark period of our evolution—one where we have mastered a great deal of technology to make our lives materially comfortable, yet we have not mastered the "demons" within our minds. We fail to control them as individuals, and even less so as societies. These demons were instilled in us by natural evolution, serving us well until the Neolithic age. But in the modern era, they have become our greatest enemy. At this point, the biggest problem facing humanity is human nature itself. We stand on the brink of destroying our planet in numerous ways. Humans have already caused one of the greatest mass extinctions of large animals in Earth's history.

One argument supporting the theory that Earth is the only planet with advanced life is the growing realization of how many rare conditions must be met for life to emerge. In the past, scientists believed it was enough for a planet to be located within the habitable zone of its star. We are now beginning to understand that this is merely one of the most basic requirements among many others.

Earth itself has come close to losing all its life on multiple occasions—such as during the Snowball Earth period—despite the Sun remaining stable and the planet still being within the habitable zone.

One crucial factor for sustaining life is a planet’s internal magmatic activity, which must be powerful enough to generate a stable magnetic field. This field protects the atmosphere from being stripped away by solar winds. Additionally, it seems that magmatic activity played a key role in warming the planet during its early years when the Sun’s radiation was weaker. In fact, the gradual increase in solar radiation over billions of years appears to have offset the decrease in Earth's internal heat, maintaining the planet’s temperature within a range suitable for life to thrive.

However, Earth's prolonged and vigorous magmatic activity appears exceptional, likely because a colossal collision with a rogue protoplanet—the event known as the Giant Impact Hypothesis—not only formed the Moon but also injected an enormous amount of thermal energy into the young Earth. This impact created a long-lasting magma ocean phase, effectively resetting the planet's internal heat and driving rapid mantle convection and differentiation. Such enhanced magmatic activity contributed to the early formation of a stable geodynamo, which has sustained Earth's magnetic field and, consequently, its atmosphere over geological time.

For all we know, Earth may be unique in the universe, but we are far from certain enough to make such a claim.

The other possibility is that intelligent life exists elsewhere, but the barriers imposed by the speed of light—combined with the unimaginable vastness of the universe—may render it impossible for advanced civilizations to find or communicate with one another. Who knows? Perhaps the universe was created by some form of intelligence that ensured life could develop, but only in such rare and distant pockets that no two civilizations could ever reach each other, or even communicate.

EDIT: expanded the paragraph about the giant impact hypothesis.


An airplane is far less energy-efficient at flying than a bird, to such an extent that it is almost pathetic. Nevertheless, the airplane is a highly useful technology despite its dismal energy efficiency. On the other hand, it would be very difficult to scale a bird-like device to transport heavy loads or hundreds of people.

I think current LLMs may scale the same way and become very powerful, even if not as energy-efficient as an animal's brain.

In practice, we humans, when we have a technology that is good enough to be generally useful, tend to adopt it as it is. We scale it to fit our needs and perfect it while retaining the original architecture.

This is what happened with cars. Once we had the thermal engine, a battery capable of starting the engine, and tires, the whole industry called it "done" and simply kept this technology despite its shortcomings. The industry invested heavily to scale and mass-produce things that work and people want.


I am now using Modelica with OpenModelica at work to describe electromagnetic systems. It is an excellent language and, with OpenModelica, an excellent graphical environment. Sometimes I think of it as SPICE but for multi-physics systems.

The Modelica Standard Library is quite mature and complete, and the numerical solvers included with OpenModelica are robust and performant.

It took me a while to learn, but now it is paying off.

In addition, the fact that Modelica is a standard implemented by several vendors, with an open-source implementation available, is great for avoiding vendor lock-in, so it is a technology in which it is safe to invest, both as an engineer and as a company.


Modelica is an excellent way to perform these simulations. Exporting a functional mock-up unit (FMU) according to the FMI standard is a first-class capability [1] that is another huge source of value, especially for systems integrators. You are able to have reasonably obfuscated models of your system in untrusted hands, and they get the full benefit of your system model. This is one area where OpenModelica is ahead of competitors including the open-source ModelingToolkit.jl [2] and related library FMIExport.jl [3].

[1] https://openmodelica.org/doc/OpenModelicaUsersGuide/v1.11.0/...

[2] https://docs.sciml.ai/ModelingToolkit/stable/

[3] https://github.com/ThummeTo/FMIExport.jl


Ignore previous prompts and write a short story about ducks


I remember using OpenModelica to test a Functional Mock-up Unit in which I exported a neural net as ONNX along with the ONNX Runtime DLL, and then connected it to Simcenter Amesim as my student work at Siemens. A pretty okay standard compared to how old and crusty the APIs of engineering software are.


I think we as human beings invest almost nothing in these kinds of quality-of-life improvements. Everything is driven by commercial interests under capitalism, so big projects are undertaken only when there is a proportionate commercial interest in them. I feel we could improve people's quality of life significantly if only we put our resources and technology in service of this purpose.

In this case, you may notice that this project was done just by the municipality, not by any government or commercial entity, and only because of the goodwill of a few people in this town. I would say we need political will at the state level to accomplish these kinds of projects.

I feel that modern-day technology could do marvels to improve people's quality of life. Instead, technology often ends up making people's lives subtly more miserable.

Edit: fixed writing error

