The 8 Gen 3 also still uses the previous tile-based A7x GPU architecture, while newer chips use the "A8x family of GPUs based on the new Slice architecture".
I think you aren't understanding the meaning of the word bubble here. No one can deny the impact LLMs can have, but they still have limits.
And the term bubble is used here as an economic phenomenon. It's about the money that OpenAI is planning on spending, which they don't have. So much money is being poured in here, but most users won't pay the outrageous sums that will actually be needed for these LLMs to run; the break-even point looks so far off that you can't even think about actual profitability.
After the bubble bursts we will still have all the research done, the hardware left over, and smaller LLMs for people to use on-device.
the real innovation is that neural networks are generalized learning machines. LLMs are neural networks applied to human language. The implications of world models + LLMs will take them farther
The neural net was invented in the 1940s, and language models date back to the 1960s. It's 2025 and we're still using an 80-year-old architecture. Call me cynical, but I don't understand how we're going to avoid the physical limitations of GPUs and of data to train AIs on. We've pretty much exhausted the latter, and the former is going to hit sooner rather than later. We'll be left at that point with an approach that hasn't changed much since WW2, and whose only path forward runs straight into hard physical limits.
Even in 2002, my CS profs were talking about how GAI was a long way off, because we had been trying for decades to improve on neural nets and language models, and nothing better had been created despite some of the smartest people on the planet trying.
they didn't have the compute or the data to make use of NNs. but theoretically NNs made sense even back then, and many people thought they could give rise to intelligent machines. they were probably right, and it's a shame they didn't live to see what's happening right now
> they didn't have the compute or the data to make use of NNs
The compute and data are both limitations of NNs.
We've already gotten really close to the data limit (we aren't generating enough useful content as a species and the existing stuff has all been slurped up).
The compute side is restricted by standard laws of physics, just as we know we will hit them with CPUs. Eventually you simply cannot pack heat-generating components any closer together without them interfering with each other; we run into the physical laws of miniaturization.
No, GAI will require new architectures, which no one has come up with in nearly a century.
We have evidence that general intelligence can be produced by a bunch of biological neurons in the brain, and modern computers can process similar amounts of data, so it's a matter of figuring out how to wire it up, as it were.
Despite being their namesake, biological neurons operate quite differently from neural nets. I believe we have yet to successfully model the nervous system of the nematode, with its paltry 302 neurons.
dude who cares about data and compute limits. those can be solved with human ingenuity. the ambiguity of creating a generalized learning algorithm has been solved. a digital god has been summoned
So I know this because I did some research on why the Crunchyroll subs didn't work in picture-in-picture mode on Firefox. It turns out they use a subtitle format called .ass, which, as the article mentioned, comes from Aegisub.
This format is not natively supported by browsers, so they used to load a wasm bundle that parsed the file and rendered it onto a canvas overlaid on the video.
So they did put a lot of effort into making it work, not only in the labour of translation but also in supporting it technically.
Sadly it looks like they will be switching to more lackluster formats that don't support the advanced positioning features.
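For anyone curious what the format actually looks like: .ass is plain text with INI-like sections, and the subtitles themselves are timed `Dialogue:` lines under `[Events]`, with positioning/style overrides embedded as `{\...}` tags in the text. Here's a minimal sketch of extracting those events, purely illustrative; a real player would hand the file to a full renderer (the wasm bundle mentioned above is likely a libass-style renderer, though I'm assuming that) rather than parse it by hand:

```python
def parse_ass_events(text):
    """Return (start, end, text) tuples from the [Events] section of an .ass file."""
    events = []
    fields = []
    in_events = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("["):
            # Section header; only [Events] holds the dialogue lines.
            in_events = line.lower() == "[events]"
            continue
        if not in_events:
            continue
        if line.startswith("Format:"):
            # The Format line names the comma-separated fields that follow.
            fields = [f.strip() for f in line[len("Format:"):].split(",")]
        elif line.startswith("Dialogue:") and fields:
            # Text is the last field and may itself contain commas,
            # so split at most len(fields) - 1 times.
            values = line[len("Dialogue:"):].strip().split(",", len(fields) - 1)
            row = dict(zip(fields, values))
            events.append((row["Start"], row["End"], row["Text"]))
    return events

sample = """[Script Info]
Title: demo

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.00,0:00:04.00,Default,,0,0,0,,{\\an8}Sign at the top
Dialogue: 0,0:00:02.00,0:00:05.00,Default,,0,0,0,,Hello, world
"""

print(parse_ass_events(sample))
```

The `{\an8}` tag is what anchors a line to the top of the frame (e.g. for sign translations), which is exactly the kind of advanced positioning that simpler formats like WebVTT handle far less richly.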
I think that was a reasonable thing to bet on over a decade ago. But these days there are very few big-time movie stars who can carry a movie on their back. The last big movie star is probably Tom Cruise, and even he isn't a surefire path to success, judging by how the latest Mission: Impossible performed. The more reasonable thing to bet on these days is recognizable IP, which is what the big studios have been milking for years. That also gives more power to the studio, since people don't go to the theater for Robert Pattinson, they go for Batman.
From my experience looking at some free chapters on multiple sites, they don't serve high-resolution images for chapters either. Though I haven't checked any SFW manhwa sites for this. And most piracy sites just compress them with JPEG or WebP; they don't usually touch the resolution of the images.
There’s a stack of Blink-only Google specifications that people are pushing as “standard” that are anything but. Both Apple and Mozilla reject them on privacy or security grounds, then get criticised as being “behind” (or in Apple’s case, “deliberately holding back the web”). No, a specification created and implemented unilaterally by Google is not a standard.
Most standards start by being implemented by one browser. I would rather a standard exist based off a real implementation than one thrown out of an ivory tower.
The whole point of standards is that there are multiple implementations that support them. I do agree with your point that standards should be based on feedback from real implementations, but as the comment above said, this is just Google adding features and writing standards documents to shift the blame onto other browsers for not implementing them.
To be fair, where are the other browsers' proposals for giving apps the capabilities they need? Browsers should be shamed for not making an effort to support common use cases that apps need.
I am curious what you mean by launcher here. Firefox allows installing apps to your Android launcher home screen, and you can add any website as a shortcut to the home screen.