If 1% of indie games are solid, and all AAA games are solid, and there are 100 times more indie games than AAA games, then there would still be the same number of solid indies as there are solid AAA games. As it is, I think for every good AAA game, there are somewhere between 50 and 500 great indie games.
Finding them is slightly harder, but absolutely worth it.
In any case, complaining about how many games there are out there that are not your thing is a waste of time. Much better to define what you like and look for recommendations from people who like similar games. Who cares how many FPSs are released if you don't like FPSs? If you like RPGs, find RPG gamers and ask them what's good. Substitute any genre; there is no genre out there that's not getting more releases than you could possibly play.
That explains why it happens, but doesn't really help with the problem. The expectation I have, as a pretty naive user, is that what is in the .md file should be permanently in the context. It's good to understand why this is not the case, but it's unintuitive and can lead to frustration. It's bad UX, if you ask me.
I'm sure there are workarounds such as resetting the context, but the point is that good UX would mean such tricks are not needed.
I'm surprised this hasn't been automated yet, but I'm pretty naive to the space - the problem of "when?"/"how often?" seems like a fun one to chew on.
I think Gemini 3 pro (high) in Antigravity does something like that because I can keep asking for different changes in the same chat without needing to create a new session.
Any mass that it fires would have a starting velocity equal to that of the probe, and would need to be accelerated to an equal velocity in the opposite direction. It would be a smaller mass, so it would require less fuel than decelerating the whole probe; but it's still a hard problem.
Be careful with the word "just". It often makes something hard sound simple.
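For a sense of scale, the Tsiolkovsky rocket equation puts a number on "less fuel, but still a hard problem". A rough sketch follows; the exhaust velocities are illustrative assumptions, not a design:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1),
# so the required mass ratio is m0/m1 = exp(delta_v / v_e).
# Rough sketch: reaction mass needed to cancel a 0.1c cruise speed,
# for a few (increasingly optimistic) exhaust velocities.
delta_v = 0.1 * 299_792_458                  # ~3.0e7 m/s to shed on arrival

for name, v_e in [
    ("chemical rocket (~4.5 km/s)",        4.5e3),
    ("ion thruster (~50 km/s)",            5.0e4),
    ("hypothetical fusion (~3000 km/s)",   3.0e6),
]:
    # Print log10 of the ratio; the raw numbers overflow a float.
    log10_ratio = (delta_v / v_e) / math.log(10)
    print(f"{name}: initial/final mass ratio ~ 10^{log10_ratio:.1f}")
```

Even the most optimistic line needs a mass ratio around 10^4.3, i.e. tens of thousands of times the final mass in propellant; the other two are simply impossible.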
Not trying to oversimplify. But suppose 95% of the probe's mass was intended to be jettisoned ahead of it on arrival by an explosive charge, and would then serve as a reflector. That might give enough time for the probe to be captured by the star's gravity...?
It seems to me that building a recording device that can survive in space, that is very light, and that won't break apart after receiving the impact from an explosive charge strong enough to decelerate it from the speeds that would take it to Alpha Centauri is... maybe impossible.
We're talking about 4.4 light years. To reach it in about 44 years, that's 1/10th of the speed of light. The forces needed to decelerate from that speed are pretty high.
I did a quick napkin calculation (assuming the device weighs 1 kg): that's close to 3,000 kilonewtons if it has 10 seconds to decelerate. The thrust of an F100 jet engine is around 130 kN.
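Spelled out, the napkin math looks roughly like this (assuming a 1 kg device decelerating from 0.1c over 10 seconds, constant deceleration, ignoring relativity):

```python
# Rough napkin check: 1 kg payload, 0.1c cruise speed, 10 s to shed it all.
c = 299_792_458            # speed of light, m/s
v = 0.1 * c                # cruise speed, ~3.0e7 m/s
m = 1.0                    # device mass, kg
t = 10.0                   # deceleration time, s

a = v / t                  # required deceleration, ~3.0e6 m/s^2
F = m * a                  # required force, N

print(f"deceleration ~ {a:.1e} m/s^2 (about {a / 9.81:,.0f} g)")
print(f"force ~ {F / 1e3:,.0f} kN (an F100's afterburning thrust is ~130 kN)")
```

The 10 seconds is the big assumption; stretch it to ten minutes and you're still at roughly 5,000 g and 50 kN.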
I am not an aeronautics engineer, so I could be totally wrong.
It does not. Social media platforms have had massive societal impact. From language, to social movements, to election results, social media has had effects, positive or negative, that impact the lives of even those who do not use them.
You are making a big assumption here, which is that LLMs are the main "algorithm" the human brain uses. The human brain could easily be a Turing machine that's "running" something that's not an LLM. If that's the case, then the fact that humans can come up with novel concepts does not imply that LLMs can do the same.
No, I am not assuming anything about the structure of the human brain.
The point of talking about Turing completeness is that any universal Turing machine can emulate any other (Turing equivalence). This is fundamental to the theory of computation.
And since we can easily show that both can be rigged up in ways that make the system Turing complete, for humans to be "special", we would need to be more than Turing complete.
There is no evidence to suggest we are, and no evidence to suggest that is even possible.
To make a universal Turing machine out of an LLM only requires a loop and the ability to make a model that will look up a 2x3 table of operations based on the context and write operations back to the context on that basis (the smallest universal Turing machine has 2 states and 3 symbols, or the inverse).
So, yes, you can.
Once you have a (2,3) Turing machine, you can build from that a model that emulates any larger Turing machine - it's just a question of allowing it enough computation and enough layers.
It is not guaranteed that any specific architecture can do it efficiently, but that is entirely beside the point.
LLMs cannot loop (unless you have a counterexample?), and I'm not even sure they can do a lookup in a table with 100% reliability. They also have finite context, while a Turing machine can have infinite state.
If your argument is that a system incorporating a model is not an LLM if there is a loop around it, then reasoning models are not LLMs.
They can do lookup in a table with 100% reliability, yes, because you can make them 100% deterministic if you wish by using numerically stable inferencing code and setting temperature to 0.
Finite context is irrelevant, because the context can be used as an IO channel.
A Turing machine does not have infinite state within the mechanism itself - it requires access to a potentially infinite tape. A Turing machine can be constructed with as little as 1 bit of state (the (2,3) and (3,2) Turing machines are the smallest possible, where one number is the number of states and the other is the number of discrete symbols it can handle).
An IO channel is computationally equivalent to an infinite tape, and unlike an infinite tape, an IO channel is physically possible.
An LLM in itself is inert - it's just the model, so when talking about an LLM doing anything it is doing so as part of an inference engine. An inference system with a loop is trivially Turing complete if you use the context as an IO channel, use numerically stable inferencing code, and set temperature to 0 - in that case, all you need is for the model to encode a 6-entry lookup table to operate the "tape" via the context.
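As an illustration of that argument (not anyone's actual system): here the "model" is reduced to a deterministic lookup table, the loop is the inference loop, and a growable list stands in for the context used as an IO channel. For brevity it encodes the classic 2-state, 2-symbol busy beaver (4 rules) rather than a (2,3) machine, but the wrapper is identical with a 6-entry table.

```python
# Sketch: a trivial "inference loop" where the model is just a deterministic
# lookup table and the context acts as the tape / IO channel.
# (state, symbol) -> (symbol_to_write, head_move, next_state)
MODEL = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(max_steps=1_000):
    context = [0]            # the "context window", grown on demand like an IO channel
    head, state = 0, "A"
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = context[head]
        write, move, state = MODEL[(state, symbol)]   # one "inference" step
        context[head] = write
        head += move
        if head < 0:                                   # extend the tape to the left
            context.insert(0, 0)
            head = 0
        elif head == len(context):                     # extend the tape to the right
            context.append(0)
    return context

print(run())   # -> [1, 1, 1, 1] after six steps
```

The point is only that a loop plus a fixed lookup is all the machinery a Turing machine needs; everything else is a question of how faithfully and efficiently a given model encodes the table.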
Arguably, if there was a browser setting for this, the current GDPR would require you to respect that setting. But that's arguable; it would still need to be adjudicated.
My conclusion would be that, under the current GDPR, if someone had the browser setting on and a company did not respect it and kept private data, the company could be reported for GDPR violations and the issue could then be adjudicated, i.e. the courts would decide whether ignoring that browser setting does in fact constitute a GDPR violation.
Secondary conclusion: it might be more beneficial to just contact the EDPB and say that, since this browser setting exists and nobody is using it, please issue a ruling on whether the browser setting must be followed, and set it to go into effect by a given date so people have time to implement it. If they agreed the browser setting was adequate to represent your GDPR wishes, they might also conclude that it would be onerous to make you go through a GDPR acceptance dialog when it is turned on. However, as this article says they are "scaling back" the GDPR, that would seem to be dead in the water, which is why I said under "the current GDPR".
In the absence of any explicit consent, non-consent is always assumed by the GDPR. The absence of a DNT header definitely doesn't count as consent, so that header is kind of useless, since the GDPR basically requires every request to be handled as if it has a DNT header.
A pre-existing statement of non-consent doesn't stop anyone from asking whether the user might want to consent now. So there is no legal requirement to suppress the cookie dialog when the DNT header is set, which would be the only real purpose of the DNT header; and legislating such a thing would be incompatible with the other laws, because it would basically forbid anyone from asking for any consent, which is kind of stupid.
The GDPR requires consent to be given fully informed and without any repercussions for non-consent. So you can't restrict functionality for non-consenting users, and you also can't say "consent or pay a fee". Non-consenting must be as easy as consenting and must be revocable at any time. So a lot of "cookie dialogs" are simply non-compliant with the GDPR.
What would be useful is a "Track me" header, but the consent must be given with an understanding of the exact details of what data is stored, so this header would need to specify exactly what it consents to. But no one would turn it on, so why would anyone waste the effort to implement such a thing in the browser and web applications?
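To make that default concrete: under this reading, a server behaves the same whether or not DNT is sent, and only tracks when an explicit, recorded consent exists. A minimal sketch, assuming a Flask app and a hypothetical has_recorded_consent() lookup:

```python
# Sketch only: tracking is off by default for every request, DNT or not,
# which is roughly what the GDPR's default-to-non-consent rule amounts to.
from flask import Flask, request

app = Flask(__name__)

def has_recorded_consent(user_id):
    # Hypothetical: look up a prior, informed, revocable consent record.
    return False

@app.route("/")
def index():
    dnt = request.headers.get("DNT")        # "1" if the user enabled Do Not Track
    user_id = request.cookies.get("uid")
    # The DNT header adds nothing: without recorded consent, no tracking.
    tracking_allowed = user_id is not None and has_recorded_consent(user_id)
    return {"tracking": tracking_allowed, "dnt_header": dnt}
```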
"rapid, iterative Waterfall" is a contradiction. Waterfall means only one iteration. If you change the spec after implementation has started, then it's not waterfall. You can't change the requirements, you can't iterate.
Then again, Waterfall was never a real methodology; it was a straw man description of early software development. A hyperbole created only to highlight why we should iterate.
> Then again, Waterfall was never a real methodology; it was a straw man description of early software development. A hyperbole created only to highlight why we should iterate.
If only this were accurate. Royce's chart (the one at the beginning of the paper, which became Waterfall, not what he recommended by the end of the paper) was adopted by the DOD. They're slowly moving away from it, but it's used on many real-world projects and fails about as spectacularly as you'd expect. If projects deliver on time, it's because they blow up their budget and have people work long days and weekends for months or years at a time. If they deliver on budget, it's because they deliver late or cut features. Either way, the pretty plan put into the presentations is not met.
People really do (and did) think that the chart Royce started with was a good idea. They're not competent, but somehow they got into management positions where they could force this stupidity.
That's not what AGI used to mean a year or two ago. That's a corruption of the term, and using that definition of AGI is the mark of a con artist, in my experience.
I believe the classical definition is, "It can do any thinking task a human could do", but tasks with economic value (i.e. jobs) are the subset of that which would justify trillions of dollars of investment.
Any definition of AGI that doesn't include awareness is wrongly co-opting the term, in my opinion. I do think some people are doing that on purpose. That way they can get people who are passionate about actual-AGI to jump on board working with/for unaware-AGI.
> It makes some sense for an AI trained on human persuasion
Why?
> However, results will vary.
Like in voodoo?
I'm sorry to be dismissive, but your comment is entirely dismissing the point it's replying to, without any explanation as to why it's wrong. "You are holding it wrong" is not a cogent (or respectful) response to "we need to understand how our tools work to do engineering".