The PineTime [1] is the cheapest programmable option I'm aware of - it does need Gadgetbridge on Android, and the heart rate sensor didn't quite work for me, but otherwise it might be worth a look?
The case you might be thinking of is the JBIG2 implementation bug [1, 2] in Xerox photocopiers where the pattern-matching would incorrectly treat certain characters as interchangeable, leading to numbers getting rewritten in spreadsheets.
Richard Williams' The Animator's Survival Kit [1] is the standard recommendation, I believe - I've also seen animation courses recommend Preston Blair's Cartoon Animation [2]
Video game opponents are absolutely AI in the "good old fashioned AI" (GOFAI) sense - they have very clearly defined objectives and action spaces, and use algorithms like A* pathfinding [1] and Goal Oriented Action Planning [2] to plan over a space of possible action sequences toward a specific goal. The Game AI Pro [3] articles online give a good picture of the kind of implementation decisions which go into game AI.
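To make the A* side concrete, here's a minimal sketch of grid-based A* with a Manhattan-distance heuristic - the grid encoding and function names are just illustrative, not taken from any particular engine:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; grid cells: 0 = walkable, 1 = wall."""
    def h(p):  # Manhattan distance, an admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), start)]          # entries are (f = g + h, node)
    g_cost = {start: 0}
    came_from = {start: None}
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:                     # walk parent links back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if not (0 <= nx < len(grid) and 0 <= ny < len(grid[0])):
                continue
            if grid[nx][ny] == 1:
                continue
            new_g = g_cost[node] + 1
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(open_heap, (new_g + h(nxt), nxt))
    return None                              # goal unreachable

# e.g. astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```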
The problem is that a lot of the time, brute-force search can beat the snot out of any human player, but the real game design objective isn't "beat the player", it's "give the player enough of a challenge to make beating the AI fun".
In Civ's case, it might be theoretically optimal play for the computer to rush players with warriors before they have a chance to establish defenses, but it is also a great recipe for players angrily requesting refunds after the tenth consecutive game of being crushed by Gandhi on turn 5. A lot of game AI development time goes into tweaking action probabilities or giving the player advantages to counteract the AI's advantages - the reluctance to build military units you saw could have been the result of such a tweak.
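As a toy illustration of that kind of tweak (not how Civ actually implements it - the action names and weights here are invented), difficulty tuning often boils down to nudging the weights of a weighted random choice:

```python
import random

# Hypothetical per-turn action weights for an AI player; names and numbers are made up.
BASE_WEIGHTS = {"expand": 0.35, "build_military": 0.30, "research": 0.25, "attack": 0.10}

def pick_action(difficulty="normal"):
    weights = dict(BASE_WEIGHTS)
    if difficulty == "easy":
        # Tweak: make early aggression much less likely so new players don't get rushed.
        weights["attack"] *= 0.2
        weights["build_military"] *= 0.5
    actions = list(weights)
    return random.choices(actions, weights=[weights[a] for a in actions], k=1)[0]
```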
As for why LLMs typically aren't applied as game opponents:
* They are quite compute-intensive, which is tricky when players expect at most 16 ms of latency per frame (for 60 FPS) and get ornery if they have to wait more than a few seconds, but also do not like having always-online requirements imposed by cloud compute (or subscription costs to fund running LLMs for every player)
* The bridge between tokens and actions also means it's hard to tweak probabilities directly - a classical AI can be told to take a certain path or action approx. 20% of the time with a single weighted choice, while getting the same behaviour out of an agent-LLM means actively selecting and weighting token probabilities during decoding, which is a bit of a hassle, to put it mildly (rough sketch after this list)
* The issues with long-term coherence in LLMs, famously demonstrated by Vending-Bench [4], make reliability and debugging harder
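Here is a rough sketch of that probability-tweaking point, assuming the OpenAI Chat Completions API and tiktoken - the model name, action names, and bias values are placeholders. The classical version is one weighted choice; the LLM version biases the logits of the tokens spelling each action name, which only shifts the distribution rather than pinning an exact 20%:

```python
import random
import tiktoken
from openai import OpenAI

ACTIONS = ["attack", "defend", "expand"]

# Classical AI: "take the aggressive option ~20% of the time" is one weighted choice.
def classic_pick() -> str:
    return random.choices(ACTIONS, weights=[0.2, 0.4, 0.4], k=1)[0]

# Agent-LLM: the closest knob is biasing the logits of the tokens that spell each
# action name - that nudges the sampler, it does not set an exact probability.
def llm_pick(client: OpenAI, model: str = "gpt-4o-mini") -> str:
    enc = tiktoken.encoding_for_model(model)
    bias = {}
    for action, nudge in zip(ACTIONS, (-2, 1, 1)):   # discourage "attack" a bit
        for token_id in enc.encode(action):
            bias[str(token_id)] = nudge
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Pick exactly one of {ACTIONS} for this turn."}],
        logit_bias=bias,
        max_tokens=3,
    )
    return resp.choices[0].message.content.strip()
```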
> Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.
As a matter of scope, I could understand keeping the social understanding that "AI makes errors" separate from technical evaluations of models, but the thing that really horrified me is that the author apparently does not think past experience should count as a concern in other fields either:
> AI both frustrates the producer/consumer dichotomy and intermediates access to information processing, thus reducing professional power. In response, through shaming, professionals direct their ire at those they see as pretenders. Doctors have always derided home remedies, scientists have derided lay theories, sacerdotal colleges have derided folk mythologies and cosmogonies as heresy – the ability of individuals to “produce” their own healing, their own knowledge, their own salvation. [...]
If you don't accept that scientists' long experience with crank "lay theories" is a reason for initial skepticism, can you really explain this as anything other than anti-intellectualism?
There are FHE schemes which effectively allow putting together arbitrary logical circuits, so you can make larger algorithms work under FHE by compiling them into such circuits -- Jeremy Kun's 2024 overview [1] has a good summary
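As a conceptual sketch of the circuits framing (the `enc_nand` below is a plain-boolean stand-in, not a real FHE primitive): schemes like TFHE give you universal gates directly on ciphertexts, and once you have one universal gate, any boolean circuit - and therefore any larger algorithm expressed as one - can be composed from it:

```python
# Stand-in for a homomorphic NAND: a real scheme (e.g. TFHE) evaluates this on
# ciphertexts without decrypting; plain bools are used here just to show the wiring.
def enc_nand(a: bool, b: bool) -> bool:
    return not (a and b)

# NAND is universal, so every other gate can be built by composing it.
def enc_not(a):     return enc_nand(a, a)
def enc_and(a, b):  return enc_not(enc_nand(a, b))
def enc_or(a, b):   return enc_nand(enc_not(a), enc_not(b))
def enc_xor(a, b):
    n = enc_nand(a, b)
    return enc_nand(enc_nand(a, n), enc_nand(b, n))

# A one-bit full adder from those gates - repeated per bit, this gives encrypted
# integer addition, and the same composition scales up to larger algorithms.
def enc_full_adder(a, b, carry_in):
    s1 = enc_xor(a, b)
    total = enc_xor(s1, carry_in)
    carry_out = enc_or(enc_and(a, b), enc_and(s1, carry_in))
    return total, carry_out
```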
Good encryption schemes are designed so that ciphertexts are effectively indistinguishable from random data -- you should not be able to see any pattern in the ciphertext without knowledge of the key, even if you know the algorithm.
If your encryption scheme satisfies this, there are no patterns for the LLM to learn: if you only know the ciphertext but not the key, every continuation of the plaintext should be equally likely, so trying to learn the encryption scheme from examples is effectively trying to predict the next lottery numbers.
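A quick way to see this concretely (a sketch assuming the `cryptography` package; the sample text is arbitrary): the byte-level statistics of AES-CTR output come out essentially flat, so there is no next-byte signal for a model to pick up without the key.

```python
import os, math
from collections import Counter
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 means 'looks like uniform random')."""
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

plaintext = ("the quick brown fox jumps over the lazy dog " * 500).encode()
key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

print(f"plaintext entropy:  {byte_entropy(plaintext):.2f} bits/byte")   # low, very patterned
print(f"ciphertext entropy: {byte_entropy(ciphertext):.2f} bits/byte")  # ~8.0, near-uniform
```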
This is why FHE for ML schemes [1] don't try to make ML models work directly on encrypted data, but rather try to package ML models so they can run inside an FHE context.
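To give a feel for what "packaging the model" means in the CryptoNets style, here's a plain-Python sketch of the structure - the weights are made up and nothing is actually encrypted. The network is restricted to additions, multiplications, and a polynomial activation like squaring, so the whole forward pass is an arithmetic circuit an FHE scheme can evaluate on ciphertexts:

```python
# Toy two-layer network using only + and *, the operations FHE schemes support.
# A real pipeline (e.g. CryptoNets) runs the same arithmetic on ciphertexts,
# typically after quantizing weights and inputs; the weights here are invented.
W1 = [[0.5, -0.2], [0.1, 0.4]]
B1 = [0.1, -0.3]
W2 = [0.7, -0.5]
B2 = 0.2

def square_activation(x):
    return x * x            # polynomial activation, FHE-friendly (no ReLU/sigmoid)

def forward(x):             # x is a length-2 feature vector
    hidden = [square_activation(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, B1)]
    return sum(w * h for w, h in zip(W2, hidden)) + B2

print(forward([1.0, 2.0]))  # the same arithmetic an FHE runtime would do on encrypted x
```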
I didn't mean to suggest otherwise! That's why I also linked the CryptoNets paper - to show that you're transforming the inference to happen inside an FHE context, not trying to learn from encrypted data.
Yes, you can do CryptoNets. What I'm saying is that you don't have to do CryptoNets: you can simply use FHE to train the network in a fully encrypted manner - both the network and the data are FHE-encrypted, so the training itself is an FHE application. It would be insanely slow, and I doubt it can be done today even for "small" LLMs due to the high overhead of FHE.
> This is why FHE for ML schemes [1] don't try to make ML models work directly on encrypted data, but rather try to package ML models so they can run inside an FHE context.
I don't think @strangecasts was trying to say you couldn't. I believe their point was that you can't have a model learn to coherently respond to encrypted inputs with just traditional learning mechanisms (so without FHE). Doing so would require an implicit breaking of the encryption scheme by the model because it would need a semantic understanding of the plaintext to provide a cogent, correctly encrypted response.
[1] https://pine64.org/devices/pinetime/