$ dig www.nyu.edu +noall +answer -t A
; <<>> DiG 9.10.6 <<>> www.nyu.edu +noall +answer -t A
;; global options: +cmd
www.nyu.edu. 1 IN CNAME dsvdfvx64.github.io.
dsvdfvx64.github.io. 2995 IN A 185.199.109.153
dsvdfvx64.github.io. 2995 IN A 185.199.110.153
dsvdfvx64.github.io. 2995 IN A 185.199.111.153
dsvdfvx64.github.io. 2995 IN A 185.199.108.153
I wonder what would happen if we quantized each dimension to 0.5 (or even fewer) bits instead of 1, i.e., taking 2 (or more) scalar components at a time and mapping them to 0 or 1 based on some carefully designed rules.
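Just to make the idea concrete, here's a toy sketch (my own illustrative rule, not from any paper): treat each pair of scalars as a 2-D vector and snap it to the nearer of two hand-picked centroids, so each pair costs one bit, i.e. 0.5 bits per dimension. A real scheme would learn the codebook rather than hardcode it.

```python
# Toy "0.5 bits per dimension": quantize PAIRS of scalars to a single bit
# using a 2-entry codebook of 2-D centroids. The codebook here is
# hand-picked purely for illustration.

def quantize_pairs(values, codebook):
    """Map each consecutive pair to the index (0 or 1) of the nearest centroid."""
    bits = []
    for i in range(0, len(values), 2):
        pair = (values[i], values[i + 1])
        # pick the codebook entry with the smallest squared distance
        d = [(pair[0] - c[0]) ** 2 + (pair[1] - c[1]) ** 2 for c in codebook]
        bits.append(0 if d[0] <= d[1] else 1)
    return bits

def dequantize_pairs(bits, codebook):
    out = []
    for b in bits:
        out.extend(codebook[b])
    return out

# One centroid for "both roughly positive", one for "both roughly negative".
codebook = [(0.7, 0.7), (-0.7, -0.7)]

x = [0.9, 0.5, -0.3, -0.8, 0.2, 0.4, -1.0, -0.1]
bits = quantize_pairs(x, codebook)        # 4 bits for 8 scalars
xhat = dequantize_pairs(bits, codebook)
```

With more codebook entries per group (say 2 bits for 4 scalars) the same machinery gives other fractional rates; the "carefully designed rules" are really just the choice of codebook.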
I agree that Nix is the right idea, but I'm hesitant to adopt it right now because (i) rough edges like poor documentation and inconsistent interfaces make it feel immature, (ii) things are moving very fast with experimental stuff like flakes, which means it will take a while before Nix starts to stabilize, and (iii) recent community forks like Lix might create a fragmented ecosystem in the long run.
Regarding experimental features like flakes: the reality is they're incredibly stable. I've written about this before: https://determinate.systems/posts/experimental-does-not-mean.... They haven't realistically changed in years, because they work so well. The experimental label is practically FUD at this point.
If the Nix team were to change flakes in a breaking way, it would be stunning neglect of the vast majority of the ecosystem that has already adopted them. Our data shows that of all the (OSS) repositories created every day, almost 90% of them start with a flake.nix. Fewer than 20% of those projects use the legacy file formats, and most of those are using the flake-compat library.
On documentation and interfaces, I agree, and we and the greater community are working hard on that problem. It'll take time, but it is decidedly better than it was a few short years ago.
And on community fragmentation, I just don't see it becoming a problem. The core Nix ecosystem is so large and diverse that I don't see meaningful fragmentation coming out of this.
As a response to (ii), I assure you that things are most certainly not moving very fast with experimental stuff like flakes. Flakes were first released as "experimental" almost 3 years ago and have been stuck in feature purgatory ever since.
To be blunt, this is driven by the rejection of flakes by a significant group of the contributor base, despite their much wider adoption among the user base at large.
Even as someone who does think Flakes are better than the prior solutions, I'm increasingly of the opinion that Flakes would be better moved to a layer outside the core Nix project - advancing them within core Nix at this stage seems pretty impossible with many within the project opposed to their existence. I think if Flakes were an alternative project at the same level as something like Niv, a lot of the holy warring would get out of the way.
1. Those who wish to improve them could do so without the discussion being deadlocked by "hey, we haven't yet agreed these should be stable"
2. Those who don't want to paint them as the path forward, for fear of precluding a better option, now don't have to.
> Even as someone who does think Flakes are better than the prior solutions, I'm increasingly of the opinion that Flakes would be better moved to a layer outside the core Nix project - advancing them within core Nix at this stage seems pretty impossible with many within the project opposed to their existence. I think if Flakes were an alternative project at the same level as something like Niv, a lot of the holy warring would get out of the way.
I posted some information and metrics about that on Discourse:
I wonder if this has anything to do with Apple Intelligence. Maybe they can have an LLM that operates on encrypted input text without decrypting it, so that users can send sensitive information to an Apple-controlled central server without worrying about privacy issues?
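Pure speculation on my part about what Apple is doing, but for intuition, here's a toy sketch of the underlying idea of computing on data without decrypting it. Real systems would need fully homomorphic encryption (schemes like CKKS or TFHE); this demo only shows *additive* homomorphism using one-time additive masks, which is nowhere near enough for LLM inference.

```python
# Toy additively-homomorphic scheme: encrypt by adding a fresh random mask
# mod 2^32 (one-time use, like a one-time pad over integers). The server
# can add two ciphertexts and the result decrypts to the sum of the
# plaintexts, without the server ever seeing either plaintext.
import random

MOD = 2 ** 32
random.seed(42)  # deterministic masks, for the demo only

def keygen(n):
    # one fresh random mask per message element (never reuse!)
    return [random.randrange(MOD) for _ in range(n)]

def encrypt(msg, masks):
    return [(m + r) % MOD for m, r in zip(msg, masks)]

def add_encrypted(c1, c2):
    # runs server-side: adds ciphertexts, learns nothing about plaintexts
    return [(a + b) % MOD for a, b in zip(c1, c2)]

def decrypt(ct, masks):
    return [(c - r) % MOD for c, r in zip(ct, masks)]

r1, r2 = keygen(2), keygen(2)
c1 = encrypt([3, 5], r1)
c2 = encrypt([10, 20], r2)
c_sum = add_encrypted(c1, c2)                       # done by the server
combined = [(a + b) % MOD for a, b in zip(r1, r2)]  # client knows its masks
plain_sum = decrypt(c_sum, combined)                # [13, 25]
```

An LLM needs multiplications and non-linearities on encrypted values too, which is exactly what makes full FHE so much more expensive than this addition-only toy.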
I bought ten small microSD cards, 32 GB, for 3 EUR each. Very likely they're counterfeits, but they did come in a small plastic case, which you could reuse for mailing cards. They also all come with an adapter; I have tons of these adapters and usually throw them away.
As for the article, it mentions:
> It is normally recommended to use bubble wrap to protect SD cards in transit, but I have never seen bubble wrap inside a normal envelope which made me suspect that this would elevate the rate of delivery failure.
Bubble wrap envelopes exist, obviously. They are a bit larger but would work. When I order small items from Ali, this is often the packaging they use.
I don't understand why they sell them as counterfeits. I just want 3 EUR cards; I'm not fussed about the size (they're for things that need a few MB, usually). But nobody will sell me cheap cards unless they're counterfeits that claim to be 32 GB while actually being 16 GB.
You can get “genuine no-name” cards at that sort of price, though I can't vouch for the long-term reliability. Some that I have in use are https://www.amazon.co.uk/gp/product/B09YGV2JGP/ which are £2.50 each if you get the 10x 16GB option. I got a 5-pack of a larger version for a Thing a short while back, and the ones I've used so far fully checked out as supporting the claimed storage (I don't trust cheap SD cards without verifying, because of the counterfeit and quality issues, so I ran a full test on each) and have so far maintained reasonable performance.
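The "full test" is in the spirit of tools like f3 or h2testw; a minimal sketch of the idea (my own, writing to an ordinary file standing in for the card): fill the card with deterministic pseudorandom blocks, then read everything back and verify. A counterfeit that reports 32 GB but holds 16 GB fails verification past its real capacity, because later writes wrap around and clobber earlier data.

```python
# Write-then-verify capacity check, f3/h2testw style. Each block's content
# is derived deterministically from its index, so verification needs no
# stored copy of the data.
import hashlib
import os
import tempfile

BLOCK = 64 * 1024  # small for the demo; use ~1 MiB blocks on a real card

def pattern(i):
    # deterministic pseudorandom block derived from its index via a hash chain
    seed = i.to_bytes(8, "big")
    chunks, total = [], 0
    while total < BLOCK:
        seed = hashlib.sha256(seed).digest()
        chunks.append(seed)
        total += len(seed)
    return b"".join(chunks)[:BLOCK]

def fill_and_verify(path, n_blocks):
    """Write n_blocks of patterned data to `path`, then verify each block."""
    with open(path, "wb") as f:
        for i in range(n_blocks):
            f.write(pattern(i))
    good = 0
    with open(path, "rb") as f:
        for i in range(n_blocks):
            if f.read(BLOCK) == pattern(i):
                good += 1
    return good, n_blocks

# Demo on a temp file; on a real card, point `path` at a file on the
# mounted card and keep writing blocks until the filesystem is full.
with tempfile.TemporaryDirectory() as d:
    good, total = fill_and_verify(os.path.join(d, "card.bin"), 4)
```

For flash you'd also sync and drop caches (or remount) between the write and read passes, so you're reading the card rather than the OS page cache.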
That is why there aren't genuine smaller cards: there just isn't a large market for them. Parts availability means smaller-capacity cards wouldn't work out any cheaper to source, so any noticeable drop in price would come from the seller reducing their markup. At the same price, people will buy the larger ones just in case they need more space later, because why not? Even if there is a small price difference, if 8 GB or less is pennies cheaper than 16 GB or more, people will generally go for the larger option.
So to “why sell counterfeits?”: the scammy sellers can't sell them honestly in enough quantity to be worth bothering, so they lie.
I can't stand it because of the insanely low built-in fixed framerate. It's something weird like 20 fps. UI responsiveness is terrible for the same reason: everything is apparently spaghetti-coded together and the game logic is tied to the framerate.
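For what it's worth, the standard fix for logic tied to framerate is a fixed-timestep loop with an accumulator, so simulation rate is independent of render rate. A minimal sketch (hypothetical numbers, nothing from the game itself):

```python
# Decoupling simulation rate from render rate with a fixed-timestep
# accumulator: the render loop runs as fast (or slow) as it can, while
# game logic always advances in exact DT-sized steps.

DT = 1.0 / 60.0  # simulation step: 60 Hz regardless of render speed

def run(frame_times):
    """frame_times: wall-clock duration of each rendered frame."""
    accumulator = 0.0
    updates = 0
    for frame in frame_times:
        accumulator += frame
        while accumulator >= DT:
            updates += 1        # advance game logic by exactly DT
            accumulator -= DT
        # render here, interpolating between the last two logic states
    return updates

# Whether we render one simulated second as 20 slow frames or 60 fast
# ones, the logic still ticks the same number of times.
slow = run([1 / 20] * 20)   # 1 second rendered at 20 fps
fast = run([1 / 60] * 60)   # 1 second rendered at 60 fps
```

With logic hard-wired to the frame like in this game, dropping frames literally slows the game world down, and the UI can only sample input 20 times a second.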
Besides the Python implementations, we also implemented a standalone C++ implementation that runs locally using just CPU SIMD: https://github.com/google/gemma.cpp
Are there any cool highlights you can give us about gemma.cpp? Does it have any technical advantages over llama.cpp? It looks like it introduces its own quantization format, is there a speed or accuracy gain over llama.cpp's 8-bit quantization?
Hi, I devised the 4.5-bit (NUQ) and 8-bit (SFP) compression schemes. These are prototypes that enable reasonable inference speed without any fine-tuning, with compression/quantization running in a matter of seconds on a CPU.
We do not yet have full evals because the harness was added very recently, but we observe that the non-uniform '4-bit' scheme (plus tables, so 4.5 bits effectively) has twice the SNR of size-matched int4 with per-block scales.
One advantage gemma.cpp offers is that the code is quite compact, thanks to C++ and a single portable SIMD implementation (as opposed to separate SSE4, AVX2, and NEON code paths). We were able to integrate the new quantization quite easily, and further improvements are planned.
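Not the gemma.cpp code, but a back-of-envelope sketch of why a table-based non-uniform quantizer can beat uniform int4-with-absmax-scale in SNR on bell-shaped weight distributions: the table spends its 16 levels where the probability mass is, instead of stretching a uniform grid to cover rare outliers. The quantile-based table below is a stand-in, much cruder than NUQ.

```python
# Compare SNR of (a) symmetric uniform 4-bit quantization with a per-block
# absmax scale vs (b) a 16-entry non-uniform table placed at equal-
# probability quantiles of the data. Synthetic Gaussian "weights" only.
import math
import random

random.seed(0)

def snr_db(x, xq):
    sig = sum(v * v for v in x)
    err = sum((v - q) ** 2 for v, q in zip(x, xq))
    return 10 * math.log10(sig / err)

def quant_uniform(x, bits=4):
    # symmetric uniform grid scaled so the block's absmax maps to +/-7
    scale = max(abs(v) for v in x) / (2 ** (bits - 1) - 1)
    return [round(v / scale) * scale for v in x]

def quant_table(x, n_levels=16):
    # non-uniform: one level per equal-probability slice of the data
    xs = sorted(x)
    levels = [xs[int((i + 0.5) * len(xs) / n_levels)] for i in range(n_levels)]
    return [min(levels, key=lambda c: abs(v - c)) for v in x]

weights = [random.gauss(0.0, 1.0) for _ in range(10_000)]
snr_u = snr_db(weights, quant_uniform(weights))
snr_t = snr_db(weights, quant_table(weights))
```

On this synthetic data the table quantizer comes out a few dB ahead; a codebook fitted with Lloyd-Max (or whatever NUQ actually does, which I'm not claiming to reproduce) would widen the gap further.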