Hacker News | nalzok's comments

  $ dig www.nyu.edu +noall +answer -t A

  ; <<>> DiG 9.10.6 <<>> www.nyu.edu +noall +answer -t A
  ;; global options: +cmd
  www.nyu.edu.            1       IN      CNAME   dsvdfvx64.github.io.
  dsvdfvx64.github.io.    2995    IN      A       185.199.109.153
  dsvdfvx64.github.io.    2995    IN      A       185.199.110.153
  dsvdfvx64.github.io.    2995    IN      A       185.199.111.153
  dsvdfvx64.github.io.    2995    IN      A       185.199.108.153
Looks like someone added a CNAME record pointing https://www.nyu.edu to https://dsvdfvx64.github.io, but https://github.com/dsvdfvx64 is 404. I wonder how one can use GitHub Pages while the corresponding username doesn't exist.
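For illustration, a small pure-Python sketch that parses the `dig +noall +answer` output quoted above and confirms the A records are GitHub Pages' anycast addresses (the four 185.199.x.153 IPs shown in the answer). The embedded answer text is abridged from the output above; the parsing helper is hypothetical.

```python
# Check whether dig answer lines point at GitHub Pages' anycast IPs.
# The four addresses below are the ones visible in the dig output above.
GITHUB_PAGES_IPS = {
    "185.199.108.153", "185.199.109.153",
    "185.199.110.153", "185.199.111.153",
}

def parse_answers(dig_output: str):
    """Yield (name, rtype, value) from `dig +noall +answer` lines."""
    for line in dig_output.strip().splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[2] == "IN":
            yield fields[0], fields[3], fields[4]

answers = """\
www.nyu.edu.            1       IN      CNAME   dsvdfvx64.github.io.
dsvdfvx64.github.io.    2995    IN      A       185.199.109.153
dsvdfvx64.github.io.    2995    IN      A       185.199.110.153
"""

records = list(parse_answers(answers))
cname = next(v for _, t, v in records if t == "CNAME")
a_records = [v for _, t, v in records if t == "A"]

print(cname)                                            # dsvdfvx64.github.io.
print(all(ip in GITHUB_PAGES_IPS for ip in a_records))  # True
```

So the domain does resolve to GitHub Pages infrastructure, even though the username behind the CNAME target no longer exists.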


It wasn't 404 before. It has probably been reported to GitHub and banned by them. The user only had that one repository though.



I wonder what would happen if we quantized each dimension to 0.5 (or even fewer) bits instead of 1, i.e., taking 2 (or more) scalar components at a time and mapping them to 0 or 1 based on some carefully designed rules.
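The idea above amounts to vector quantization: a toy sketch of "0.5 bits per dimension" that snaps each pair of scalars to the nearer of two 2-D codewords, so two components cost one bit. The codebook here is made up for illustration; a real scheme would learn it (e.g. with k-means).

```python
# Quantize PAIRS of scalars to a single bit: 0.5 bits per dimension.
# Hypothetical fixed codebook; a real scheme would fit it to the data.
CODEBOOK = [(-1.0, -1.0), (1.0, 1.0)]  # 2 codewords -> 1 bit per pair

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def encode(vec):
    """Pack a vector of even length into one bit per pair of components."""
    pairs = [(vec[i], vec[i + 1]) for i in range(0, len(vec), 2)]
    return [min((0, 1), key=lambda b: sq_dist(p, CODEBOOK[b])) for p in pairs]

def decode(bits):
    out = []
    for b in bits:
        out.extend(CODEBOOK[b])
    return out

v = [0.9, 1.2, -0.3, -0.7]    # 4 dims -> 2 bits total
bits = encode(v)
print(bits)                   # [1, 0]
print(decode(bits))           # [1.0, 1.0, -1.0, -1.0]
```

Larger codebooks over longer groups (e.g. 16 codewords over 8 components) trade table size for reconstruction quality at the same bit rate.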


I agree that Nix is the right idea, but I'm hesitant to adopt it right now because (i) rough edges like poor documentation and inconsistent interfaces make it feel immature, (ii) things are moving very fast with experimental stuff like flakes, which means it will take a while before Nix starts to stabilize, and (iii) recent community forks like Lix might create a fragmented ecosystem in the long run.

What are your thoughts on these concerns?


I think those are fair perceptions and concerns.

Regarding experimental features like flakes: the reality is they're incredibly stable. I've written about this before: https://determinate.systems/posts/experimental-does-not-mean.... They haven't realistically changed in years, because they work so well. The experimental label is practically FUD at this point.

If the Nix team were to change flakes in a breaking way, it would be stunning neglect for the vast, vast percentage of the ecosystem that has already adopted them. Our data shows that of all the (OSS) repositories created every day, almost 90% of them start with a flake.nix. Of all of those projects, less than 20% use the legacy file formats, and most of those are using the flake-compat library.

On documentation and interfaces, I agree, and we and the greater community are working hard on that problem. It'll take time, but it is decidedly better than it was a few short years ago.

And on community fragmentation, I just don't see it becoming a problem. The core Nix ecosystem is so large and diverse, I don't see meaningful fragmentation coming out of this.


> Our data shows that of all the (OSS) repositories created every day, almost 90% of them start with a flake.nix

90% of OSS repositories that use Nix I suppose?


Of course :).


As a response to (ii), I assure you that things are most certainly not moving very fast with experimental stuff like flakes. Flakes were first released as "experimental" almost 3 years ago and have been stuck in feature purgatory ever since.


To be blunt, this is driven by the rejection of flakes by a significant group of the contributor base, despite it being much more adopted by the user base at large.

Even as someone who does think Flakes are better than the prior solutions, I'm increasingly of the opinion that Flakes would be better moved to a layer outside the core Nix project - advancing them within core Nix at this stage seems pretty impossible with many within the project opposed to their existence. I think if Flakes were an alternative project at the same level as something like Niv, a lot of the holy warring would get out of the way.

1. Those who wish to improve them could do so without the discussion being deadlocked by "hey, we haven't yet agreed these should be stable"

2. Those who don't want to paint them as the path forward for fear of precluding a better option, now don't have to.


> Even as someone who does think Flakes are better than the prior solutions, I'm increasingly of the opinion that Flakes would be better moved to a layer outside the core Nix project - advancing them within core Nix at this stage seems pretty impossible with many within the project opposed to their existence. I think if Flakes were an alternative project at the same level as something like Niv, a lot of the holy warring would get out of the way.

I posted some information and metrics about that on Discourse:

https://discourse.nixos.org/t/announcing-determinate-nix/547...


This sounds like the plight of TC39 Observables vs RxJS.


You may want to check out [rcmd](https://lowtechguys.com/rcmd/).


I’ve come across this! Do you use it?


> by kqr, published 2024-11-19

It's from the future! ;)


I wonder if this has anything to do with Apple Intelligence. Maybe they can have an LLM that operates on encrypted input text without decrypting it, so that users can send sensitive information to an Apple-controlled central server without worrying about privacy issues?


> I began my research by looking for some kind of hard plastic case. While these are common for standard SD cards, they are quite rare for microSD.

What about putting the microSD into an adapter [1] first? I imagine you can find a much better deal when ordering in bulk.

[1]: https://www.amazon.com/SanDisk-microSD-Memory-Adapter-MICROS...


Or just get a proper microsd case. 17 cents a piece here:

https://abra-electronics.com/robotics-embedded-electronics/r...


Every microSD card I've bought came with an adapter. I have a box full of those.


Bought a SanDisk microSD card recently. Came in a nice little transparent plastic case... but with no adapter!


I bought ten small microSD cards, 32 GB, for 3 EUR each. Very likely they're counterfeits. But they did come in a small plastic case. You could reuse these for mail. They also all come with an adapter. I have tons of these adapters, usually throw them away.

As for the article, it mentions:

> It is normally recommended to use bubble wrap to protect SD cards in transit, but I have never seen bubble wrap inside a normal envelope which made me suspect that this would elevate the rate of delivery failure.

Bubble wrap envelopes exist, obviously. The envelopes are a bit larger but would work. When I order small items from Ali, this is often the packaging they use.


I don't understand why they sell them as counterfeits. I just want 3 EUR cards, I'm not fussed about the size (they're for things that need a few MB, usually). However, nobody will sell me cheap cards, unless they're counterfeits that claim to be 32 GB but are actually 16 GB instead.

I would have been happy with 8 GB!


You can get “genuine no-name” cards at that sort of price, though I can't testify to the long-term reliability. Some that I have in use are https://www.amazon.co.uk/gp/product/B09YGV2JGP/ which are £2.50 each if you get the 10x 16GB option. I got a 5-pack of a larger version for a Thing a short while back, and the ones I've used thus far fully checked out to support the claimed storage (I don't trust cheap SD cards without verifying, because of counterfeit and quality issues, so ran a full test on each) and have so far maintained reasonable performance.

That is why there aren't genuine smaller cards: there just isn't a large market for them, because parts availability means smaller-capacity cards wouldn't work out any cheaper to source, so any noticeable drop in price would come from the seller reducing their markup. For the same price, people will buy the larger ones just in case they need more space later, because why not? Even if there is a small price difference, if 8GB or less is pennies cheaper than 16GB or more, people will generally go for the larger option.

So to “why sell counterfeits?”: the scammy sellers can't sell them honestly in enough quantity to be worth bothering, so they lie.
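The "full test" mentioned above is what tools like f3 or H2testw do: write seeded pseudorandom blocks across the claimed capacity, then read them back and compare. A counterfeit "32 GB" card that wraps or drops data past its real size fails the read-back. A toy sketch of the idea, with a temp file standing in for the card (pointing `path` at a file on the mounted card would make it a real, and destructive, test):

```python
import os
import random
import tempfile

BLOCK = 4096

def block_data(seed: int, index: int) -> bytes:
    """Deterministic pseudorandom content for block `index`."""
    rng = random.Random(seed * 1_000_003 + index)
    return rng.randbytes(BLOCK)

def fill_and_verify(path: str, blocks: int, seed: int = 0) -> bool:
    """Write `blocks` seeded blocks, then read back and compare each one."""
    with open(path, "wb") as f:
        for i in range(blocks):
            f.write(block_data(seed, i))
    with open(path, "rb") as f:
        return all(f.read(BLOCK) == block_data(seed, i) for i in range(blocks))

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "card.bin")
    print(fill_and_verify(path, blocks=64))  # True for honest storage
```

On a fake card you would set `blocks` to cover the full claimed capacity; the comparison fails once writes land past the real flash.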


This is very helpful, thanks!


If you only need a few MB anyway I can recommend getting industrial grade (aMLC / SLC cards) from Digikey (Mouser probably has them as well).


That's a good idea, thanks!


I've definitely bought some that came without adapters.


He's not saying that literally every one comes with an adapter, just the ones he bought.


How do you like Simutrans? https://www.simutrans.com


I can't stand it because of the insanely low built-in fixed framerate. It's something weird like 20fps. UI responsiveness is also terrible for the same reason, because everything is apparently spaghetti-coded together and the game logic is tied to the framerate.
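The standard fix for logic tied to the framerate is a fixed-timestep loop: simulation advances in constant-size steps while rendering runs as fast as the display allows. A minimal sketch (integer milliseconds, simulated frame times, nothing to do with Simutrans's actual code):

```python
DT_MS = 20  # logic step: 20 ms, i.e. a fixed 50 Hz update rate

def simulate(frame_ms):
    """frame_ms: wall-clock duration of each successive render frame.
    Returns how many fixed logic ticks ran, independent of frame rate."""
    accumulator = 0
    ticks = 0
    for frame in frame_ms:
        accumulator += frame
        while accumulator >= DT_MS:  # run logic in fixed steps
            ticks += 1               # update_game(DT_MS) would go here
            accumulator -= DT_MS
        # render() here, optionally interpolating by accumulator / DT_MS
    return ticks

# One second of wall time at a janky 20 fps still yields 50 logic ticks:
print(simulate([50] * 20))  # 50
```

With this structure the renderer can drop to 20fps (or jump to 144fps) without the game speed or UI responsiveness changing.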


Congratulations on the release! How can we download the model and run inference locally?


You can download the model checkpoints from kaggle https://www.kaggle.com/models/google/gemma and huggingface https://huggingface.co/blog/gemma

Besides the python implementations, we also implemented a standalone C++ implementation that runs locally with just CPU simd https://github.com/google/gemma.cpp


Are there any cool highlights you can give us about gemma.cpp? Does it have any technical advantages over llama.cpp? It looks like it introduces its own quantization format, is there a speed or accuracy gain over llama.cpp's 8-bit quantization?


Hi, I devised the 4.5-bit (NUQ) and 8-bit (SFP) compression schemes. These are prototypes that enabled reasonable inference speed without any fine-tuning, and compression/quantization running in a matter of seconds on a CPU.

We do not yet have full evals because the harness was added very recently, but observe that the non-uniform '4-bit' (plus tables, so 4.5) has twice the SNR of size-matched int4 with per-block scales.

One advantage that gemma.cpp offers is that the code is quite compact due to C++ and the single portable SIMD implementation (as opposed to SSE4, AVX2, NEON). We were able to integrate the new quantization quite easily, and further improvements are planned.
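For readers unfamiliar with the baseline being compared against: a toy illustration of int4 quantization with a per-block scale, plus the SNR measurement used above. This is not gemma.cpp's NUQ/SFP code, just the standard scheme it is size-matched against, with made-up stand-in weights.

```python
import math

def quantize_block(block):
    """Symmetric int4: map a block to integers in [-7, 7] with one scale."""
    scale = max(abs(x) for x in block) / 7.0 or 1.0  # guard all-zero block
    return scale, [round(x / scale) for x in block]

def dequantize_block(scale, q):
    return [scale * v for v in q]

def snr_db(original, reconstructed):
    """Signal-to-noise ratio of the reconstruction, in decibels."""
    signal = sum(x * x for x in original)
    noise = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    return math.inf if noise == 0 else 10 * math.log10(signal / noise)

weights = [0.013 * i - 0.4 for i in range(64)]  # hypothetical stand-in weights
scale, q = quantize_block(weights)
recon = dequantize_block(scale, q)

print(round(snr_db(weights, recon), 1))  # prints the int4-with-scale SNR in dB
```

A non-uniform scheme replaces the evenly spaced levels with a small per-block table of learned centroids (hence "4-bit plus tables, so 4.5"), spending its levels where the weight distribution is dense.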


Thank you! You can get started downloading the model and running inference on Kaggle: https://www.kaggle.com/models/google/gemma ; for a full list of ways to interact with the model, you can check out https://ai.google.dev/gemma.


FYI the ; broke the link, but I found it easily anyway.


Good catch - just corrected. Thanks!


You can install a free (as in beer) Wolfram engine on any computer, e.g. a beefy virtual machine: https://www.wolfram.com/engine/

