mastax's comments | Hacker News

My prediction is one of the Chinese FPGA makers will embrace open source, hire a handful of talented open source contributors, and within a handful of years end up with tooling that is way easier to use for hobbyists, students, and small businesses. They use this as an inroad and slowly move upmarket. Basically the Espressif strategy.

Xilinx, Altera, and Lattice are culturally incapable of doing this. For Lattice especially it seems like a no-brainer, but they still don’t understand the appeal of open source.


Define “upmarket”?

For me, that means higher capacity and advanced blocks such as SERDES, high-speed DRAM interfaces, etc.

The bottleneck in using these kinds of FPGAs has rarely been the tools; it’s the amount of time it takes to write and verify correct RTL. That’s not an FPGA-specific problem, it applies to ASICs just the same.

I don’t see how Gowin and other alternative brands would be better placed to solve that problem.


Gowin's and Efinix's tools are extremely spartan compared to Vivado or Quartus: they're pretty much straight HDL-to-bitstream compilers. There's also a FOSS implementation flow available for the Gowin chips (but I haven't used it).

HDL isn't getting any easier, though, and that's where most of the complexity is.


> My prediction is one of the Chinese FPGA makers will embrace open source

Sadly, this doesn't seem to be panning out because the Chinese domestic market has perfectly functional Xilinx and Altera clones for a fraction of the price. Consequently, they don't care about anything else.

It irritates me to no end that Gowin won't open their bitstream format because they'd displace a bunch of the low end almost immediately.


> It irritates me to no end that Gowin won't open their bitstream format because they'd displace a bunch of the low end almost immediately.

All of their IDE/programmer/etc. binaries are basically entirely unprotected, and almost all of their chips are entirely implemented in https://github.com/YosysHQ/apicula - if other manufacturers cared to implement it, it wouldn't be hard.
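
For reference, the open flow is roughly the following (a sketch based on the apicula examples; the file names, device string, and constraint file here are illustrative, and exact flags vary by chip and toolchain version):

    # synthesize with yosys, place-and-route with nextpnr, pack with apicula
    yosys -p "synth_gowin -json blinky.json" blinky.v
    nextpnr-gowin --json blinky.json --write pnr.json \
        --device GW1N-LV1QN48C6/I5 --cst blinky.cst
    gowin_pack -d GW1N-1 -o blinky.fs pnr.json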


Support is stuck at the old levels: none of the GW5 series are implemented. This is just like how the Lattice support is similarly stuck at the iCE40/ECP5 level, which is almost a decade old.

Gowin seemingly doesn't even sell the chips to individuals. Either set up an LLC so you can request samples from a distributor, or desolder one from a Sipeed dev kit.

I could order an MOQ (minimum order quantity) of anywhere from 100-500 depending on the range. I didn't need to be an LLC, but it sure helps to actually know the lingo and understand how to deal with distributors and FAEs (field application engineers).

One thing you absolutely have to remember when it comes to distributors and FAEs is that, as an individual, you are wasting their time: talking to anyone other than you is more profitable. Nevertheless, most won't ignore you (sales is their job, after all), but you very definitely have to make their lives as easy as possible and understand that you get their time after they have serviced everybody more profitable.

Tariffs made everything miserable because the FAEs and salespeople were up to their eyeballs dealing with the daily price swings for customers with actual volume.


Also, technically the code names are only for unreleased products, so on ARK it’ll say “products formerly Ice Lake”, but Intel will continue to call them Ice Lake.

> On that note: GCC doesn't provide a nice library to give access to its internals (unlike LLVM). So we have to use libgccjit which, unlike the "jit" ("just in time", meaning compiling sub-parts of the code on the fly, only when needed for performance reasons and often used in script languages like Javascript) part in its name implies, can be used as "aot" ("ahead of time", meaning you compile everything at once, allowing you to spend more time on optimization).

Is libgccjit not “a nice library to give access to its internals?”


To use an illustrative (but inevitably flawed) metaphor: Using libgccjit for this is a bit like networking two computers via the MIDI protocol.

The MIDI protocol is pretty good for what it is designed for, and you can make it work for actual real networking, but the connections will be clunky, unergonomic, and will be missing useful features that you really want in a networking protocol.


Or, the obligatory RFC 1149 (IP over Avian Carriers).

Oh come on, SLIP over MIDI is tried and true.

Note the word order here.

Googling “slip over midi” gives a lot of fashion blogging about mini dresses and slips that one wears under them, so I’m not quite sure what you mean.

But if you mean “MIDI over SLIP”, then that is the inverse of what I am suggesting. MIDI over SLIP (and SLIP could be swapped for any TCP/IP substrate, such as Ethernet) has MIDI messages as the payload, carried via TCP/IP.

I’m talking about using MIDI messages to carry TCP/IP payloads. You can absolutely do it, but it isn’t really what the protocol is designed for.


No, I meant it exactly like that: SLIP over MIDI. I know there are plenty of MIDI over TCP/IP and UDP implementations, but that's not what I had in mind.

And what Google turns up when you enter those exact three words in a row is really none of my business.

https://en.wikipedia.org/wiki/Serial_Line_Internet_Protocol

https://en.wikipedia.org/wiki/MIDI


I used SLIP all the time back in the day, and I use MIDI all the time in my home music setup. The Wikipedia articles don't tell me anything I don't already know.

I suppose someone somewhere has done it (and I have always said that you can), but my best internet searches with a wide variety of terms don't turn up any old tutorials or products that explain how. Nor can I find anything else that uses MIDI to send TCP/IP packets over a MIDI connection. Not even a mention.

My Google-fu may be totally weak, but whatever.

I freshly, happily and totally concede that people have in the past used MIDI to send SLIP packets, and it is well understood how. Great. You are totally correct.

But all of this just proves the original point. It either predates anything on the internet today, or is so obscure that no search engine can find it. Either way, if no one uses it or even bothers to explain how, I think it is pretty fair to conclude that it is rather unergonomic and hacky, and doesn't provide all the features one really wants in a network connection.


I think you are taking this all way too seriously. Think IP over avian carriers.

You would need two MIDI connections, one for each direction.

You got whooshed pretty hard here. The post you were responding to was a joke.

I could be wrong, but my surface-level understanding is that it's more of a library version of the external API of GCC than one that gives access to the internals.

libgccjit is much higher level than what's documented in the "GCC Internals" manual.

You push branch A, then switch to branch B and start working on that. CI failed on branch A, so you stash branch B and switch back to branch A to fix it.


Thanks, that makes sense. I don't see how a worktree is more convenient in that case.

Maybe it's the kind of work I do? Either CI is failing because of something really simple, or because of something really complicated that means getting a product set up and producing debug messages. If it's a critical fix on branch A, then I'm not working on branch B; I'm testing branch A locally while CI does its thing.


Worktrees are useful particularly because they look like entirely separate projects to your IDE or other project tooling. They are more useful on larger projects with lots of daily commits. If you just use branches, then whenever you switch, in the worst case your IDE has to blow away its caches and reconstruct the project layout or rebuild the project from scratch. On large projects this takes significant time. But when you switch your IDE between two worktrees, it is just switching between two projects, each of which keeps its own project and build caches.
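
Concretely, a worktree is just a second working directory sharing the same repository, something like this (paths and branch names are illustrative):

    # check out branch A in a sibling directory, leaving branch B in place
    git worktree add ../myproject-ci-fix branch-A

    # edit, commit, and push the CI fix from ../myproject-ci-fix, then
    # clean up from the main checkout once CI is green
    git worktree remove ../myproject-ci-fix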


Ah, interesting. Our codebase is over 10 GB with about 8 years of history, but we only have 2-3 merges per week.


You may be shocked to hear that there are no seas in the Himalayas.


Well, ackshually, Himalayan salt does come from a sea (although that sea disappeared a long, long time ago), so it's not _technically_ wrong.


People were happy when Netflix was the streaming service and it cost $7.99. People will be unhappy if Netflix is the streaming service and it costs $159.99. The glory days were only possible because the streaming market didn’t matter.


> And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.

> So they're shutting down their consumer memory lines, and devoting all production to AI.

Okay this was the missing piece for me. I was wondering why AI demand, which should be mostly HBM, would have such an impact on DDR prices, which I’m quite sure are produced on separate lines. I’d appreciate a citation so I could read more.


Just like the GPUs.

NVIDIA started allocating most of the wafer capacity for 50k GPU chips. They are a business; it's a logical choice.


It's kind of a weird framing. Of course RAM companies are going to sell their limited supply to the highest bidder!


You can pass the mutex by value and it does continue to protect its value.

https://play.rust-lang.org/?version=stable&mode=debug&editio...
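
In case the playground link rots, here is a minimal sketch of the idea (names and types here are illustrative, not taken from the linked snippet):

    use std::sync::Mutex;

    // Takes ownership of the Mutex by value; the contents are
    // still only reachable by locking it.
    fn consume(m: Mutex<Vec<u32>>) -> Mutex<Vec<u32>> {
        m.lock().unwrap().push(42);
        m
    }

    fn main() {
        let m = Mutex::new(vec![1, 2, 3]);
        let m = consume(m); // moved in and moved back out, still a working Mutex
        println!("{:?}", *m.lock().unwrap()); // [1, 2, 3, 42]
    }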


Yeah, KiCad has improved immensely in the past 5 years. It still has a long way to go to really compete with Altium et al., though. The thing is, Altium is basically finished software: they keep trying to add features to it, but I'm certain that if you polled the users, the only things they really want are fewer crashes and bugs. Every year KiCad gets closer and closer.


Have they been conclusively spotted, with evidence? Sorry, I only skimmed the article. Until there is some, I’m going to keep believing it’s some sort of mass delusion, like UFO sightings. Not because I think some sort of drone attack is particularly unlikely, but because these sorts of mass delusions are evidently very common, like what happened in New Jersey.


9 September is probably where you want to start: an intentional swarm of military drones into NATO territory. Shaheds were shot down; no question that it was a Russian plan to test reactions along the perimeter.

Since then, there have been coordinated launches of screens of smaller civilian drones. I don’t think we’ll get hard numbers, but I’ve heard that NATO is more interested in ground-based detection of GPS and satellite-uplink jamming.

So the question now is: how are some civilians coordinating within NATO countries, and how are they getting drones that can jam?


Don't they show up on radar? I have no reason to doubt the detection of drones around airports and other infrastructure, especially given that they already have enforced drone bans and therefore have installations specifically designed to detect and track drones.

