
I think they could - in fact I'm starting a PhD in a similar area soon! It's a matter of providing a high-level enough programming language (e.g. Haskell) and a smart enough compiler that can automatically parallelise sections; with the right middleware and compiler back end it should be possible!


I buy CPU+GPU unification (in fact I highly anticipate it) and I also buy that a DSP could function as a dynamic coprocessor, as they are often programmed in C.

But my day job revolves around HDLs, and in my opinion a higher-level language isn't the answer. Fifty years from now it might be, but state-of-the-art HDL compilers just aren't good enough yet. It's like C compilers a few decades ago, when you had to insert some inline ASM in your code here and there because the compilers were still maturing.

So I guess what I'm saying is you can't target CPU, GPU, DSP, and FPGA from one compiler until we've mastered targeting the FPGA by itself.


To make a comparison for anyone who hasn't programmed FPGAs (especially on the path to etching silicon): placement is extraordinarily important. Not only can (will) you produce a highly non-optimal layout, FPGAs are not orthogonal. You'll spend a lot of time trying to route the bits that need to talk to each other through direct connections as much as possible, instead of going through a general-purpose routing line or worse.

Depending on the make and model of FPGA, you will have "large" areas that you either can't or don't want to plop logic.

You can have a pretty netlist that validates and simulates correctly (although you'll eventually end up dealing with Cadence, who seem to have the right-hand side of the bugs-per-line-of-code curve locked up), yet it still takes weeks or months of that inline-ASM-style work to make it competitive with a rack of Xeons. Past a trivial number of gates, the edit/compile/debug cycle is not quick by any means.

Dealing with that junk is why IP blocks are so attractive, but you end up on the road to structured ASICs and that just leads to misery.


Fair, I suppose - but I still think that doesn't contradict my point. I believe what we need is a sufficiently general (and smart!) language, along with sufficiently advanced and intelligent back ends; that combination should solve the problem. It will obviously take a while, but it's quite a large area of research at the moment!


That is true of Haskell at this point too. Strictness annotations and unboxed types are sometimes necessary to squeeze out maximum performance.
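For readers who haven't hit this: a minimal sketch of the strictness annotations being referred to. The bang pattern on the accumulator forces evaluation at each step, avoiding the thunk build-up a naive lazy fold would create (the function name is just for illustration):

```haskell
{-# LANGUAGE BangPatterns #-}

-- A strict left fold over a list: without the bang pattern on acc,
-- laziness would build a chain of unevaluated (+) thunks and blow
-- the stack on large inputs.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict [1..1000000])
```

GHC's `-O2` often infers this strictness itself, but the annotation makes the intent explicit rather than relying on the optimiser.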

I think the idea would be to keep those backdoors (the inline-ASM-style escape hatches); the question is how to keep them usable in that case.


In theory it should be possible, but doing so in the general case is a ridiculously hard problem, as sliverstorm hints at.

If I was back at school, this is something I would definitely spend time on.

Some possible ideas:

- Writing different LLVM backends to target the various hardware. You still would need some runtime to schedule/organise the tasks.

- A language with a very powerful type system (i.e. fully dependent) which allows a high level of programmer intent to be extracted by a compiler. (e.g. "we are only doing multiply-adds on a specific range of doubles, so create a custom FPGA configuration for just that")

- Just writing a bunch of libraries which provide nice APIs for the various functions, kinda like NumPy.
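To make the second idea concrete: Haskell isn't fully dependently typed, but GHC's DataKinds and GADTs can encode some of that programmer intent, e.g. a multiply-add whose operand lengths are part of the type, so a back end could in principle read the sizes straight off the types. A hedged sketch (the names `Vec` and `vmadd` are made up for illustration):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals for indexing vector lengths.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Element-wise multiply-add, c + a*b. Applying it to vectors of
-- different lengths is a compile-time error, not a runtime one.
vmadd :: Num a => Vec n a -> Vec n a -> Vec n a -> Vec n a
vmadd VNil VNil VNil = VNil
vmadd (VCons a as) (VCons b bs) (VCons c cs) =
  VCons (c + a * b) (vmadd as bs cs)

toList :: Vec n a -> [a]
toList VNil         = []
toList (VCons x xs) = x : toList xs

main :: IO ()
main = print (toList (vmadd v v v))
  where v = VCons 1 (VCons 2 (VCons (3 :: Double) VNil))
```

This is only a shadow of real dependent types (as in Idris), but it shows the direction: the more intent lives in the types, the more a smart back end has to work with.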


I don't think Haskell is suitable. In fact, there are very few languages that are. The reason is that you need dependent types in order to compete with VHDL when it comes to type safety. I only know of two general-purpose languages that support them: Idris and Nimrod (kind of; it's coming along).

Your work sounds interesting. Contact me at skyfex@gmail.com if you want to exchange more ideas.


I agree that Haskell isn't smart enough, but I think it's a step in the right direction, in that a lot of its constructs seem geared toward abstracting away the details of how to perform a computation and toward describing what the computation should produce.


Maybe we can open some group for this discussion. Anyway, my email is in my HN profile if you want to exchange ideas as well.


Chalmers University has an HDL built as Haskell modules called Lava: http://www.cse.chalmers.se/edu/year/2012/course/_courses_201... I don't think it's used outside of the university, though.
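For anyone unfamiliar with the approach: the core Lava idea is that a circuit is just an ordinary Haskell value, so you can test it with plain Haskell and (in Lava itself) also generate VHDL from the same description. This is not Lava's actual API, just a self-contained sketch of the style with signals modelled as `Bool`:

```haskell
-- A half adder as a plain Haskell function over Bool "signals":
-- sum is XOR, carry is AND.
halfAdder :: Bool -> Bool -> (Bool, Bool)   -- (sum, carry)
halfAdder a b = (a /= b, a && b)

-- A full adder built by composing two half adders and an OR gate,
-- exactly as you would wire it up in hardware.
fullAdder :: Bool -> Bool -> Bool -> (Bool, Bool)   -- (sum, carry out)
fullAdder a b cin =
  let (s1, c1) = halfAdder a b
      (s2, c2) = halfAdder s1 cin
  in (s2, c1 || c2)

main :: IO ()
main = print (fullAdder True True False)
```

Real Lava uses an abstract `Signal` type instead of `Bool`, which is what lets the same function be simulated, fed to a theorem prover, or compiled to a netlist.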


Berkeley has Chisel, where you write something like C + Scala and get Verilog out for hardware instantiation and C++ for verification: https://chisel.eecs.berkeley.edu/


It actually is just Scala. The whole thing is a Scala DSL.


I thought you could write some modules in C?


I suppose you could, in practice, write modules for simulation in C or C++ and hook it up to the C++ code that the compiler generates. I doubt this is very useful in practice since you can't generate any Verilog from it.



