I've loved these kinds of demos since I first discovered them. For those not in the know, this is _the_ site for finding them: http://www.pouet.net/
Though it didn't end up involving any procedural generation, I was tasked with cramming an OSD into an FPGA that was already using the majority of its resources. The device was processing an HDMI video stream and needed to display a menu and various other information to the user, for control and settings. All this at up to 1080p60. I ended up using about 16 KB of on-chip SRAM, if I recall correctly, and designed an old-school-inspired GPU à la the NES/SNES, with perfect alpha blending even! It was palette-based, of course, and the memory was used to store sprites and a "command buffer". While pixels fly through the GPU, it reads the command buffer sequentially, which tells it which sprites to draw, in order. Space between sprites is handled by rendering a fully transparent 1x1 sprite, repeated as many times as needed. I was able to draw the framing for the menus and displays with fancy drop-shadows, logos, text, etc. It worked quite well! The only major limitation was my inability to handle languages like Chinese, simply because there wasn't enough room for all the necessary sprites.
While the FPGA handled real-time, on-the-fly rendering, the on-board MCU was the one uploading sprites and configuring the command buffer over SPI. I had to build up all the software to not only drive the GPU but also handle the OSD. If you think coding up GUIs on a desktop is bad, just wait until you try to design a GUI system completely from scratch on an MCU!
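To give a flavor of what the MCU side looked like: a minimal sketch of the "sprites plus repeat counts" command-buffer idea described above. The entry format, names, and 8-bit fields here are my own invention for illustration, not the actual design.

```c
#include <stdint.h>

/* Hypothetical command-buffer entry: a sprite index plus a repeat count.
   Sprite 0 is the fully transparent 1x1 sprite used to pad the gaps
   between visible sprites on a scanline. */
typedef struct {
    uint8_t sprite_id;   /* index into sprite SRAM; 0 = transparent 1x1 */
    uint8_t repeat;      /* how many times to draw it back-to-back */
} cmd_t;

enum { SPRITE_BLANK = 0 };

/* Build command entries that place `glyph` at horizontal pixel offset x:
   first pad with transparent 1x1 sprites, then emit the glyph itself.
   Returns the number of entries written into buf (capacity n). */
static int emit_glyph(cmd_t *buf, int n, uint8_t glyph, int x)
{
    int i = 0;
    while (x > 0 && i < n) {            /* pad with transparent pixels */
        int run = x > 255 ? 255 : x;    /* repeat field is only 8 bits */
        buf[i++] = (cmd_t){ SPRITE_BLANK, (uint8_t)run };
        x -= run;
    }
    if (i < n)
        buf[i++] = (cmd_t){ glyph, 1 }; /* then the glyph itself */
    return i;
}
```

The MCU would assemble a buffer like this and ship it over SPI; the GPU just walks it in order while the pixel stream passes through, so no framebuffer is needed.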
I can only imagine what demoscene artists would be able to accomplish if they could design their own GPUs on an FPGA!
Sounds awesome! When I was in school I made a "Game Boy with vector graphics" using an FPGA and a microcontroller. I synthesized a GPU and display driver on the FPGA and sent vector drawing commands from the microcontroller which ran a simple game. It looked like this:
The eye was pretty interesting. What are these primitives? smoothstep() is apparently "Hermite interpolation", but what about colsca() (color scale?) and collerp() (color linear interpolation?)?
I did that. It's a fairly direct conversion from the paper.
I'm guessing SmoothStep got converted for size or performance reasons, but it doesn't really do the same thing.
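For reference, here's standard smoothstep next to a linear clamp, which is the kind of cheap stand-in I'd guess it got converted to. The names and the specific substitution are my guesses, not taken from the code in question.

```c
/* GLSL-style smoothstep: clamp then cubic Hermite interpolation.
   Has zero slope at both edges, so transitions ease in and out. */
static float smoothstep_ref(float e0, float e1, float x)
{
    float t = (x - e0) / (e1 - e0);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);
}

/* A linear clamp ("linstep"): cheaper, matches smoothstep only at the
   endpoints. Swapping one for the other changes the in-between values,
   which is why the result "doesn't really do the same thing". */
static float linstep(float e0, float e1, float x)
{
    float t = (x - e0) / (e1 - e0);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t;
}
```

At the midpoint both return 0.5, but at t = 0.25 smoothstep gives 0.15625 versus linstep's 0.25, so any procedural texture built on it shifts visibly.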
One of the tricky bits of procedural generation like this is that bugs get entrenched easily, because 'fixing' things can dramatically change the generated result. Perhaps that is what happened in this instance.
If this is from 2008, does anybody know what's up with one of the last slides, which states "There was an interesting procedural full 3d image in this slide. After some explanations on it, speech attendees were flashed and their short term visual memory erased."?
It could also be a reference to the 'visual knock-out capsules' from Alfred Bester's SF novel 'The Demolished Man':
"They were cubes of copper, half the size of fulminating caps, but twice as deadly. When they were broken open, they erupted a dazzling blue flare that ionized the Rhodopsin - the visual purple in the retina of the eye - blinding the victim and abolishing his perception of time and space."