
Why not use a pixel-caching solution for the dense-geometry problem described in the article? For example, 2D/UI/GUI engines don't render detailed vector shapes (like glyphs) from scratch every frame - glyphs are rasterized to pixels, the pixels are stored in a cache and reused for subsequent rendering until the zoom changes. And if you cache pixels for each 2x zoom level, you just need to downsample the next zoom level when the user changes zoom or resizes an object (which is very fast compared to rendering from scratch).
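
A minimal sketch of that 2x-level pixel cache, in case it helps - none of this is from a real engine, and `rasterize` here is just a stub standing in for a real scan-converter:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct Bitmap {
        int width = 0, height = 0;
        std::vector<uint8_t> pixels;  // 8-bit coverage/alpha
    };

    // Stub rasterizer - a real engine would scan-convert the vector outline.
    Bitmap rasterize(int shapeId, float scale) {
        int n = std::max(8, int(64 * scale));
        Bitmap b;
        b.width = b.height = n;
        b.pixels.assign(std::size_t(n) * n, uint8_t(40 + (shapeId * 37) % 200));
        return b;
    }

    // Box-filter downsample by exactly 2x - much cheaper than re-rasterizing.
    Bitmap downsample2x(const Bitmap& src) {
        Bitmap dst;
        dst.width = src.width / 2;
        dst.height = src.height / 2;
        dst.pixels.resize(std::size_t(dst.width) * dst.height);
        for (int y = 0; y < dst.height; ++y)
            for (int x = 0; x < dst.width; ++x) {
                int sum = src.pixels[(2 * y) * src.width + 2 * x]
                        + src.pixels[(2 * y) * src.width + 2 * x + 1]
                        + src.pixels[(2 * y + 1) * src.width + 2 * x]
                        + src.pixels[(2 * y + 1) * src.width + 2 * x + 1];
                dst.pixels[y * dst.width + x] = uint8_t(sum / 4);
            }
        return dst;
    }

    // Cache keyed by (shape, power-of-two zoom bucket), like a mip chain.
    std::map<std::pair<int, int>, Bitmap> cache;

    const Bitmap& getCached(int shapeId, float zoom) {
        int level = int(std::floor(std::log2(zoom)));  // 2x zoom buckets
        auto it = cache.find({shapeId, level});
        if (it != cache.end()) return it->second;
        // Prefer downsampling the next-larger cached level over re-rasterizing.
        auto above = cache.find({shapeId, level + 1});
        Bitmap bmp = (above != cache.end())
                         ? downsample2x(above->second)
                         : rasterize(shapeId, std::exp2(float(level)));
        return cache.emplace(std::make_pair(shapeId, level), std::move(bmp))
            .first->second;
    }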



In the article, I mention a few ways of dealing with it - from "old school" impostors/billboards (think precomputed sprites) and simplified representations to, finally, temporal anti-aliasing (reusing pixels in screen space). All of those work well under some strong constraints and are used today; however, all have problems, trade-offs, and artifacts - which is the whole point of the post.

Temporal AA is the most general and mature solution, but at the same time, many gamers hate it because it introduces some blurriness through imperfect resampling (and it makes a very small share of the population motion sick) - just web-search for "temporal antialiasing reddit" and see the sentiment of many users.
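
To make the resampling issue concrete, here is a minimal single-channel sketch of the usual TAA accumulation step (not the article's code - the per-pixel motion vectors and the 0.9 history weight are just typical illustrative choices):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One TAA accumulation step for a single-channel image.
    // history: accumulated result from previous frames
    // current: this frame's jittered render
    // motionX/motionY: per-pixel motion vectors in pixels
    std::vector<float> taaResolve(const std::vector<float>& history,
                                  const std::vector<float>& current,
                                  const std::vector<float>& motionX,
                                  const std::vector<float>& motionY,
                                  int w, int h) {
        std::vector<float> out(std::size_t(w) * h);
        auto tap = [&](float x, float y) {
            // Bilinear resample of the history buffer. The reprojected position
            // is rarely pixel-aligned, so every frame low-pass filters the
            // history a little - this accumulating softness is the blur that
            // players complain about.
            int x0 = std::clamp(int(std::floor(x)), 0, w - 2);
            int y0 = std::clamp(int(std::floor(y)), 0, h - 2);
            float fx = std::clamp(x - x0, 0.0f, 1.0f);
            float fy = std::clamp(y - y0, 0.0f, 1.0f);
            float a = history[y0 * w + x0], b = history[y0 * w + x0 + 1];
            float c = history[(y0 + 1) * w + x0], d = history[(y0 + 1) * w + x0 + 1];
            return (a * (1 - fx) + b * fx) * (1 - fy)
                 + (c * (1 - fx) + d * fx) * fy;
        };
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                std::size_t i = std::size_t(y) * w + x;
                // Reproject: where was this pixel last frame?
                float prev = tap(x - motionX[i], y - motionY[i]);
                // Exponential blend. Real implementations also clamp `prev`
                // against the current frame's neighborhood to reject stale
                // history (ghosting).
                out[i] = 0.9f * prev + 0.1f * current[i];
            }
        return out;
    }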


Just remembered this interesting, provocative video from 2011 - https://youtu.be/00gAbgBu8R4 If I understood it right, to achieve something like unlimited geometric detail we need to store objects in an adaptive vector form that is very fast to render at the current zoom level and needs only a little extra computation when the zoom changes slightly. And what encoding of models/geometry would be the fastest to render? From a 2D/UI-engine perspective it's just a pixel cache, and we only need to copy pixels to the viewport. And there will be a maximum zoom level beyond which it makes no sense to cache pixels, because of the memory overhead and the ability to render straight from the original vector representation in realtime.

But what is the equivalent of pixel caching in the 3D world? Because we can view objects/geometry from different sides/angles (without changing the distance from camera to object, i.e. the zoom level), we would need something like a 3D pixel cache, which sounds like a huge memory requirement. Maybe voxels or points/splats? Or maybe the same 2D pixel cache but for each of the 6 sides of an object (like the 6 faces of a cube), with some pixel interpolation for intermediate viewing angles?
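
A rough sketch of that last idea - picking precomputed face sprites by view direction and blending them (all names here are hypothetical; real impostor systems are considerably more involved):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // The 6 axis-aligned face normals of the cube cache (+X,-X,+Y,-Y,+Z,-Z).
    const Vec3 kFaceNormals[6] = {
        {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}};

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Given a normalized direction from object to camera, compute a blend
    // weight for each cached face sprite. At most 3 faces can face the
    // camera, and weights fall off as the view turns away from a face.
    void faceWeights(Vec3 viewDir, float weights[6]) {
        float sum = 0.0f;
        for (int i = 0; i < 6; ++i) {
            weights[i] = std::max(0.0f, dot(viewDir, kFaceNormals[i]));
            sum += weights[i];
        }
        for (int i = 0; i < 6; ++i) weights[i] /= sum;  // normalize to 1
    }

    int main() {
        // Camera looking at the object from a 45-degree diagonal:
        float s = 1.0f / std::sqrt(2.0f);
        float w[6];
        faceWeights({s, 0, s}, w);
        for (int i = 0; i < 6; ++i) std::printf("face %d: %.2f\n", i, w[i]);
        // Prints 0.50 for +X and +Z, 0 elsewhere - the renderer would
        // composite those two cached sprites. The blend hides some, but not
        // all, parallax; that missing parallax is exactly why impostors
        // break down up close.
    }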


Except they never really made their "unlimited detail" look anywhere near as good as the comparatively low-poly, trick-based rendering they were competing against, and definitely not as good as the high-end image-scan data being rendered by recent Unreal Engine demos. Even the highest-end Euclideon rendering demos I've seen (also image-scan-based voxel data, IIRC) look rather shoddy compared to modern AAA game engines.

Maybe it technically could push more polygons, but it looked like crap.

But yeah, I agree that their tech seems to be a clever way to index and access large amounts of point cloud data, allowing them to stream from disk just what is needed for the current view -- a clever database more or less.

But for all the claims they made about how it would revolutionise everything, their demos were pretty damn bad.
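
For the curious, the "clever database" part is usually described as an octree over the point cloud, traversed front-to-back and cut off once a node's projected size drops below a pixel. Euclideon never published their actual algorithm, so this is just the generic approach (as used by open point-cloud viewers like Potree), with made-up names:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // One node of a point-cloud octree. Each node stores a representative
    // point/color for its whole cell, so coarse levels stand in for detail
    // that would be sub-pixel anyway.
    struct OctreeNode {
        float center[3];
        float halfSize;
        uint64_t child[8];  // index into storage; 0 = no child (root is never a child)
    };

    // Stand-in for disk storage; a real system would do file/mmap reads here,
    // loading nodes only when the traversal below asks for them - that is the
    // "stream from disk just what is needed" part.
    std::vector<OctreeNode> gDisk;
    OctreeNode loadNode(uint64_t index) { return gDisk[index]; }

    // Collect the nodes needed for the current view (frustum culling omitted
    // for brevity). A node is refined only while its projected size exceeds
    // ~1 pixel, so sub-pixel detail is never even read from storage. Coarse
    // interior nodes are drawn too; their points fill holes between levels.
    void collectVisible(uint64_t index, const float camPos[3],
                        float pixelsPerUnit,  // pixels per world unit at distance 1
                        std::vector<uint64_t>& out) {
        OctreeNode node = loadNode(index);
        float dx = node.center[0] - camPos[0];
        float dy = node.center[1] - camPos[1];
        float dz = node.center[2] - camPos[2];
        float dist = std::max(std::sqrt(dx * dx + dy * dy + dz * dz), 1e-6f);
        float projected = (2.0f * node.halfSize / dist) * pixelsPerUnit;
        out.push_back(index);           // draw this node's representative point
        if (projected <= 1.0f) return;  // cell is sub-pixel: stop refining
        for (int i = 0; i < 8; ++i)
            if (node.child[i])
                collectVisible(node.child[i], camPos, pixelsPerUnit, out);
    }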



