Hacker News | nobbis's comments

ROS 1 had schema evolution (rosbag migration) 15 years ago


Wait, you renamed Forge (https://forge.dev), released last week by World Labs, a startup that raised $230M.

Is this "I worked with some friends and I hope you find useful" or is it "So proud of the World Labs team that made this happen, and we are making this open source for everyone" (CEO, World Labs)?

https://x.com/drfeifei/status/1929617676810572234


Yes. I collaborated with one of the devs at World Labs on this. The goal is to explore new rendering techniques and popularize 3D Gaussian splatting. There's no product associated with it.


Understood. Thanks for the clarification.


We renamed due to a name collision with another renderer / tool.


> The big difference of course, based on reading this, is that NeRF models the whole light transmission function ("radiance field") whereas this seems to model only the boundary conditions (what the light "hits").

The title of the paper is "3D Gaussian Splatting for Real-Time Radiance Field Rendering". Each rendered pixel weights the contribution of unbounded view-dependent Gaussians. So, no, that's not a difference.
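For anyone curious, that per-pixel weighting is essentially front-to-back alpha compositing over depth-sorted splats. A toy sketch (plain Python, single color channel; the function and values are illustrative, not the paper's CUDA rasterizer):

```python
def composite(splats):
    """splats: list of (color, alpha) pairs, sorted near-to-far.
    Returns the composited pixel color."""
    color = 0.0
    transmittance = 1.0  # fraction of light still unblocked
    for c, a in splats:
        color += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early exit once the pixel is nearly opaque
            break
    return color

# An opaque near splat hides everything behind it:
print(composite([(1.0, 1.0), (0.0, 1.0)]))  # 1.0
```

The view dependence comes from evaluating each Gaussian's color (spherical harmonics in the paper) along the camera ray before compositing.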


Ah gotcha, thanks for pointing that out.


And Matterport will never see them coming, as their employees are banned from using Luma:

> Competitors: No employee, independent contractor, agent, or affiliate of any competing 3-D capture company is permitted to view, access, or use any portion of the Service without express written permission from Luma AI. By viewing, using, or accessing the Service, you represent and warrant that you are not a competitor of Luma AI or any of its affiliates, or acting on behalf of a competitor of Luma AI in using or accessing the Service.

https://lumalabs.ai/legal/tos

(Disclaimer: another banned competitor)


IANAL, but this seems easily circumvented by setting up an independent entity. (It's not an affiliate if it's not owned by the competitor.)


Is this even legal?


IANAL but I assume so. "Employee of a competing company" (whatever that means) isn't a protected class.


Rising rates don't make the dollar stronger. There's no empirical relationship between interest rates and exchange rates.



> Rising rates don't make the dollar stronger. There's no empirical relationship between interest rates and exchange rates.

I think this is the first time I've ever heard someone try to claim that printing money does not, ceteris paribus, impact the exchange rate.


Interest rates are actually the predominant driver of demand for a currency!



Key step in generating 3D – ask Stable Diffusion to score views from different angles:

  for d in ['front', 'side', 'back', 'side', 'overhead', 'bottom']:
    text = f"{ref_text}, {d} view"
https://github.com/ashawkey/stable-dreamfusion/blob/0cb8c0e0...


I'm modestly surprised that those few angles give enough data to build a full 3D render, but I guess I shouldn't be, since that tech has been in high demand and well understood for years (those kinds of front-cut / side-cut images are what 3D artists use for their initial prototypes when working from real-life models).


DreamFusion doesn't directly build a 3D model from those generated images. It starts with a completely random 3D voxel model, renders it from 6 different angles, then asks Stable Diffusion how plausible it is as an image of "X, side view".

It then sprinkles some noise on the rendering, makes Stable Diffusion improve it a little, then adjusts the voxels to reproduce that improved image (using differentiable rendering).

Rinse and repeat for hours.
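The loop fits in a few lines. A toy sketch in Python (every function and value here is a stand-in I made up; a real run swaps in a differentiable NeRF/voxel renderer and Stable Diffusion's denoiser):

```python
import random

def render(params, view):
    # stand-in for a differentiable render of the 3D model from `view`
    return [p * 0.5 for p in params]

def denoise(image, prompt):
    # stand-in for one Stable Diffusion denoising step guided by `prompt`
    return [x * 0.9 for x in image]

views = ['front', 'side', 'back', 'side', 'overhead', 'bottom']
params = [random.random() for _ in range(8)]  # random initial 3D model

for step in range(1000):
    view = views[step % len(views)]
    image = render(params, view)
    noisy = [x + random.gauss(0.0, 0.1) for x in image]       # add noise
    improved = denoise(noisy, f"a chair, {view} view")        # SD cleans it up
    # nudge the model so its rendering moves toward the improved image
    params = [p + 0.01 * (t - i) for p, t, i in zip(params, improved, image)]
```

In the real thing the "nudge" is a gradient step through the renderer, and "hours" means tens of thousands of these iterations.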


Thank you for the clarification; I hadn't grokked the algorithm yet.

That's interesting for a couple of reasons. I can see why that works. It also implies that for closed objects, the voxel data in the interior (where no camera can see it) will be complete noise, as there's no signal to pick any particular color, or even whether a voxel should exist at all.


Yes, although not complete noise – probably just empty. I haven't checked, but I assume there's regularization of the NeRF parameters.


    text = f"{ref_text}, front cutaway drawing"
Maybe?


I don't think NeRFs require too many images to produce impressive results.


Given the way the language model works, these words could have multiple meanings. I wonder if training a form of textual inversion to represent these concepts more directly might improve the results. You could even try teaching it to represent finer-grained degree adjustments.


We rely on Wasm for Metascan (https://metascan.ai/explore) which allows us to use the same rendering code on iOS, the desktop, and the web.

Access to multithreading is limited, since SharedArrayBuffer requires cross-origin isolation to mitigate Spectre. Apart from that, it works great.
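For reference, cross-origin isolation means serving the page with these two response headers (the standard values; a sketch of the server config, not Metascan's actual setup):

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```

The catch is that `require-corp` then forces every cross-origin subresource (CDN scripts, embeds, etc.) to opt in via CORS or `Cross-Origin-Resource-Policy`, which is why many sites can't just flip it on.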


Explains the huge remodel of her property here in South Lake.


iPhone/iPad's LiDAR Scanner.


6D.ai wasn’t released until 2018. You may be thinking of Abound, which you wanted to license in March 2017. I don’t know of anyone else that had real-time meshing in 2017.

