Prisma is an evolution of Graphcool Framework. Graphcool Framework is a complete backend-as-a-service that enables you to create a database, configure permissions, and write business logic that is deployed to serverless functions.
In contrast, Prisma focuses exclusively on the database layer and brings many improvements over Graphcool.
We have created some nifty open-source libraries that bring much of the out-of-the-box experience of Graphcool Framework to more traditional GraphQL backend development with Prisma, including GraphQL Bindings, which provide smart schema stitching and auto-completion in your editor, as seen in the video on https://www.prismagraphql.com/
Prisma incorporates two years of learnings from operating Graphcool at scale and writing big applications with GraphQL. I'm super excited to be part of the community taking shape around GraphQL and happy to answer any questions about GraphQL in general (especially from the backend perspective) or Prisma specifically.
Disclaimer: I'm not from Graphcool or Prisma, but I'm an avid user of both.
Prisma is the unbundled version of Graphcool.
Graphcool's main selling point was that everything was hosted: "The Parse for GraphQL".
Prisma's main selling point is that you can host your own database wherever you want, and Prisma generates the schema and type definitions (full CRUD operations) for you. Compared to Graphcool, you get one endpoint instead of two (the Relay and Simple API endpoints are unified in Prisma).
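To make the "generated CRUD API" part concrete, here is a rough sketch of talking to that single generated endpoint with prisma-binding (Prisma 1 era). The endpoint URL, secret, and the `User` type with its `name` field are made up for illustration:

```ts
import { Prisma } from 'prisma-binding'

// The binding mirrors the CRUD schema Prisma generates from your datamodel,
// so db.query / db.mutation expose operations like `users` and `createUser`.
const db = new Prisma({
  typeDefs: 'src/generated/prisma.graphql',        // the generated Prisma schema
  endpoint: 'https://example.com/my-service/dev',  // the single Prisma endpoint (made up)
  secret: 'mysecret',                              // service secret (made up)
})

async function demo() {
  // Both operations below are auto-generated from a hypothetical `User` type.
  const created = await db.mutation.createUser({ data: { name: 'Ada' } }, '{ id name }')
  const users = await db.query.users({ where: { name_contains: 'Ada' } }, '{ id name }')
  console.log(created, users)
}

demo().catch(console.error)
```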
Now you can either connect your client directly to the Prisma endpoint or build a GraphQL server (with graphql-yoga or apollo-server) between your client and the database.
All resolvers (auth, file hosting, etc.) are written at this GraphQL server level.
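Here is a minimal sketch of that "GraphQL server in between" setup using graphql-yoga and prisma-binding. The application schema, the `me` field, and the header-based auth check are hypothetical; the point is just to show where auth-style logic lives before the resolver delegates to the generated Prisma CRUD API:

```ts
import { GraphQLServer } from 'graphql-yoga'
import { Prisma } from 'prisma-binding'

// Application-level schema exposed to clients (hypothetical).
const typeDefs = `
  type Query {
    me: User
  }
  type User {
    id: ID!
    name: String!
  }
`

// Hypothetical auth helper: a real app would verify a JWT instead of trusting a header.
function getUserId(request: any): string | null {
  const id = request.headers['x-user-id']
  return typeof id === 'string' ? id : null
}

const resolvers = {
  Query: {
    // Auth (and any other business logic) lives here; the resolver then
    // delegates to the Prisma-generated CRUD API via the binding in the context.
    me: (_parent: any, _args: any, ctx: any, info: any) => {
      const userId = getUserId(ctx.request)
      if (!userId) throw new Error('Not authenticated')
      return ctx.db.query.user({ where: { id: userId } }, info)
    },
  },
}

const server = new GraphQLServer({
  typeDefs,
  resolvers,
  context: (req: any) => ({
    ...req,
    // Binding pointing at the single Prisma endpoint (URL made up).
    db: new Prisma({
      typeDefs: 'src/generated/prisma.graphql',
      endpoint: 'https://example.com/my-service/dev',
    }),
  }),
})

server.start(() => console.log('GraphQL server running on http://localhost:4000'))
```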
Prisma is a lot more flexible, and now bigger enterprises with legacy databases can adopt it more easily.
Hope that answers your question. Again, I don't work there but really like their product.
As lots of people have already pointed out, there is more going on here than just a smart upsampling technique.
See section 4 of the paper: they train on a set of 3D models from the ShapeNetCore dataset, from which they generate sample inputs (renderings of each model with randomized viewpoint and lighting) and corresponding target outputs (the voxelized model).
They train specialized networks for different object classes (aeroplanes, chairs, and cars), so reconstructing all classes with a single network probably still has some issues.
An interesting point about their coarse-to-fine progression that the article omits: they use the same trick for training the network. They first train it to predict the coarse voxels, and once those work they move on to predicting the next level of detail.