I have the same issue. For many of the questions my answer is "all of the above, but A in context A, B in context B, etc.". Many are also not mutually exclusive.
Take this example: "When debugging I typically:"
> Write tests to isolate the problem
In the case of math functions, or more primitive building blocks, writing tests can help ensure correctness of the underlying functions, to exclude them from the problem search.
> Reason about the code logically first
This is always useful.
> Use a debugger to step through code systematically
Useful when dealing with a larger codebase and the control flow is hard to follow. The call stack can give quick guidance over trying to manually decipher the control flow.
> Add print statements to understand data flow
Useful when debugging continuous data streams or events, e.g. mouse input, where you don't want to interrupt the user interaction that needs to be debugged.
> By forcing you to make a decision without context.
Not the OP, but what would be the point of that? In any practical scenario there is always context, isn't there? I guess I don't quite get what we are trying to measure here.
It is not primarily a matter of difficulty. The problem with choices like these is that they seem to be predicated on a rather simplistic, formulaic view of software development - a view that experienced developers will recognize as flawed.
In this case, one tacit assumption is that a given developer will typically adopt just one of these approaches. Another is that they can meaningfully and objectively be ordered along either of the axes purportedly being measured.
I'm building my own polygon modeling app for iOS as a side-project [0], so I feel a bit conflicted.
Getting fully featured Blender on the iPad will be amazing for things like Grease Pencil (drawing in 3D) or texture painting, but on the other hand my side-project just became a little bit less relevant.
I'll have to take a look at whether I can make some contributions to Blender.
There’s also the excellent Nomad Sculpt, which, while not a mesh editor, is an incredibly performant digital sculpting app. Compared to Blender’s sculpt workflow it maintains a higher frame rate and a smaller memory footprint at a higher vertex count. Of course it’s much more limited than Blender, but its sculpting workflow is much better, and then you can export to Blender.
There is room for more than one modeling app on iOS as long as you can offer something that Blender doesn’t, even if it’s just better performance.
Just for reference, I’m running 4.5 with the Vulkan backend and sculpting a 6.3 million vertex object completely smoothly, and Blender is using 4 GB of RAM and 2.5 GB of video RAM. Granted, my system has a 9800X3D and a 5070 Ti.
One thing Blender lacks is easy 3D texture painting. As far as I know, there isn't a decent 3D texture painting iPad app either. Definitely a gap in the market.
Yes… I have seen this before and played with it. This was a good attempt at emulating the behavior of an app like Substance Painter. However… the core problem is that in order to paint textures you need very complex and deep functionality. When using Substance, I have to variously consider: the texture channels (e.g. color, roughness etc), the many layers that may serve these channels, the baked texture channels (e.g. ambient occlusion, normals etc), the many blend modes, masks and adjustments that serve and interconnect all these.
I doubt that anything other than native blender functionality could serve all this with any elegance.
I teach Substance Painter and it does a good job of hiding all this complexity until you need it. It is very common for students to underestimate it as an app… to view it as just Photoshop for 3D.
Yes. And you can paint textures with Procreate too. But like I said, not as advanced as Substance Painter and the others. It might still be enough for some use cases.
To cheer you up: in my experience over the lifetime of the App Store, anytime something like this comes to the Store it's a big win for independent side projects.
Your project might well be way cheaper and solve a specific problem, so it would benefit from the awareness that Blender's large marketing footprint will inevitably leave behind ;)
Keep building!
I'm currently in contact with the Blender team to see where I could contribute, but you're right that there is space for multiple projects.
I think I'm going to focus more on CAD / architectural use cases, instead of attempting feature parity with Blender's main selling points (rendering, hard-surface modeling, sculpting).
Having been in the app development game for a long time, I know the feeling but have also learned to realize that this is actually not a negative; it means there's a strong signal that there's a desire for 3D apps on these touch devices. Competition can be really good. And your app has the ability to be more focused vs a legacy app that has to please a very large user-base who've come to expect it to behave a certain way.
Well, funny enough I'm actually working on a visionOS sculpting app at the moment! It's metaballs based, and kind of a different vibe / niche than what you're going for.
E.g., it's more like sketching than asset production, and being metaballs-focused it's obviously going to create a very different topology than NURBS, etc.
My first app was Polychord for iPad which is a music making app that came out in 2010.
These days Vibescape for visionOS is a big one for me, and there are others. I also worked in the corporate world for about a decade working on apps at Google, etc.
On the contrary, your project just became even more relevant. Blender badly needs an alternative/competitor. Everybody loses if a single project dominates.
Blender has a ton of competitors. They're all commercial and have corporate backing. If anything, Blender is the "little guy". It is utterly amazing what Ton has managed to do with Blender.
I would think Maya is the most influential of all of them. Blender is popular among hobbyists and people who aren't able to shell out a few thousand every year, but Maya dominates in the commercial world. Plus, many animators are using Unreal Engine just for traditional animation now.
Blender is absolutely an underdog in commercial studios. It is used, but it's the minority tool in professional settings. There are still several areas where Blender is lacking compared to Maya or 3ds Max.
Blender is nothing like Chromium. It's not made by a big company, and it sprang up in an extremely for-profit niche (and it has something like four serious competitors that are all actively in use).
I didn't know about Feather 3D, but it looks super aesthetically pleasing. I'll have to try it out.
I tried uMake a while back, but found the 3D viewport navigation a bit hard to use, and would often find out I had been drawing on the wrong plane after orbiting the camera.
After using something like Tilt Brush in VR, it's hard to go back to a 2D screen that doesn't instantly communicate the 3D location of the brush strokes you're placing.
I think this is indeed the advantage of this paper taking C++ as the language to compile to SPIR-V.
Game engines and other large codebases with graphics logic are commonly written in C++, and only having to learn and write a single language is great.
Right now, shaders -- if you're not working with an off-the-shelf graphics abstraction -- are kind of annoying to work with. Cross-compiling to GLSL, HLSL and Metal Shading Language is cumbersome. Almost all game engines create their own shading language and code-generate/compile it to the respective shading language for each target platform.
This situation could be improved if GPUs were more standardized and didn't have proprietary instruction sets. Similar to how CPUs mainly have x86_64 and ARM64 as the dominant instruction sets.
What is the main difference in shading languages vs. programming languages such as C++?
Metal Shading Language for example uses a subset of C++, and HLSL and GLSL are C-like languages.
In my view, it is nice to have an equivalent syntax and language for both CPU and GPU code, even though you still want to write simple code for GPU compute kernels and shaders.
The language extensions for GPU semantics and code distribution that C and C++ require.
The difference is that shader languages have one specific set of semantics, while C and C++ still have to worry about ISO standard semantics, coupled with the extensions and the broken expectations when the code takes on execution semantics different from what a regular C or C++ developer would expect.
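To make the "equivalent syntax for CPU and GPU" point concrete, here's a minimal sketch (the `lambert` helper is hypothetical): a pure function written in plain C++, with no GPU extensions, can be compiled verbatim by both the host toolchain and a C++-subset shading language like MSL, while anything touching GPU-specific semantics has to stay on the shader side.

```cpp
#include <cassert>

// shared_lighting.h (hypothetical): plain C++ with no GPU language
// extensions, so both the host compiler and a C++-subset shading
// language such as Metal Shading Language could compile it as-is.
inline float lambert(float nx, float ny, float nz,
                     float lx, float ly, float lz) {
    // Clamped N.L diffuse term.
    float d = nx * lx + ny * ly + nz * lz;
    return d > 0.0f ? d : 0.0f;
}

// What the shader side adds on top are exactly the extensions the
// comments above mention: address-space qualifiers, [[attributes]],
// thread indexing. Those have no CPU equivalent, which is where the
// two worlds diverge.
```

The same header can then be unit-tested on the CPU, which is one practical payoff of keeping shared math free of shading-language extensions.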
For me, Docker Desktop is simply an easy way to launch the Docker daemon and inspect some created images and their corresponding logs. Other than that, the cli suffices.
We had to remove Docker Desktop at my job (I think they started asking for money?) and moved to Lima/Colima.
If this project means one less program to configure to get my docker containers running then I'm all for it.
Docker Desktop for commercial use requires a license, and they don't release binaries for Mac other than Desktop. Seems like their one route to monetization. I use Docker for literally only one build that doesn't work on native macOS, so I love the idea of switching to a simple standalone CLI.
Here’s a quad tree:
11
11
—
0110
0110
1111
0110
—
00000000
00011000
00011000
00111100
01111110
11111111
00011000
00011000
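The subdivision idea behind those grids can be sketched in a few lines of C++ (names like `build` and `countLeaves` are mine, not from any particular library): recursively split the bitmap into quadrants, stopping wherever a square is uniform, so solid regions collapse into single leaves.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A node is either a uniform leaf (value 0 or 1) or four children
// covering the NW, NE, SW and SE quadrants.
struct Node {
    bool leaf = false;
    int value = 0;                   // valid when leaf
    std::unique_ptr<Node> child[4];  // valid when !leaf
};

// Recursively build a quadtree over the size x size square of `grid`
// whose top-left corner is (x0, y0).
std::unique_ptr<Node> build(const std::vector<std::string>& grid,
                            int x0, int y0, int size) {
    // Is the whole square uniform?
    char first = grid[y0][x0];
    bool uniform = true;
    for (int y = y0; y < y0 + size && uniform; ++y)
        for (int x = x0; x < x0 + size; ++x)
            if (grid[y][x] != first) { uniform = false; break; }

    auto node = std::make_unique<Node>();
    if (uniform || size == 1) {
        node->leaf = true;
        node->value = first - '0';
    } else {
        int h = size / 2;
        node->child[0] = build(grid, x0,     y0,     h); // NW
        node->child[1] = build(grid, x0 + h, y0,     h); // NE
        node->child[2] = build(grid, x0,     y0 + h, h); // SW
        node->child[3] = build(grid, x0 + h, y0 + h, h); // SE
    }
    return node;
}

int countLeaves(const Node* n) {
    if (n->leaf) return 1;
    int total = 0;
    for (int i = 0; i < 4; ++i) total += countLeaves(n->child[i].get());
    return total;
}

// The 8x8 grid from the example above.
const std::vector<std::string> kGrid = {
    "00000000", "00011000", "00011000", "00111100",
    "01111110", "11111111", "00011000", "00011000"};
```

Building over `kGrid` gives a tree with 40 leaves instead of 64 cells; the payoff grows quickly on images with large uniform regions.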