AI-assisted coding is not a black box in the way that managing an engineering team of humans is. You see the model "thinking", you see diffs being created, and occasionally you intervene to keep things on track. If you're leveraging AI professionally, any coding has been preceded by planning (the breadth and depth of which scale with the task) and test suites.
Xcode is using the Claude Agent SDK, which means that you "get the full power of Claude Code directly in Xcode—including subagents, background tasks, and plugins—all without leaving the IDE¹". I assume that means iOS development plug-ins like Axiom² should work as well.
That's progress over where it was a year ago. But the almost complete absence of packages that run on Windows makes the progress so far more of a curiosity than a usable option, alas. I'd use Swift in a heartbeat if it had even a semblance of a decent ecosystem on Windows.
The only framework developed in Objective-C after Swift was introduced is Metal; everything else is mostly maintenance and incremental improvements.
> Aperture used to handle it pretty well, but Apple dropped it.
If you still miss it, note that Nitro (macOS, iPad, iPhone) is Aperture's spiritual successor, created by its former Sr. Director of Engineering. https://www.gentlemencoders.com/nitro-for-macos/
> IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust.
I don't understand this claim, given the breadth and depth of SQLite's public domain TCL Tests. Can someone explain to me how this isn't pure FUD?
"There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed." - https://sqlite.org/testing.html
The test suite that the actual SQLite developers use to develop SQLite is not open-source. 51445 open-source test cases is a big number but doesn't really mean much, particularly given that evidently the SQLite developers themselves don't consider it enough to provide adequate coverage.
SQLite's test suite is infamously gigantic. It has two parts: the public TCL tests you're referencing, and a much larger proprietary test suite that's 100x bigger and covers all the edge cases that actually matter in production. The public tests are tiny compared to what SQLite actually runs internally.
It allows the code to be fully public domain, so you can use it anywhere, while strongly discouraging random people from forking it, patching it, etc. Even so, the tests most applicable to verifying that SQLite has been built correctly on a new compiler/architecture/environment are open source (this is great!), while those that verify SQLite has been implemented correctly are proprietary (you only need the latter if you want to extend SQLite's functionality to do something different).
This enables a business model: the authors can sell contracted support and keep SQLite as a product/brand without having to compete with an army of consultants making money off their product, or with startups wanting to fork it, rename it, and sell it to you, etc.
It's pretty smart and has, for a quarter century, resulted in a high quality piece of software that is sustainable to produce and maintain.
That’s like if I gave you half the dictionary and then said it’s ironic that if there really weren’t any letters after “M” you wouldn’t be complaining.
The people developing exploits have an obvious way to recoup their token investment. How do the open source maintainers recoup their costs? There's a huge disparity here.