Objective Next (nearthespeedoflight.com)
61 points by ImOssir on March 21, 2014 | 76 comments


The arguments in favor of considering an interpreted language as a suitable alternative are not very convincing.

Slower execution is indicative of waste. When it comes to interpreted languages, even those that are very well implemented, any slowness is directly related to energy loss. This is very problematic when it comes to mobile devices, even modern ones, as this energy is generally coming from a rather limited source (the device's battery).

While this may not be a problem if one or two apps are written using a wasteful interpreted language, it can become a much bigger issue when the entire ecosystem is implemented in such a manner. It'd be irresponsible to unnecessarily reduce the battery life of the devices of thousands or even millions of people.

And while many mobile devices today probably are more powerful than desktops from a decade ago, I don't think it's correct to say that those desktops "ran interpreted apps just fine". They didn't. There's a reason why Java has a bad reputation for performance problems, even today: many desktop apps written in Java performed horribly on computers of that era. And Java was perhaps the best-performing of the interpreted or quasi-interpreted languages.


I think his point is not that an interpreted language is the right thing. His point is that the assumption that it must be a compiled language is the wrong thing - that we need to re-examine way more of our assumptions than we usually examine when defining the "next" language.


Author here: you are correct.


You can go beyond even this assumption: is the Von Neumann model of programming even necessary? Of course, dataflow languages have been around for years, but there are other models out there that might work better; e.g. see Subtext [1].

I'm a professional PL designer also, though working under the guise of a researcher. Those of us who are disgruntled are trying to start a support group :)

[1] http://www.subtext-lang.org/


Sean, I’ve read some of your research and it’s great! I agree completely. Hell, even the concept of “to create software means some set of sequential steps” is presuming quite a bit. Although I don’t have too much experience creating software otherwise, I at least acknowledge we really barely have any concept of what we’re doing and what’s possible.


Most of the time it's not the CPU that is using most of the energy. It's the mobile or wireless connection, the GPS and the screen.


I think that hardware factors like those are generally irrelevant when it comes to purely software issues, like we're discussing here. In this case, we're talking about an energy consumption penalty imposed solely at the software layer.

Aside from arguments about the screen perhaps not needing to be used as long when the software completes its operations faster, I think it's probably safe to say that such hardware-specific energy consumption due to the factors you mention would be constant in both the case of compiled applications and the case of interpreted applications.


>In this case, we're talking about an energy consumption penalty imposed solely at the software layer.

The point is that we may be wasting time talking about a negligible amount of the total battery consumption.


Good point. A better argument against the interpreted approach is that interpreted languages tend, to a larger degree, to be rather dynamic. With a platform like iOS, where release cycles are fairly lengthy (not that this is really warranted IMO), a static language is much more sensible. It's obviously a lot better to find as many bugs as possible before shipping, and I would argue that this is much easier in stricter static languages. Frontend JavaScript, for example, is fine being dynamic and interpreted because it's easy to hotfix issues.


I don't see a reason why a modern language, static or dynamic, could not have both a compiler and an interpreter. Most Lisps have both, Haskell has both. Also, most interpreted languages actually have a compile phase before the actual interpreter.
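This is easy to observe in practice. A small sketch (using CPython purely as an example implementation): even the archetypal "interpreted" language compiles source to bytecode before its evaluation loop runs:

```python
import dis

# CPython compiles source text to a bytecode object before the
# interpreter loop ever executes it.
code = compile("x = 1 + 2", "<example>", "exec")

# Disassemble to see the compiled instructions.
dis.dis(code)

# The compiler even constant-folds 1 + 2, so 3 lands in the constants table.
print(3 in code.co_consts)  # True
```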

I do agree with you that any platform should have a statically typed language as a first class citizen.


What would be the use case for the interpreted version? I would assume it would be usable for the development process, surely running interpreted code on devices would be a total waste if compiled code is available?


During development is probably the best use case I can think of. With editor integration, an interpreter is a tremendous help when coding. At least Lisps, Ruby, Python and Haskell have this kind of "development environment" to some degree. I think GHCi even doubles as a debugger.
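As a small illustration of the machinery such environments build on (Python here, purely as an example): the stdlib exposes the incremental compiler a REPL uses to decide whether input is complete yet:

```python
import code

# A REPL keeps reading lines until they compile to a complete statement;
# compile_command returns None while the input is still incomplete.
print(code.compile_command("def f(x):"))                  # None: keep reading
print(code.compile_command("def f(x):\n    return x\n"))  # a code object
```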


The reason the CPU is not using most of the energy is because it's not used all the time. And that's by design. Every single thing possible under the sun is done to keep the CPU off for even a split ms on mobile.

Because if you start using the CPU all the time, your battery is done.


ObjC's dirty little secret is its capability of playing nice with C++. There are huge C++ code bases with only Obj-C UI frontends both in the OS X and the iOS market.

As long as your Obj-Next doesn't come with superb C++ compatibility you won't get the serious players on board.


Author here. Please try not to get too caught up on the interpreted part. It’s far less important to what I’m trying to get across here in the essay. Yes, implementation matters, but what we’re implementing matters more. And don’t ignore the benefits of being interpreted, like having a more dynamic runtime. Being able to modify a system without needing a recompile cycle is a huge win.

See also more thoughts on the essay: http://nearthespeedoflight.com/article/2014_03_20_assorted_f...
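To make the no-recompile point concrete with a toy sketch (Python, purely for illustration; the module name `live` is made up): patching behavior in a running process, with no build step:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # skip .pyc caching so the reload recompiles

# Create a throwaway module on disk and import it.
tmp = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp) / "live.py"
mod_path.write_text("def answer():\n    return 1\n")
sys.path.insert(0, tmp)

import live
print(live.answer())  # 1

# "Hotfix" the source, then reload in place -- no restart, no recompile cycle.
mod_path.write_text("def answer():\n    return 2\n")
importlib.reload(live)
print(live.answer())  # 2
```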


I think instead of presenting the problems it is important to provide solutions. Do not presume a large portion of your audience has not already thought these same things. Instead, take Bret Victor's approach and offer slightly more tangible advice.

Instead of hating on the way things are currently done, show a better way. It will mean a lot more in the end.

Don't mean to be harsh, just saying that there are plenty of us who have questioned all these things and still do not have the answer.


While I don't agree with a more dynamic runtime (a more static runtime would be preferable IMO), removing the need for recompiling would be really nice. I am certain that this would be a huge undertaking though. A better approach IMO is optimizing for reduced compilation time; there has already been work done in this area. A lot of speed is gained for free from the object file paradigm, and even more can be gained by using forward declarations and @import.


Dear OP: why do you think the Win32 C APIs still exist? Microsoft has offered .NET for Windows since 2001. It does everything you're requesting, but it hasn't replaced the completely hideous Win32 C APIs.

Bluntly: there is a class of programmers and programs that cannot be written without:

* C linkage

* Calling into C and being called from C

And a further class of programs where:

* Garbage collection

* Significant runtime

* Safe code

Make acceptable performance impossible. This class of programs includes .NET itself which is entirely written on top of the Win32 C APIs.

If you want a runtime on top of Objective-C, you're welcome to write one yourself. It could offer everything you're requesting without needing to change anything about Objective-C.

Oh wait, there are literally dozens of runtimes on top of Objective-C. Not to mention official bindings to Python and other languages. Sure, none of them are very popular but if you want to write in something else: go ahead.


I would like to add that all of that is possible without C, and has been achieved multiple times back when C was UNIX-only.

It is just the historical legacy fact that for the majority of modern OSes the OS ABI == the C ABI that keeps us having to link with it.


Well, that and C is a nice lowest common denominator; virtually any language can manage a fairly decent foreign function interface with C code without too terribly much friction.
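Python's ctypes makes the point nicely: because the C ABI is the shared contract, you can call straight into C with no glue code at all. A sketch assuming a Unix-like system, where `CDLL(None)` opens the process's global symbol table:

```python
import ctypes

# On Unix-like systems, CDLL(None) gives access to symbols already loaded
# into the process -- including the C standard library.
libc = ctypes.CDLL(None)

# Describe the C signature, then call straight through the C ABI.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"hello"))  # 5
```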


Author here: Objective C is far and away mostly used for creating user-facing software. I want a better way to do that. That’s what the essay is about.


Have you considered PyObjC for programming Mac OS in Python?

https://pythonhosted.org/pyobjc/

Or Xamarin for programming iOS using Mono/C#?

http://docs.xamarin.com/guides/ios/getting_started/hello,_wo...

Or F-Script for programming Mac OS X from a command-line shell?

http://www.fscript.org/

All three are quite well established and work pretty well for what they do.


To me, it's easy to see what needs to be done with objective-c when you write a tiny class that creates a dictionary:

    @interface MyClass
    - (NSDictionary *)createSimpleDictionary;
    @end

    @implementation MyClass
    - (NSDictionary *)createSimpleDictionary
    {
        return @{
           @"keyForNumber": @1,
           @"keyForBool": @YES,
           @"keyForArrayOfStrings": @[@"one", @"two", @"three"]
        };
    }
    @end
How much cleaner could this be if we dropped C backwards compatibility! We could remove the header file, and get rid of all the *'s and @'s:

    class MyClass
    - (NSDictionary)createSimpleDictionary
    {
        return {
           "keyForNumber": 1,
           "keyForBool": YES,
           "keyForArrayOfStrings": ["one", "two", "three"]
        }
    }
    end
This new language would get rid of many warts of Objective-C, bringing it up to speed with modern languages, while keeping performance and interoperability with the Cocoa toolchain.

There may be some perfect, completely new idea for a programming language out there, but Apple would never go for it. Developer experience is a distant third to the user experience and the Apple experience, and the only thing they will go for is an evolutionary, backward compatible change such as this.


On the contrary, Objective-C is one of the only languages that actually gets header files right, and consequently they are a wonderful feature. They are very readable, since it's possible to write them to perfectly describe your public interface. All implementation details are kept in the .m, where they should be.


Any language with proper module support since the early 80's allows for that.


You expressed this sentiment far better than I have managed so far, totally agree.


Why would we want to get rid of the .h/.m separation? It's a nice feature, Java's single file format is far worse in comparison.


I, and basically every language designer since the 80s, disagree with this. It's needless repetition: all of the information contained in the .h file is duplicated in the .m file. If you really want a nice listing of the exported methods, it's easy to generate that from the .m file.


I agree that there is some merit in the repetition argument, but when discussing a language like Objective-C we must consider that any Objective-C programmer is surely using auto completion to the fullest extent, due to the extremely verbose nature of Objective-C naming conventions. The point being that the repetition time is negligible.


Stockholm syndrome from too many years in the company of archaic compilation models: it's real and it's scary!


It is incredible how C-based compilation models became so widespread that younger programmers are completely unaware of sane module systems in other languages with ahead-of-time compilation toolchains.


Only worth doing if some directives are added for public/private visibility.

Exposing internals is still bad design.


Why should the on-disk format of a programming language have anything coupled to a specific use-case for browsing that code? If viewing just the definitions of a method is useful (ie, viewing the header file) then your editor should just let you do so. Letting that bleed into the way the files are organized on disk at the cost of DRY seems a terrible design choice as it serves no purpose other than to make up for poor tooling.


The separation is nice, but does it need to be done manually?

Couldn't a simple compiler take in .m files and output compiled code + .h files?
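It could, at least for the easy cases. A toy sketch of the idea (regex-based and entirely hypothetical; a real tool would parse with something like libclang, not a regex):

```python
import re

def generate_header(source: str, class_name: str) -> str:
    """Emit an @interface from the method definitions in a .m file.

    Illustration only: a naive regex over definition lines, not a real
    Objective-C parser.
    """
    methods = re.findall(r"^\s*([-+]\s*\([^)]*\)\s*\w+[^\n{;]*)", source, re.M)
    decls = "\n".join(m.strip() + ";" for m in methods)
    return f"@interface {class_name}\n{decls}\n@end\n"

m_file = """
@implementation MyClass
- (NSDictionary *)createSimpleDictionary
{
    return @{};
}
@end
"""
print(generate_header(m_file, "MyClass"))
```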


In C/C++/Objective-C the .h files are not used solely for declaring the public interface of a class; a lot of the time this is where one exposes enums, typedefs and defines which are related to the class in question. The obvious argument against this is using static class members, but I would argue that adds unnecessary verbosity.


True, but that's nothing that couldn't be done in either a separate file or in the .m?


How is it worse?


Separation of interface and implementation is a core component of precompiled code distribution: .h files + compiled library.

By simply glancing over a header file one can get a quick idea of the interface without consulting the documentation. It's self-documenting; reading a Java file in the same way is impossible.

Forward declarations in header files and imports in the implementation are also a huge benefit for compilation speed and complexity.


The point is that writing and maintaining such files by hand is a tremendous waste of time.

When I last used C#, I remember Visual Studio would extract and display such public overviews directly from compiled .NET assemblies.

Even if no IDE is involved, something could just auto-generate a text file for distribution. It could even be adorned with a .h file extension for faux-retro appearance!


I do agree partially about the repetition issue, but with a language such as Objective-C you'll auto-complete every definition the second time, which hardly takes a second, especially with fuzzy auto completion. If time saving is the goal, the naming convention for methods is a far better candidate for change.


Zapping the entirety of an unnecessarily manual practice is going to be objectively better than providing some typing assistance for it.


There are still use cases for header files which would require additional syntax if headers are to be generated from .m files. The use case of forward declaration also comes to mind, which would be totally lost with an implementation-only system.


You wouldn't need any of this if it were done properly. C# is a good demonstration of the principle and I'm sure Java could support something similar. In the case of C#, forward declarations are unnecessary, and if you want/need to see something like a .h file then one can be generated automatically from the metadata included in the binary.

In fact Objective-C already includes pretty much everything necessary in the binary already, as you can see by using class-dump. Perhaps additional work would be required to support exposing absolutely everything that might be required, but there's no obvious reason why the language couldn't be extended to allow direct use of compiled code, with no need for a .h file.


There is no .m .h .c .cpp involved. In hypothetical new language we cast off all baggage and stop thinking in archaic C and "C with a twist" terms.


That's what javadoc is for. You get a lot more out of browsing through that, than header files.


- Turbo Pascal

- Object Pascal

- Delphi

- Modula-2

- Modula-3

- Oberon

- Active Oberon

- Component Pascal

- OCaml

- Haskell

- D

- Ada

- Eiffel

- ...

All languages with native code compilers available, where the header files can be either manually or automatically generated from the implementation.

Many of them have, or used to have, code browsers and auto complete by reading the metadata stored in the binaries.

C-based toolchains are still in the stone age in terms of modular programming support.


Heh, which is similar to what I drew up a year ago, and even linked to https://github.com/jbrennan/ObjectiveNext/blob/master/langua... :)


The author is right that we need to think outside the box but I think his argument is misleading. Historically, programming languages evolve and not leapfrog. I don't see why Objective-C cannot keep evolving while some other company is bold enough to research a new visual programming language that will result in a paradigm shift. The two are not mutually exclusive.


Author here. Good point, they are not mutually exclusive. I do think however it's not in Apple's culture to significantly move away from C/ObjC any time soon. For the time being, they are getting by well enough as is today. I think they'll continue to slowly evolve the language, but I'm not so keen to wait for major changes, and I'm hoping to get others onboard with me.


As a main language, I agree they're not looking to change anything for a long time coming, but they have made stuff like Quartz Composer (a graphical functional programming language) and Automator. There is a slim sliver of hope they may deem fit to create a HyperCard for the post-PC world. iOS touch interfaces would be the perfect place to reinvent programming as something other than text (since typing on a touchscreen is so painful).


Are you saying you want to take a crack at creating this new paradigm shift? Couldn't you argue that a project like RubyMotion is already doing some of what you are proposing.


No, not really. Ruby is a fine text language but it offers no benefits or advantages like what I described in the article, save maybe a REPL. But we can have that already with the Super Debugger, for example.


> Why do we assume we need a compiled language? Why not an interpreted language?

When will people learn this is an implementation issue?!


I'm not convinced that it's purely an implementation issue. While maybe that's true in a theoretical sense, I don't think it holds true in practice.

There are various characteristics of programming languages that can make it much easier to implement them as interpreted languages rather than compiled languages, and vice versa.

Maybe any difficulties could be overcome given enough resources, but this usually isn't the case. I think this is why the efforts to make compilers for a language like Python have never really been that successful. And the same goes for the attempts to create interpreters for languages like C and C++.

Perhaps the most success has been with the various compilers that compile Java source or bytecode to native code. But even in that case, the end result usually isn't very good. It's usually somewhere in the undesirable middle, giving the worst of both compilation and interpretation.


> There are various characteristics of programming languages that can make it much easier to implement them as interpreted languages rather than compiled languages, and vice versa.

It is all about convenience, and the effort one is willing to invest versus the return on investment, as any good CS compiler design course will show its students.

Back in the 70-80's most languages had multiple implementations offering both compilers and interpreters as part of their toolchains.

Only in the early 2000s did I start to see this phenomenon of conflating languages with their canonical implementation.


The closest thing I've seen to this notion of what the OP seems to want is Adobe Flex and MXML. Microsoft has kind of the same thing going on with Silverlight and XAML (or whatever they are calling those things now). Microsoft put a lot of time and effort into the design tools and everything else to make that a good experience for designer and developer from what I can tell.

The end result of visual design tools doesn't seem to match the hype. At some point you DO need programmers to make some of the magic happen. I think so far game development has probably done the most to merge the graphic designer, game designer, and programmer workflows into something that is productive for lots of people on the same project.

I really think Apple is the last place you'll see much programming language innovation compared to what you already see in languages compiling to JS or the JVM or even LLVM.


Also, if there is going to be a visual software development revolution, I think it will look something like a visual data flow language. Something along the lines of yahoo pipes meets rails or django.

How it would work is you would have something like ActiveRecord that can pull data into the first step, then visual steps for mapping, calculating, etc., and spitting that data into an HTML view or whatever in the last step. You could design the HTML in a WYSIWYG kind of thing or just as a templating system. Doesn't matter.

All it would be doing would be replacing a lot of the computation that programmers do with some functional, lego-block-style logic in the middle. Once you had the mechanisms in place for this kind of system, I think you could swap out the data sources from a CSV file to a database to a RESTful service, and your design bits from HTML to iOS or whatever, pretty easily, giving you the ability to work in a pretty general-purpose way.
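A sketch of the shape of the thing (Python, with all names hypothetical): each block is just a function from data to data, and the pipeline is their composition:

```python
# Each "lego block" is a function from data to data; a pipeline is
# nothing more than their composition, source to sink.
def pipeline(*stages):
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# A source block (could be a CSV file, a database, or a REST service)...
source = lambda _: [{"name": "Ada", "score": 90}, {"name": "Bob", "score": 70}]
# ...a calculating/filtering block in the middle...
passed = lambda rows: [r for r in rows if r["score"] >= 80]
# ...and a sink (could be an HTML template, an iOS view, anything).
render = lambda rows: "".join(f"<li>{r['name']}</li>" for r in rows)

report = pipeline(source, passed, render)
print(report(None))  # <li>Ada</li>
```

Swapping the data source or the sink means replacing one function and leaving the middle untouched, which is the general-purpose property described above.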

A tool like that could be very interesting, but it's quite a different approach.


The interesting thing is why are people actually calling for a replacement.

The iOS dev environment is the nicest I've ever worked in, in big part thanks to Obj-C and all that comes with it.

Seems to me change is being requested for change's sake, rather than trying to address any actual pain points.


Idiomatic code, formatting and all, is a visual language I've gotten quite used to. If I "zoom in," it's regular code, but one must also be able to cruise one's project artifacts at a higher level than lines of code to function as a developer.

The complexity is reality, not something that can be swept under the rug so that naive implementors can use general purpose tools. Isn't that why we have DSLs, anyway?


It's really surreal to me that we're having this conversation without discussing the reality that, outside of the Apple-controlled iOS sandbox, the world is replete with hundreds and hundreds of languages. You don't have to navel-gaze to theorize what it's like to work in another language: just try writing some code that runs on your laptop instead of your phone.


Many of those hundreds of languages do actually run on iOS. Many games on the App Store are written in Unity (a C# platform).

Heck, many are written in Flash too. Yes, Flash.

I actually consider it a very healthy sign that the people complaining about Objective C are a vocal minority whining about such silly, unfocused, vague things.

It means Objective C is doing quite fine, thank you very much.


Anyone here tried Xamarin? It's a 1:1 map of the iOS/Mac API. I love C# and wish I could use it to write native code. C++ is just too clunky for me to use daily. The one thing about iOS and Mac dev that I hate is Interface Builder. Worst. UI. designer. ever.


> I could use it to write native code

mono --aot


The only thing that could be useful, IMHO (because a visual language per se is a niche tool), is perhaps to use code not as "plain" text but as RTF/HTML/Word-like rich text. That is, the developer writes "text", but the code is stored in HTML/RTF or similar, meaning that "bold", "italics", "center", etc. commands exist that apply to the text.

So, for example, I have a Customer type and a Customer variable. Not only can I see that one is a type and the other is not, I can also apply a global "tag" to all the Customer instances to mark them as "client-side" and see visually the difference between a server-side and a client-side Customer class.

Or have a style to mark when a function has side effects, or when it throws an exception. Make it part of the syntax:

[CAN-KILL]openFile:(customer:[CLIENT]Customer)

But the coder just writes: openFile:(customer:Customer)

What would it be like to use code as a Word document?


What is the problem with this comment?


So, why not use F-Script or work to make it what you need?


Author here: Funnily enough, I have! I wrote the Super Debugger which uses F-Script (http://shopify.github.io/superdb/)

Please note though this is not at all what I’m describing in my essay. A REPL is hardly a graphical, direct manipulation drawing tool for creating software.

But it was fun to dip my toes in more dynamic waters.


It sounds like you're describing a modern Smalltalk. I would love something like that, especially if it was cross-platform and perhaps syntax like Nu.


While I agree with the OP in that there are certainly improvements that could be made to Objective-C I find myself disagreeing with a lot of the ideas in the post, especially this paragraph:

>We think we know what we want from an Objective C replacement. We want Garbage Collection, we want better concurrency, we don’t want pointers. We want better networking, better data base, better syncing functionality. We want a better, real runtime. We don’t want header files or import statements.

We don't want Garbage Collection; on the contrary, we want reference counting. Reference counting is the best compromise between handling memory manually and using GC à la Java. I would argue that the mental strain for the programmer is equal for reference counting (at least with ARC) and GC. However, reference counting is extremely lightweight.
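One property worth spelling out (using CPython, itself reference counted, as a convenient place to observe it): reference counting reclaims objects deterministically, the instant the last reference dies, with no collector pause:

```python
import sys
import weakref

class Node:
    pass

n = Node()
# getrefcount reports one extra reference: its own argument.
print(sys.getrefcount(n))

# Watch the object through a weak reference, which doesn't keep it alive.
w = weakref.ref(n)
del n  # count hits zero: freed immediately, no GC pause, no deferred sweep
print(w())  # None
```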

Regarding concurrency, it should be mentioned that I don't have a huge amount of experience with different languages. I've done concurrency in Erlang, Java/JR, C++ and Objective-C. I'd say that GCD from Objective-C and possibly Erlang were the easiest to use.

Why wouldn't we want pointers? Most modern languages use pointers under the surface, there are significant benefits in not obscuring and hiding this as is done in Java.

We do want header files and imports, because they provide a bunch of benefits, namely separation of interface and implementation, and distribution of compiled implementations with interfaces in header files. With the @import option, compile time is significantly faster as well.

I'll deliberately skip the last points, suffice to say I don't feel strongly about them. I do agree on the data base point though.

My own revised list of things that Objective-C needs: proper static typing (no id), templates and namespaces.

EDIT: The point about requiring a programmer to change a color is interesting, although I would argue that the same holds true for the web. I have rarely worked with designers who are proficient enough in git and CSS to make such changes. In the case that the designer is proficient enough to handle git, a CSS-like JSON file can be used to specify the look of the app, and changing a colour would then only involve editing a JSON file, which is comparable to editing a CSS file.


> We don't want Garbage Collection; on the contrary, we want reference counting. Reference counting is the best compromise between handling memory manually and using GC à la Java. I would argue that the mental strain for the programmer is equal for reference counting (at least with ARC) and GC. However, reference counting is extremely lightweight.

I used to think that, until I started to spend more time on modern GC platforms like Java and .NET. It's impressive how performant a good generational garbage collector can be. For many business applications (i.e., not the kinds of workloads you'll see being simulated by popular benchmarks) a good generational GC can even improve performance thanks to improved locality of reference. I think a lot of us don't fully appreciate the cost of a cache miss.

Speaking of, I'm not sure a lot of us fully appreciate the cost of synchronization, either. It's one big reason why I'm not convinced that reference counting is particularly well-suited to concurrent programming, since it turns a really commonplace operation (retaining a reference to an object) into a synchronization point in the program. There's a non-trivial cost associated with that. You could avoid repeatedly stalling the CPU by being very careful about not sharing objects among threads, but I don't think most developers can reasonably be expected to make a habit of that in practice.
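A sketch of why that is (Python stands in for the runtime here; the lock plays the role of the atomic increment a real runtime would emit): every retain of a shared object becomes a synchronization point:

```python
import threading

class Counted:
    """A manually reference-counted object.

    Under concurrency every retain/release must be atomic; the lock here
    stands in for the atomic instruction a real runtime would use, and is
    exactly the per-object synchronization point being discussed.
    """
    def __init__(self):
        self._count = 1
        self._lock = threading.Lock()

    def retain(self):
        with self._lock:  # the contended step, paid on every reference copy
            self._count += 1

    def release(self):
        with self._lock:
            self._count -= 1
            return self._count == 0  # True would mean: deallocate now

obj = Counted()

def worker():
    for _ in range(10_000):
        obj.retain()
        obj.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(obj._count)  # 1: correct only because every operation synchronized
```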


> Speaking of, I'm not sure a lot of us fully appreciate the cost of synchronization, either. It's one big reason why I'm not convinced that reference counting is particularly well-suited to concurrent programming, since it turns a really commonplace operation (retaining a reference to an object) into a synchronization point in the program. There's a non-trivial cost associated with that.

This is actually a problem for both. Simple implementations of automatic GC have to stop the world to scan at least the roots, and probably even a bit more than that, because there are significant ABA problems. There are ways around this, but as far as I know the only environment to use any of the really good ones is the JVM, because it's hellishly complex. I don't know that there is anything that doesn't stop the world for at least a couple of ns, though.

There are also solutions to it for reference counting. They involve using thread local reference counts and briefly stopping the world to reconcile the counts. I believe [1] describes this algorithm.

And once you get there, and start dealing with cycle detection, the difference between the two approaches starts to look a lot smaller. There's a paper on this too. [2]

[1] http://www.cs.technion.ac.il/~erez/Papers/refcount.pdf

[2] http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf


I disagree with ridding ourselves of C compatibility to make the syntax terse. Being able to wrap C libraries while having a reasonably high-level runtime is a huge strong point of Objective-C.


I am confused. Where did you get the impression that I was in favour of removing C compatibility? For the record, I agree with your point. Any language which has compatibility with C is favourable.


Please "god", no interpreted language.


Visual Studio for the mac then?


I can sum up this article & its predecessors like so:

"What do we want? We don't know! When do we want it? Now!"



