Well, it seems clear that the way we specify programs syntactically will keep improving, as it has for the last few decades.
e.g. Assembler -> C -> C++
There was recently a post on HN about the missing programming paradigm (http://wiki.c2.com/?ThereAreExactlyThreeParadigms). With the emergence of smarter tools, programming will get easier in one way or another, releasing the coder from a lot of pain (as C and C++ released us from tedious, painful assembler). However, I am quite sure it won't replace programmers, since our job is not really to code but to solve a given problem with a range of tools. Smarter tools will probably boost the productivity of a single person to handle bigger and more complex architectures, or new kinds of problem areas will open up. Research will go faster. Products will get developed faster. Everything will kind of speed up. Nevertheless, the problems to solve and implement will remain until there's some kind of AGI, and if there's an AGI smart enough to solve our problems, most other jobs will probably have been replaced too.
We don't need anything as complex as AI to seriously boost programmer productivity. I'm working on a suite of tools that should do just that for web developers, and there are no plans for AI/ML integration just yet.
It's intriguing, particularly in cases where the generated code could be verified against a specification, or at least comprehensively fuzz-tested against a 'known good' version of the program. A mildly terrifying thought for a performance programmer: the idea that someone could just write some sloppy Python code that solves a problem "at some speed" and have an automated system iterate it into some tuned, SIMD-laden C/C++ horror show.
While of course we have optimizing compilers to do this sort of thing now, you could imagine automated systems that attempt to preserve clarity and simplicity as they do it - and such a system could work semi-supervised, iterating with a human in the loop to steer the system towards more comprehensible solutions.
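A minimal sketch of that fuzz-testing idea, assuming nothing beyond the Python standard library; all function names here are invented for illustration, with a slow "known good" reference standing in for the human-written version and an "optimized" candidate standing in for machine-generated code:

    import random

    def reference_sum_of_squares(xs):
        # Slow but obviously-correct "known good" version.
        return sum(x * x for x in xs)

    def optimized_sum_of_squares(xs):
        # Stand-in for a machine-generated, heavily tuned rewrite.
        total = 0
        for x in xs:
            total += x * x
        return total

    def fuzz_equivalence(trials=10000):
        # Hammer the candidate with random inputs, comparing against the reference.
        for _ in range(trials):
            xs = [random.randint(-10**6, 10**6) for _ in range(random.randint(0, 100))]
            assert optimized_sum_of_squares(xs) == reference_sum_of_squares(xs), xs

    fuzz_equivalence()
    print("candidate matched the reference on all sampled inputs")

An automated system could run exactly this loop after each transformation it applies, rejecting any rewrite that diverges from the reference.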
It's called technology transfer to the private sector; it happens all the time. The CompCert verified compiler for C was a landmark INRIA achievement that lots of CompSci is building on. A company called AbsInt controls the I.P. now.
The thing I don't get: this seems like useful but rarely used tech.
Even if there are patents involved, I'm sure it's not patented everywhere around the globe, and it's just software, so why is there no real competition? Something like a reasonably priced SaaS service?
Whew, that's fortunate. The robot programmers of the future will know how to invert a binary tree, but not how to write a CRUD web app full of boring business logic.
Automation of software development is inevitable, as there is a huge financial incentive to do so. Businesses view developers as expensive middlemen between them and the product -- much like the assembly-line workers in a factory.
I am betting that typical CRUD business apps and the majority of web development will be the first fields to experience heavy automation, which also happen to be the areas where the vast majority of software jobs exist today. But this won't happen overnight, so let's enjoy the good times while they are rolling :)
Every time I encounter people in this business with that view, though, they turn out to be way off.
Most often a business's initial expectations amount to a nice-looking broken pile of shit. They simply haven't conceptualized the product enough and figured out how it needs to work. It's my job to get them there.
Businesses/stakeholders are the expensive middle men between me and the money to build the product. I can do this without you.
Nothing (in the league of things we are discussing) is inevitable; it's a result of choices, sometimes masked with rationalizations about how things are inevitable. For me, good times start where there are people standing with both feet on their own liberated square meter to have fun times with; to the rest I can only say "connect me to whoever told you this".
I don't think businesses are having great success in automating developers, and it doesn't seem to be a technology problem; we've got some interesting but relatively unused (and a bit old) technologies (spiralgen, apache-isis/naked-objects), so it seems more like an organizational issue, and those are tough to solve.
I hope we reach that phase, but so far it does not look promising. We'll get there, but I don't think it'll be in my lifetime. I hope we do though, I'm up for a career change.
I'm working on a tool suite at the moment that will automate a lot of the current pain of web development. It's actually quite easy to build once you have the patterns figured out.
For actual in-depth technical discussion / criticism of the paper, you can read its ICLR peer reviews (and other public comments): https://openreview.net/forum?id=ByldLrqlx
Machine learning algorithms are actually quite good at solving poorly specified problems. They learn from examples and don't require any definite problem specification.
For example, recognizing people in a picture is almost impossible to specify but relatively easy to learn, given enough examples.
Next time you're facing a poorly specified problem, don't ask for a clear spec; instead, ask for a million examples and train a deep net to solve it :)
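As a toy illustration of learning from examples rather than a spec — a sketch assuming scikit-learn and its bundled handwritten-digits dataset, nothing from the thread itself:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # There is no formal specification of "what an 8 looks like" --
    # the labeled examples play the role of the spec.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)         # learn the mapping from examples alone
    print(clf.score(X_test, y_test))  # accuracy on held-out examples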
I somewhat agree. More often than not it's headlines, anecdotes, and philosophy about ML. But where is a community whose common denominator is high enough to talk about it in a deeper way?
I've visited /r/machinelearning, /r/computervision, and /r/reinforcementlearning, and although they are often better than HN, they also either partly suffer from the same condition or are almost dead with no real activity.
TL;DR: what are some good places to hang out for good ML discussions and news?
Twitter has good conversation as well, if you follow all the ML people.
Most researchers are in labs, so they usually discuss these things in person with their colleagues. The community is also relatively small, so a lot of discussion happens at conferences, and most people are first- or second-degree connections.
> I've visited /r/machinelearning, /r/computervision, and /r/reinforcementlearning, and although they are often better than HN, they also either partly suffer from the same condition or are almost dead with no real activity.
The issue is that most of the people who practice ML don't talk about it in the open (with the exception of academics on Twitter). The best discussions I've found happen in labs or on cross-lab mailing lists.
> Why is there no "general" deep learning algorithm?!
Because "deep learning" is almost always just neural networks with many layers and some tweaks to the unit types to alleviate vanishing-gradient issues, trained by stochastic gradient descent. In rather theoretical terms, "deep learning" involves using gradient descent, and gradient-increasing tweaks, to search for a specific continuous circuit in a given space of continuous circuits. It's only as general as the hypothesis class you search through, which here is, again, continuous "circuits" composed out of "neuron" units of specific types, numbers, and arrangements.
Now, in computability terms, given a large enough (continuous or discrete) circuit, possibly a recurrent one, you can represent any computable function. However, in learning terms, that doesn't make a useful computable function at all easy to find in a very high-dimensional space.
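To make that recipe concrete, here's a minimal sketch assuming PyTorch: a small fixed "circuit" of differentiable units whose weights are found by stochastic gradient descent. XOR is used because no single linear unit can represent it, so the hidden layer is doing real work:

    import torch
    import torch.nn as nn

    # The hypothesis class: a fixed arrangement of "neuron" units.
    net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))

    # XOR inputs and targets.
    X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = torch.tensor([[0.], [1.], [1.], [0.]])

    opt = torch.optim.SGD(net.parameters(), lr=0.5)
    loss_fn = nn.MSELoss()

    for step in range(2000):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()  # gradients flow backwards through the whole circuit
        opt.step()       # one gradient-descent move in weight space

    print(net(X).detach().round().squeeze())  # should approximate [0, 1, 1, 0]

Everything "deep learning" here lives in the choice of circuit and the descent procedure; swap the hypothesis class and the same machinery searches a different space.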
Would a potential roadblock for such systems be automatic verifiability? Consider that a system provides a ranked list of possible code snippets; someone would still need to pick from these choices and test them.
Just like when humans program: you write code and have unit tests (if you are lucky). What is different about this? It has the same inputs/outputs, so the unit tests will still apply. Formal verification would be better, i.e. having a model for the original and having the computer prove that the new version is mathematically identical to the original. But both the formal verification and the transformation proof are far off for almost all software projects in practice.
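A rough sketch of what that pick-and-test step could look like, assuming candidates arrive as callables; every function and test case below is invented for illustration:

    def passes_tests(candidate, test_cases):
        # A candidate survives only if it reproduces every expected output.
        for args, expected in test_cases:
            try:
                if candidate(*args) != expected:
                    return False
            except Exception:
                return False
        return True

    # Hypothetical ranked output from a code-synthesis system,
    # each supposedly computing the sum 0 + 1 + ... + n.
    candidates = [
        lambda n: n * (n + 1) // 2,   # closed form
        lambda n: sum(range(n)),      # off-by-one bug
        lambda n: sum(range(n + 1)),  # correct loop version
    ]

    # The unit tests act as the (partial) specification.
    tests = [((0,), 0), ((1,), 1), ((10,), 55)]

    survivors = [c for c in candidates if passes_tests(c, tests)]
    print(len(survivors), "of", len(candidates), "candidates pass")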