Average wage has very little to do with it, at least when it comes to London. People there pay a lot more than the rest of the country, but they're earning a lot more too. And even within London the same kind of logic applies.
Partly those people's wages rise too. But more commonly they live elsewhere: their wages have no connection to house prices in zone 1, because they live further out and commute.
Nope. Can I suggest you read Penrose again with a little more critical thought?
There are problems it is possible to prove are not computable. But can you prove that human beings can solve them?
Tessellation of the infinite plane? Please demonstrate a person who can solve this (i.e. not that they have a >99.9% chance of being able to do it, or that you can show they can start out pretty successfully and you assume they'll always stay ahead of the game).
Bear in mind that when working out if a computer can solve a given problem (i.e. whether it is mathematically computable), we're not trying to work out if it can ever solve it, or even if it can solve it in infinitely many cases, or even in an arbitrarily high proportion of cases. We're working out whether it can be proven that it is (or isn't) guaranteed to find a correct solution in all cases. There's just no way to make those judgements of a human being.
So instead, the mathematical analysis of computation gets compared against intuitive arguments built on the evidence that human beings have good, reliable strategies for solving some instances of these problems. Unsurprisingly, the brain comes off pretty well in that comparison!
Penrose was big on handwaving and appeal to quantum magic, but not very good on the specific arguments to back up his claim.
I found this article to be a huge bag of misconceptions about AI, computation, and the actual claims of AI professionals. As an argument against hyperbolic media mischaracterisation, it might be reasonable. But like Penrose, it manages a long and condescending argument from intuition that fails to take seriously the actual claims being made.
> But can you prove that human beings can solve them?
> Tessellation of the infinite plane? Please demonstrate a person that can solve this
Penrose himself would seem to be a demonstration that there is at least one person, IIRC.
Is your argument that uncomputable really just means no guaranteed success, and that a human getting a solution to an uncomputable problem doesn't mean that human isn't following a set of algorithms?
You misunderstand the result Penrose has shown. There are plenty of aperiodic tessellations of the plane that can be computationally generated. The issue is whether a computer can, in all cases, determine if a set of shapes can tile aperiodically.
I don't understand the second bit. But, no. Computability has a very specific definition. Getting a solution to an uncomputable problem isn't generally hard. It is trivial to create a program that will solve the halting problem for an infinite class of cases. The issue is that such solutions can be shown not to be general over all cases.
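To make that concrete, here's a minimal sketch (the choice of class is arbitrary; any infinite, recognisable class of guaranteed-halters would do): a toy "decider" that correctly answers "halts" for every straight-line Python program, which is an infinite class, and refuses to answer for everything else.

    import ast

    def partial_halts(source):
        """Return True if the program provably halts, None if we can't tell."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            # Loops, calls, imports, and function definitions could loop or recurse.
            if isinstance(node, (ast.For, ast.While, ast.Call,
                                 ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.Import, ast.ImportFrom)):
                return None  # no verdict
        return True  # straight-line code always halts

    print(partial_halts("x = 1\ny = x + 2"))       # True: provably halts
    print(partial_halts("while True:\n    pass"))  # None: no verdict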
Such overconfidence. There is no generally accepted evidence that anything physical, whether brain or something else, can compute something a TM cannot compute. Penrose's claims are conjectures at best and pseudoscience at worst.
Since the brain is not just a Turing machine, it would be astonishing if it was limited to the capabilities of a Turing machine. The brain is capable of emulating a Turing machine, but that is just one of its capabilities.
The things a Turing machine lacks are interrupts and I/O. If you hook sensors up to a Turing machine and make the tape change depending on the state of those sensors, you don't have a Turing machine any more, in the sense that it is no longer bound by any of Turing's computability theorems.
This insistence that the limits of the most limited model of computing (Turing machines) must be applicable to any machine that computes anything--including things that are not described by any of the formal mathematical proofs on the limits of computability, because they violate the most basic assumptions upon which those proofs depend--is one of the most curious aspects of this debate.
So not even computers are just Turing machines, because they too have I/O. Turing's theorems are useful and important when considering certain practical questions of computability within the limited circumstances of a computation whose inputs are entirely specified at the start and which can't be interrupted by new information coming from the outside, but they just don't apply to cases where there are cells whose values aren't known until reality provides them via some sensor mechanism.
As such, it would be a little weird if we (and computers) couldn't compute things a Turing machine can't, given we have capabilities that a Turing machine doesn't have. There are even examples of such things: one due to Church (IIRC) shows how we can solve certain instances of the halting problem that a Turing machine can't.
Actually Turing's paper "On Computable Numbers" distinguishes between "automatic machines" (a-machines) and "choice machines" (c-machines), where the latter can pause to ask for input. It seems to me this accounts for your I/O. (I think this is also how you'd add an RNG to a Turing machine.) His paper pretty much only considers a-machines, so I'm curious what is written about c-machines elsewhere. It's unclear to me how much c-machines "change the story." For instance, you can't escape Gödel's Incompleteness Theorem by adding a finite number of axioms to fill in all the unprovable statements, because there are infinitely many of them.
Unless interrupts and the input portion of I/O are generated by an oracle (a machine of a computational class more powerful than UTMs) of some sort, a Turing machine with I/O and interrupts is equivalent to a UTM. I remember the proof being trivial, i.e. some guys did it during my CS undergrad for a short (2-week) research introduction course, but I can't find a proper paper about it atm. Granted, if the brain was an oracle this would be true, but you would have to presume that the brain was an oracle in order to prove that it is stronger than a UTM.
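The shape of the equivalence argument is easy to sketch. A toy illustration in Python (all names made up): a "reactive" machine consuming sensor readings as they arrive behaves identically to an ordinary batch computation handed the same readings up front, so against any fixed trace the interrupt-driven machine computes nothing a plain machine couldn't.

    def step(state, reading):
        """One transition of some machine; here, a running checksum."""
        return (state + reading) % 256

    def reactive_run(sensor):
        """Interrupt-style: consume readings as the outside world supplies them."""
        state = 0
        for reading in sensor:  # could be a live stream of sensor events
            state = step(state, reading)
        return state

    def batch_run(trace):
        """Ordinary computation: the whole input is on the 'tape' at the start."""
        state = 0
        for reading in trace:
            state = step(state, reading)
        return state

    trace = [3, 141, 59, 26]
    assert reactive_run(iter(trace)) == batch_run(trace)  # same answer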
In regard to your last statement, the other way works as well: there are statements that can't be proved by any human brain, but can be proved by non-brain logical systems. For example: "The brain cannot consistently assert this statement." One can then create instances of types of problems that are equivalent (when put through some bijective transformation) to the aforementioned statement.
See also: https://en.wikipedia.org/wiki/Wang_tile
> In 1966, Wang's student Robert Berger solved the domino problem in the negative. He proved that no algorithm for the problem can exist, by showing how to translate any Turing machine into a set of Wang tiles that tiles the plane if and only if the Turing machine does not halt. The undecidability of the halting problem (the problem of testing whether a Turing machine eventually halts) then implies the undecidability of Wang's tiling problem.
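It's worth being precise about where the undecidability lives. Whether a given tile set can cover a finite n-by-n region is trivially decidable by exhaustive search; what Berger showed is that no algorithm decides it for the whole infinite plane. A minimal backtracking sketch for the finite case (the tile encoding and example sets below are invented for illustration):

    def tiles_finite_region(tiles, n):
        """Tiles are (top, right, bottom, left) edge colours. Decide by
        backtracking whether an n-by-n region can be tiled with matching
        adjacent edges (no rotations, as for Wang tiles)."""
        grid = [[None] * n for _ in range(n)]

        def fits(tile, r, c):
            top, right, bottom, left = tile
            if r > 0 and grid[r - 1][c][2] != top:   # bottom edge of tile above
                return False
            if c > 0 and grid[r][c - 1][1] != left:  # right edge of tile to the left
                return False
            return True

        def place(i):
            if i == n * n:
                return True
            r, c = divmod(i, n)
            for tile in tiles:
                if fits(tile, r, c):
                    grid[r][c] = tile
                    if place(i + 1):
                        return True
                    grid[r][c] = None
            return False

        return place(0)

    print(tiles_finite_region([(0, 0, 0, 0)], 3))  # True: matches itself everywhere
    print(tiles_finite_region([(0, 1, 2, 3)], 2))  # False: edges never line up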
I googled for "tessellation infinite plane algorithm" but could not find any pertinent information.
"There are problems that the brain can solve that one can prove an algorithm cannot." That to me is a bold claim. Could you point to material I could read on this?
I would not discount Penrose with the ease the GP does. These themes are quite philosophical, therefore all sides are making some big assumptions. I would read Penrose, Dennett, Chalmers, Tononi, Searle, Putnam and many others. In philosophy the trick is to disagree with everyone.
Agreed. I am very intrigued by Penrose but not convinced. In Scott Aaronson's online lectures for Quantum Computing since Democritus he is pretty dismissive of Penrose, but in the book he asks why a renowned Oxford scholar would insist on such a position, and that's a good question! In these arguments people are too willing to clamp shut their opinion and shut down wonder.
I mean - what's the point of spending 3-4 years in an academic environment that tests and grades students on exactly how good they are at the time, only to then perform the whole process over again some number of years down the road, with fuzzier results?
Seems dumb to me.
I've worked with people who could likely do very well on algorithmic tasks (of which most software projects require precisely zero) - but actually deliver something of use... not so much.
So, a pilot explains why we all will forever need to have pilots.
I am friends with a pilot (a commercial 737 short-haul captain for a 'value' European airline). He will happily tell you that:
a) Much of what he does is automatable and rather boring (takeoff and landing are interesting; the rest is very dull), and that it's mostly punching settings into computers. Indeed, he does not have complete freedom to do whatever he pleases - if, for example, he climbs too fast or steps outside flight parameters set by the airline (set mostly for cost reasons), he would expect to face disciplinary action.
b) That a large amount of the training book-work that they have to go through is irrelevant for flying a modern airliner - but is in place mostly to act as a barrier to entry and to keep wages high.
c) That the biggest barrier to 'self-flying planes' (which doesn't mean 'autonomous', it may be drone-style remote-control or other options) is the perception of safety.
It's the last part that's interesting. The Economist ran an article years ago (I can't find a web link, sadly) about how a UPS cargo plane was flown entirely remotely on a test flight. It noted that humans seem to prefer the risk of a "human being" flying them around vs a "computer" - even if the data showed that the latter was much safer. It went on to point out that this might well be the case, given the proportion of accidents classified as "controlled flight into terrain" (i.e. the pilot crashed an airworthy plane into the ground).
It will be interesting to see if that public perception shifts given recent events. This article is however exactly how I'd expect a pilot to respond.
The fact that there is routine in the job doesn't make a pilot's presence any less necessary.
Yes, on a transatlantic flight, once you have reached cruise altitude, you rely on the autopilot to handle the plane while you carry out your other tasks: checking fuel levels and consumption, checking new weather reports, etc. Just because you can have cruise control on your car doesn't mean it is that easy to switch to driverless cars.
b) A large amount of the training book-work that they have to go through is relevant to the general practice of flying. Yes, on the very specific airplane that he's flying a lot of the calculation is already handled by the FMS, but that doesn't mean the pilot shouldn't know about the regulations and workings of his plane, especially for the case where an emergency happens.
c) The biggest barrier to self-flying planes is not the perception of safety. It is the reaction in case of emergency. While one could argue that many accidents can be linked to human error, many are also averted by the pilots on board reacting correctly to an emergency. And recent history has shown that even with the huge amount of redundancy that exists in planes' computers/sensors/automation systems nowadays, it's not enough to make the plane entirely safe to fly on its own.
Look at AF447: the autopilot disconnected when it couldn't reconcile the discrepancies between its sensors. And while the pilots didn't react correctly, that was due to their lack of proper flying experience (and reliance on automation). Look for the "children of the magenta" video on Vimeo for a bit of background on the dangers of relying too much on automation in a cockpit.
Your third point is not only valid for air traffic. Some of our subways are fully automated and the driver just opens/closes the doors - and sits in the front so that passengers perceive the train as being driven.
True. In fact, it's even stranger in the UK, or so I'm told...
Trains have drivers. Some stations have shorter platforms than others. Thus at those stations a smaller number of train doors need to open to allow passengers on and off - as some doors won't be next to a platform, they'll be above fresh air.
You could have an 'open all doors' button and an 'open <x> doors' button. But the driver is not trusted to do this, so instead there is a GPS on board so the train 'knows' where it is, and opens the appropriate number when the button is pressed.
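In code the logic would be tiny, something like this sketch (station names and door counts are invented for illustration):

    PLATFORM_DOORS = {"Shortford": 2, "Longview Parkway": 8}  # hypothetical stations
    TRAIN_DOORS = 8

    def doors_to_open(station):
        """Open the front N doors, where N is what the platform can take."""
        n = min(PLATFORM_DOORS.get(station, TRAIN_DOORS), TRAIN_DOORS)
        return list(range(n))  # door indices counted from the front

    print(doors_to_open("Shortford"))  # [0, 1]: the rest stay shut over fresh air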
I think in London, certainly for the Underground, the lack of driverless trains is more to do with strong unions than it is with safety perceptions (the DLR line is driverless). E.g.:
"Unions have fiercely opposed the introduction of driverless technology on the tube, with the Aslef drivers’ union threatening “all out war”, but the Mayor said drivers would not lose their jobs because "train captains" will still be required."
Train captains! :-)
Though that doesn't mean that risk perception doesn't cause us to make other poor decisions. The UK had some rather bad train accidents (again, human error with 'signal passed at danger' - see http://en.wikipedia.org/wiki/Signal_passed_at_danger). Politicians get involved, phrases like "this must never happen again" get pushed around, and huge cost estimates for the engineering required to chase this improbable 100% 'never again' target (>£1bn in 1988) are submitted. This inevitably delays implementation (if it ever works anyway - governments and big systems, after all) - when we could be doing something simple that gives you the 80/20: GPS is a thing, trains move only in one dimension and don't suddenly reverse direction, there's a finite number of tracks, so build a system that monitors for "train about to hit another train".
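A toy version of that 80/20 monitor, just to show how little it needs (all names and thresholds invented for illustration):

    DANGER_METRES = 2000  # made-up separation threshold

    def at_risk(trains):
        """trains: list of (track_id, position_m, velocity_mps). Yields risky pairs."""
        by_track = {}
        for t in trains:
            by_track.setdefault(t[0], []).append(t)
        for track, group in by_track.items():
            group.sort(key=lambda t: t[1])  # order trains along the track
            for (_, p1, v1), (_, p2, v2) in zip(group, group[1:]):
                closing = v1 > v2  # rear train faster than the one ahead
                if closing and (p2 - p1) < DANGER_METRES:
                    yield track, p1, p2

    for warning in at_risk([("up-main", 1000, 30), ("up-main", 2500, 10)]):
        print("ALERT:", warning)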
Not trolling... is this still a going concern, shipping eval boards, etc? I saw this a couple of years ago, someone commented on HN, and then nothing. I liked the eval board specs and it seemed reasonably priced given the low volume (maybe a hard sell when STM is all but giving away kits).
I got my eval board one year ago. I'm sure if they had closed shop they'd have put a warning on their site, but you can always email them to make sure.