Generative models and programming talent (ibiblio.org)
28 points by mdemare on July 22, 2008 | 17 comments


Sure, the ability to form generative models is important, but it's at least as important to be able to change or adapt them when faced with contradictory evidence.

The "CS IQ" test in the article bothered me because it rewarded premature optimization (forming a generative model and sticking to it even when you have no reason to believe it's right) over what I'd consider the better strategy: recognize that you don't know how assignment works, hypothesize about the ways it could work, and use a different model of assignment for each question, hedging your bets for partial credit.


Adapting your model in the face of new evidence is part of the definition of a model-seeking approach to problem-solving. I don't think that the article is implying that sticking to a particular model when confronted with contradictions is beneficial.

I think you are also using a different definition of "optimal strategy" than the author. The test was not measuring test-taking strategy - it may not even have been scored in a student-visible way. In the absence of a score-optimizing approach, the natural inclination of the student's analytic style should come to the forefront. Additionally, the test had predictive power regardless of the root cause: students who used a consistent model in selecting answers went on to succeed in the programming course, and the others did not. The underlying cause may actually be different, but the concrete result stands.


You're right, I'm not arguing against the article so much as trying to continue its thought. I agree that generative models are useful. Among people who use them, I suspect that the next major predictor of success would be adaptability. Confirmation bias is the opposing force: once we form a model, we are biased towards it, and tend to overlook contradictory evidence.

One tactic that increases adaptability is not forming a model until you actually have to; this avoids confirmation bias.

You're also right about the nature of the test: as an experiment, it does show that presence of generative models predicts success. I'm interested in education, so I'm voicing the reason why this wouldn't be a practical test of qualification.


The bias point is well made. I don't remember the quote exactly, but I think it's something like "...when all you know are hammers, everything looks like a nail...". Though I think his point is that you have to at least be able to form models about the problems you tackle and stick with those models. You need some commitment or vision to be able to move on. Development tends to have two phases, exploration and exploitation, and balancing them is a real problem: exploit too soon and you may miss a much better solution; explore too long and you may just be wasting time. Striking that balance seems to be part of writing good code.


I had a professor in a beginning CS class who was a Fortran/Pascal programmer (and it showed mightily), but he had one very good habit of reinforcing good cognitive models with English. Instead of saying "a equals b" for a = b, he would say "a becomes the same as b", or, for a pointer operation, "a points to what b points to". I still use his method when reading code today.


I think that's a great convention. Using = for assignment introduces quite a bit of ambiguity, especially the common typo of "if (x = y)" when "x == y" was the intention.

If I ever get around to making a real language (ahem), I will probably use either "<-" for infix assignment (left arrow) or "set" for pre/postfix; "=" will be a value comparison, and "is" will be a pointer comparison (i.e., symbol equality, capable of comparing non-terminating data structures, etc).


That was interesting. I took a vision class a few years ago that clearly illustrated these ideas. Bringing vision theory into CS boils down to converting continuous space into discrete space, which led to a great quote from the prof: "CS people are engineers who think discretely". I think this article reflects that mentality. Being "good" at CS involves building a consistent state model in which events always follow other events. This is yet another reason why concurrency is difficult to grasp even for good CS students: because of possible thread interleavings, the typical state space explodes, and we have to think about that kind of code in a different way.


The last line concisely states what I've known I do for years: "[S]uccessful CS students are those who, given a set of facts, will instinctively seek a consistent generative model to connect them."


A more general pattern recognition ability seems to fit the description of "seeking a generative model" here, and given such, I fail to see how this should be a CS IQ, rather than general... IQ.


Agreed. I wish more politicians had the tiniest sliver of a generative model in their muddled thinking.


This dude takes himself way too seriously. I agree with a weaker version of his claim: model-seeking is necessary but not sufficient for problem-solving.


I think the first part is confusing in saying that "[good programmers] have a model of exactly what the computer does when it executes each statement." That makes me think of machine language: absolutely no abstraction. Good programmers don't need to understand every little detail of what's going on when a computer runs a program; it helps sometimes, but it's not fundamental. Anyway, from the article we can't say if that's what Reges meant.

The idea of a generative model reminds me of the Architect, one of the INTP types: blokes who seek to model everything and understand every little detail of how stuff works. I expect a lot of hackers to be of this type.



This strongly resonates with Naur's brilliant "Programming as Theory Building". Unfortunately what seems to be the main site for it (http://www.zafar.se/bkz/Articles/NaurProgrammingTheory) is down.


Hm... in the physical sciences we call this the Scientific Method.


Shh. Computer 'scientists' are the ones that didn't make it into the physical sciences. No need to stoke their inferiority complex any further.


I had to laugh when I read the quoted student's response to the wrong output: just introduce a simple correction that fulfills the requirement for his particular test case. While you'd wonder what the academic standards are at a university that even lets such people in the door, this modus operandi is even encouraged in the 'industry' under the moniker of 'test-driven development'.



