
Do you think it's better to teach uni students on new processors and tools, e.g. Freescale, ARM, etc., or on older Z80 and 80186 CPUs?


I think university students should definitely start with older processors, and then gradually move up through the levels. I agree there is an architectural change in the newer processors, plus the additional cores. But working with an older processor, with its limited memory and processing power, makes the programmer realize how important each line of code is, and appreciate the comfort that newer processors provide, and thus their complexity.


The first computer I programmed was a Z80 micro-controller connected to some basic peripherals (LED readout, sensors, actuators, stepper motors, potentiometers, etc...). There was no compiler, no assembler; nothing but a keypad to enter the instructions into memory and a button to start execution.

The CPU was less powerful than any of the 32-bit x86 chips that were widely available at the time, but as a kid it still really gave me the sense that whatever I could think of, I could make a computer do.

I'd agree: understanding things at a really basic level first helped me to better understand things at a higher level later on. It probably also helps me keep in mind what a computer actually needs to do to run code. I think it's probably one of the reasons Knuth uses MIX in TAOCP.


Kind of a "which students" sort of question.

I'd say with the older ones. With those, you can put a logic analyzer on the memory bus and see what's going on - as long as the pins aren't hidden under a BGA with no vias on the board to probe.


Working on the older CPUs is a more approachable way to understand all the low-level details, plus it makes you appreciate all that the newer CPUs offer. However, for actual work I don't think one should use an older CPU unless it really makes sense (it offers sufficient compute power, meets low-power requirements, etc.). Working with a powerful CPU lets you focus on the job at hand instead of the idiosyncrasies.


I don't think this is true at all. Older CPUs are not a "more purified" and "cleaner" version of today's; they have the same, and often considerably more, cruft and craziness.

To work with them is to teach bad habits and useless skills.


Some older CPUs, maybe, but you can't seriously look at e.g. the 68000 next to an x86 CPU and tell me the 68000 is not cleaner.

It's not that they don't have craziness, it's that the functionality that mere mortals need to use to write efficient code is simpler.

The M68k's 8 general-purpose data registers and 8 general-purpose address registers alone are enough to make a huge difference.

For me, moving to an x86 machine was what made me give up assembler - in disgust - and it is something I've heard many times over the years: it takes a special kind of masochist to program x86 assembly; for a lot of people who grew up with other architectures, it's one step too far into insanity.


I have the pleasure of working with PowerPC in my day job. Also a relatively clean architecture. I really do wish that Apple had been more successful with it, that Microsoft would have continued supporting it in NT, that Motorola / IBM had kept up with Intel in raw performance, and that it had a larger user base than it does today.


Not to mention the m68k flat address space. A clean architecture for clean code.


Just look at the 6502. No two instructions follow the same pattern - every one is a moss-covered three-handled family credenza, to quote the good Doctor.


The 6502's instruction set is pretty regular, with most instructions of the form aaabbbcc. For instance, if cc == 01, aaa specifies the arithmetic operation and bbb specifies the addressing mode. Likewise, with cc == 10, aaa specifies the operation and bbb the addressing mode. See http://www.llx.com/~nparker/a2/opcodes.html

The regularity of the 6502's instruction set is partially a consequence of using a PLA for instruction decoding: if a bunch of instructions can be decoded from a simple bit pattern, it saves die space.
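To make the aaabbbcc pattern concrete, here is a minimal Python sketch (not from the thread) that decodes the cc == 01 (ALU) group; the mnemonic and addressing-mode tables follow the layout in the linked opcode reference:

    # Minimal sketch: decode a 6502 opcode from the cc == 01 (ALU) group
    # using the aaabbbcc split described above.
    ALU_OPS   = ["ORA", "AND", "EOR", "ADC", "STA", "LDA", "CMP", "SBC"]             # aaa
    ALU_MODES = ["(zp,X)", "zp", "#imm", "abs", "(zp),Y", "zp,X", "abs,Y", "abs,X"]  # bbb

    def decode_cc01(opcode):
        """Return 'MNEMONIC mode' for an opcode in the cc == 01 group."""
        aaa = (opcode >> 5) & 0b111
        bbb = (opcode >> 2) & 0b111
        cc  = opcode & 0b11
        if cc != 0b01:
            raise ValueError("not a cc == 01 (ALU) instruction")
        return "%s %s" % (ALU_OPS[aaa], ALU_MODES[bbb])

    # Spot checks against well-known opcodes:
    print(decode_cc01(0xA9))  # LDA #imm
    print(decode_cc01(0x8D))  # STA abs
    print(decode_cc01(0x65))  # ADC zp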


After the arithmetic group, the instructions have little or no regularity: they omit addressing modes and swap encodings between modes. There are internal hardware reasons for this, but for the programmer it's chaotic.


Not that it's in the least bit relevant to the discussion, but the moss-covered three-handled family credenza is not a Dr. Seuss quote found anywhere in his books; it came from the '70s-era 'Cat in the Hat' TV adaptation, authored by Chuck Jones.


Cool! I never knew. I guess it shouldn't be considered 'canon' then.


That's just not true. It has irregularities, but most of the instructions fit into a small set of groups that follow very simple patterns.

Secondly, where the 6502 deviates from its tiny set of regular patterns, it is largely by omitting specific forms of instructions, either because the variation would make no sense or to save space - the beauty of the 6502 is how simple it is:

You can fit the 6502 instruction set on a single sheet of paper with sufficient detail that someone with some asm exposure could understand most of it.
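As a sketch of the "omitted because the variation would make no sense" point (not from the thread): enumerating the whole cc == 01 group in Python shows a single hole, $89, which would be STA immediate - storing into an immediate operand is meaningless, so the opcode is simply absent from the documented NMOS 6502 set.

    # Sketch: enumerate every aaa/bbb combination of the cc == 01 group and
    # flag the one slot with no documented opcode (STA #imm, i.e. $89).
    ALU_OPS   = ["ORA", "AND", "EOR", "ADC", "STA", "LDA", "CMP", "SBC"]
    ALU_MODES = ["(zp,X)", "zp", "#imm", "abs", "(zp),Y", "zp,X", "abs,Y", "abs,X"]

    for aaa, op in enumerate(ALU_OPS):
        for bbb, mode in enumerate(ALU_MODES):
            opcode = (aaa << 5) | (bbb << 2) | 0b01
            note = ("  <- omitted: storing to an immediate makes no sense"
                    if (op, mode) == ("STA", "#imm") else "")
            print("$%02X  %s %s%s" % (opcode, op, mode, note))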


The x86 family is the same.


Oh, there is quite a lot of consistency in the structure of x86 instructions across the basic set - consistent register numbering, and many instructions accept the full range of registers and addressing modes. The 6502 had pretty much no two instructions the same.



