Iirc it's not such a meaningful distinction anymore. "CISC" x86 uses micro-operations internally. "RISC" ARM has several different instruction encodings (ARM, Thumb, Thumb-2, A64). Increasing numbers of people are working in high level languages anyway.
From that page, which collects a number of Usenet posts by John Mashey:
> The RISC characteristics:
> a) Are aimed at more performance from current compiler technology (i.e., enough registers).
> OR
> b) Are aimed at fast pipelining
> - in a virtual-memory environment
> - with the ability to still survive exceptions
> - without inextricably increasing the number of gate delays (notice that I say gate delays, NOT just how many gates).
Point (b) is where RISC chips really pulled away from CISC in architectural design, especially chips like the MIPS, which Mashey worked on: in several places the MIPS exposed the tricks it used to pipeline more aggressively, even at the expense of making compilers somewhat harder to write and/or making human assembly-language programmers think a bit harder. The lack of complicated addressing modes (post-increment, scale-and-offset, etc.), the lack of register-memory ALU opcodes, and the total lack of memory-memory operations are still very common features of RISC design.
That makes it sound like RISC changed everything. How much did it, really? CISC has gone from being a true competitor to RISC to being a sort of abstraction layer on top of it.
That's not really true. x86 chips are not secretly RISC inside: the micro-ops often correspond one-to-one with individual instructions, the ISA has complex instructions like 'lea' that are quite efficient, and you can't really abstract away things like variable-length instruction encoding.