Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.
Berkeley RISC started because they noticed that, when compiling Unix, the compiler used only about 30% of the 68000 instruction set. It wasn't using all the addressing modes of the ADD instruction, for instance. I don't know if they spent any time figuring out whether the compiler would have had a hard time using them, or even confirming that it would never use them; just that compiling Unix didn't use them.
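To make that concrete, here's a rough illustration of my own (not from the Berkeley study): the 68000's ADD accepts a dozen-odd source addressing modes, but straightforward compiled code tends to exercise only a handful of them. The assembly in the comments is what a plausible compiler of that era might emit, not verified output.

    /* Illustrative only -- the 68000 code in the comments is what a plausible
     * compiler of that era might emit, not output from the Berkeley study.  */
    int sum(const int *a, int n, int bias)
    {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i] + bias;   /* ADD.L (A0)+,D0  ; indirect with post-increment
                                   ADD.L D1,D0     ; data-register direct        */
        return s;               /* ADD also accepts modes like d8(An,Xn),
                                   absolute long, and PC-relative with index,
                                   which code like this never asks for.          */
    }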
They then went on to design a CPU with register windows, which the people writing the compilers realised was a terrible design choice. Investing in your compiler and CPU design before you commit to it is very important. Adding instructions because you can come up with a single assembler fragment that performs better is a naïve and ultimately terrible idea.
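For anyone who hasn't run into register windows: the idea was that each procedure call gets a fresh bank of registers from a fixed hardware ring instead of spilling to the stack, with the hardware trapping to memory when the ring runs out. A toy model (my own sketch; sizes and names made up) of just that overflow behaviour:

    /* Toy model of SPARC-style register windows. It only counts the
     * overflow/underflow traps; it doesn't copy register contents.   */
    #include <stdio.h>

    #define NWINDOWS 8            /* fixed hardware ring (size illustrative) */

    static int live = 1;          /* windows currently held in hardware      */
    static int spills, fills;     /* trap counts, i.e. memory traffic        */

    static void call(void)        /* roughly SPARC "save"                    */
    {
        if (live == NWINDOWS) {   /* ring full: window overflow trap,        */
            spills++;             /* oldest caller's window goes to memory   */
            live--;
        }
        live++;
    }

    static void ret(void)         /* roughly SPARC "restore"                 */
    {
        live--;
        if (live == 0) {          /* caller's window was spilled earlier:    */
            fills++;              /* window underflow trap, reload it        */
            live++;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 100; i++) call();   /* a deep call chain         */
        for (int i = 0; i < 100; i++) ret();
        printf("spills=%d fills=%d\n", spills, fills);
        return 0;
    }

Past a call depth of eight the "free" registers turn straight back into memory traffic, the whole window set has to be saved on a context switch, and the fixed window size gets in the way of the compiler's own register allocation, which as I understand it is roughly the complaint the compiler people had.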
They coined the term, but there was prior art. I believe the CDC 6600 (http://en.wikipedia.org/wiki/CDC_6600) was the earliest example of what inspired RISC. The motivation there was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was later put to good use in the Amiga.
x86 compilers can now use a lot of the fancy CISC instructions, which the CPU internally treats as little more than macros for a set of RISC-like micro-ops anyway.
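A rough example of what that looks like (my own sketch; exact codegen and micro-op counts vary by compiler and core):

    /* Illustrative only: output differs across compilers and CPU generations. */
    long add_from_memory(long x, const long *p)
    {
        return x + *p;   /* gcc/clang at -O2 typically emit something like
                              mov rax, rdi
                              add rax, [rsi]   ; one memory-operand "CISC" add
                            which the front end decodes into a load micro-op
                            plus an add micro-op (often micro-fused, but still
                            a load and an add to the execution units).          */
    }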