It is true that many junior developers write OO code, especially in Java, that has overly long inheritance chains, but junior developers make all kinds of mistakes. This is simply one of them.
I program in C, C++, Objective-C, Java, PHP, Python, and the web stack (i.e. HTML/CSS/JavaScript). I do so for a living and am considered a senior developer by my peers. What does that mean? Mostly, it means I've worked my way out of making many of the junior-level mistakes when programming.
Are there better developers than me? Hell yes.
Do I see them often? No.
Do I crave to meet and learn from them? Absolutely.
For example, Karlos is a developer I have spoken to many times on A.org, and his ability and coding level usually make my head spin. He is VERY talented. I love learning from him.
How does all this pertain to OO? OO is good when used right. It does tend to produce slower implementations, though, because any time you make a language easier for humans, you make it a bit harder for the computer. Most modern OO languages make the safe assumption that you're running on a modern computer with modern processing power, so this extra overhead isn't much of an issue.
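To make the overhead concrete (a minimal C++ sketch of my own, not anyone's benchmark): a virtual call is dispatched through a vtable, an indirection a plain function call doesn't pay and one the optimizer has a harder time inlining.

    #include <cstdio>

    // Plain function: the compiler can call it directly and
    // will usually inline it outright.
    static int add_direct(int a, int b) { return a + b; }

    // OO version: calls through a base pointer go via the vtable,
    // an extra indirection the direct call doesn't have.
    struct Adder {
        virtual ~Adder() = default;
        virtual int add(int a, int b) const { return a + b; }
    };

    int main() {
        Adder base;
        const Adder* p = &base;  // dynamic dispatch from here on
        std::printf("%d\n", add_direct(2, 3));  // direct, trivially inlined
        std::printf("%d\n", p->add(2, 3));      // vtable lookup
        return 0;
    }

In a trivial case like this a good compiler may well devirtualize the call anyway; the cost only really shows up when the dispatch genuinely can't be resolved at compile time.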
OO features can be leveraged on slower machines without much of a slowdown, as long as you don't go overboard and turn every single idea in your code into an object. In my experience there is quite a bit of code reuse with OO design. The problem is, fundamentally, trust: most programmers don't give their trust to other people's code so easily. When it comes time to use another person's code, there is often a sinking feeling (when the source isn't available) that bottlenecks, problems and poor implementations lurk inside this black-box code.
The amount of time saved by using the C++ STL classes for things like vectors, maps and strings, over reimplementing the same thing (over and over again) in C, is amazing. If I were trying to eke out the most possible performance from a game on a limited architecture (such as, say, an Amiga), then using the STL classes would have to be evaluated case by case (and getting the most performance would likely mean not using them).
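As a small sketch of what those classes buy you (plain standard C++, nothing exotic): each of the three containers below replaces a pile of hand-rolled C, i.e. a realloc'd array, a hand-written lookup structure, and manual string buffer arithmetic.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // Growable array: no malloc/realloc/free bookkeeping.
        std::vector<int> scores = {90, 85, 72};
        scores.push_back(100);

        // Sorted key/value lookup: in C this is a hand-written
        // tree or a linear scan over an array of structs.
        std::map<std::string, int> wins;
        wins["Karlos"] = 12;

        // String that manages its own memory: no strlen/strcpy
        // buffer arithmetic to get wrong.
        std::string greeting = "hello, ";
        greeting += "world";

        std::printf("%s (%zu scores, Karlos=%d)\n",
                    greeting.c_str(), scores.size(), wins["Karlos"]);
        return 0;
    }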
There are other factors to consider too, such as executable size and memory consumption. Modern languages can well afford to trade heavier resource usage for speed of development, because most modern computers have an order of magnitude (or more) more resources to spare. Don't blame OO code for not being well suited to a machine that was built before OO was a term, let alone a popular one.
My major gripe with OO is that it often leads to overdesign: unused methods implemented just to make an API look "complete", very long inheritance chains, and so on. OO seems to bring out the worst in (some) mediocre programmers.
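A caricature of the overdesign I mean (all the names here are hypothetical, invented for the example):

    #include <cstdio>

    // A deep chain plus methods implemented only for "completeness".
    class Shape {
    public:
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };
    class AbstractPolygon : public Shape {};           // adds nothing
    class RegularPolygon : public AbstractPolygon {};  // still nothing
    class Rectangle : public RegularPolygon {          // not even a regular polygon!
    public:
        Rectangle(double w, double h) : w(w), h(h) {}
        double area() const override { return w * h; }
        void rotate(double) {}  // here "for completeness", never called
        void shear(double) {}   // ditto
    private:
        double w, h;
    };

    // Everything the caller actually needed:
    struct Rect { double w, h; };
    double rect_area(const Rect& r) { return r.w * r.h; }

    int main() {
        Rectangle deep(3, 4);
        Rect flat{3, 4};
        std::printf("%f %f\n", deep.area(), rect_area(flat));
        return 0;
    }

The deep version costs a vtable, two do-nothing layers and two dead methods to deliver what the two-line struct already does.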
And if you look at how many times programmers still reinvent the wheel, OO has not fulfilled its promise of code reusability either. Don't quote me on it, but I think I once read that this is inherent to OO, and that generic programming is a better fit for reuse.
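For what that claim is worth, here is a minimal sketch of what I understand generic programming to mean in C++: one template definition reused for any type that supports operator<, with no inheritance chain in sight.

    #include <cstdio>
    #include <string>

    // Generic programming: one definition, reused for every type
    // that supports operator< -- no base class, no virtual calls.
    template <typename T>
    const T& max_of(const T& a, const T& b) {
        return (a < b) ? b : a;
    }

    int main() {
        std::string s1 = "apple", s2 = "pear";
        std::printf("%d\n", max_of(3, 7));            // works for int
        std::printf("%f\n", max_of(2.5, 1.5));        // and double
        std::printf("%s\n", max_of(s1, s2).c_str());  // and std::string
        return 0;
    }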
greets,
Staf.