There are many types of accelerators. Although today the term is mostly used for CPUs, there was a time when video cards were called graphics accelerators.

There are two ways to accelerate something: brute force and clever tinkering.
Brute force is just putting more MHz somewhere. A 14 MHz CPU is capable of processing data twice as fast as a 7 MHz CPU. But even then, it's not that simple... you still have to take into account how large a 'chunk' of data the CPU can handle at a time. A good analogy would be coke bottles. Let's say the amount of liquid a bottle can hold is the MHz. A 2 litre bottle would be twice as fast as a 1 litre bottle, right? If you were to turn them upside down and empty their contents into identical receptacles, the one under the 2 litre bottle should fill twice as fast, no? Well, no, because both bottles have the same size opening, so there's only so much liquid that can pass through at once. They would empty themselves at exactly the same speed. But INSIDE the bottle... there is a lot more liquid!
Same analogy again, but with identical bottles: one with a 1" opening, one with a 2" opening. Even though they hold the same amount of liquid (read: the same MHz), one will definitely empty much faster than the other, because a lot more liquid can go through its opening at once. That opening is the bus size.
The other type of acceleration, clever tinkering, relies less on raw speed than on smarter ways of doing things. One example is fixed point mathematics, which was once hugely popular because floating point units used to cost an arm and a leg. Think of the CPU as a human being. It's much simpler for you to calculate 2000 + 2000 than 2000 X 2000. It is the same with computers: the simpler the operation, the faster it gets done. When dealing with fractions (floating point maths) like 1.111 + 1.111, you either need a floating point unit to do it in hardware, or you have to do it in software (read: slow). However... clever thinking leads to fixed point. In fixed point maths, you simply assume the decimal point is always in the same place. So our previous operation can be simplified to 1111 + 1111, and when the CPU returns 2222, you place an imaginary decimal point to get 2.222, and there you go... fast 'floating point' operations. This is an overly simplified example, as there are other things to take into account, such as the fact that your integers can only hold smaller numbers, since you dedicate part of them to the digits after the decimal point... then again, that can be circumvented by yet more creative thinking... and that is why computers are so complex.
