So they reached their funding goal, eh?
I'm still not clear on the advantages over GPGPU computing that this represents, but the 45GHz description is very misleading.
By that way of adding all the cores together, a Freescale P5040 would be an 8.8 to 10 GHz processor. Total BS.
The "45GHz" stuff is blatantly misleading, yes, and they've admitted it was a stupid way of describing it.
Though as for what this brings vs. GPGPU: these are 16 totally independent, full RISC cores (later 64, and eventually they hope to reach 1024 to 4096 cores per chip), rather than SIMD units.
You see the big difference when you try to write code for it that requires branches. A typical GPU has a small number of cores handling a large number of data streams. So, say, a GPU that can handle 16 data streams might "only" be able to handle 4 separate control flows. If a branch is taken for an individual data stream within one of those cores, that stream is usually just "paused" while the rest process the instructions in between.
As such, GPUs are very efficient at handling data streams where you do exactly the same thing to a large number of sets of data (say, multiply a large number of pixels by the same value, or by values from another bitmap, or whatever), but horribly inefficient when the processing varies significantly for each stream of data. E.g. consider AIs where the decision trees might diverge massively depending on the data encountered, or simulations where the data is highly interdependent (e.g. stuff simulated on one core should immediately affect the things on the other cores).
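To make the divergence cost concrete, here's a toy Python model of SIMD-style lockstep execution (all names and the example branch are mine, not any real GPU API): lanes that don't take the current branch path are masked off but still burn the pass, so a divergent branch costs the sum of both paths.

```python
def simd_branch(values):
    """Toy model of SIMD branch divergence: all lanes run in
    lockstep, so both sides of a branch execute with per-lane
    masks, and masked-off lanes are 'paused' but still cost time."""
    steps = 0
    mask_then = [v % 2 == 0 for v in values]  # lanes taking the 'then' path
    mask_else = [not m for m in mask_then]    # lanes taking the 'else' path
    out = list(values)

    # Pass 1: 'then' path -- inactive lanes idle but the pass still happens.
    if any(mask_then):
        for i, active in enumerate(mask_then):
            if active:
                out[i] = out[i] // 2
        steps += 1

    # Pass 2: 'else' path -- the previously active lanes now idle.
    if any(mask_else):
        for i, active in enumerate(mask_else):
            if active:
                out[i] = out[i] * 3 + 1
        steps += 1

    return out, steps

print(simd_branch([2, 4, 6, 8]))  # all lanes agree: one pass
print(simd_branch([1, 2, 3, 4]))  # lanes diverge: both passes execute
```

On fully independent cores like Adapteva's, each stream would instead take only its own path, with no masking penalty.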
This is the kind of stuff that Adapteva's chips might be exciting for.
Especially if/when they get their higher core-count chips out the door. The 16-core version is frankly mostly interesting for developers to get a taste of the architecture, and possibly for very low-end embedded solutions (cell phones...). The performance they've shown with the 16-core version is impressive for a $99 board, but nothing an average desktop wouldn't be able to match or beat.
In architecture this is closer to an XMOS chip than to a GPU, though with faster-clocked cores and a memory architecture that makes it simpler to work with (each core has 32KB of in-core static RAM, but can also, subject to timing differences, transparently access the memory of all the other cores and main memory). They're also obviously aiming for a much higher number of cores.
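That "transparently access the other cores' memory" bit works because each core's local SRAM sits in one flat global address space. A rough Python sketch of how such a global address could be composed (the 6-bit row / 6-bit column / 20-bit offset split matches Epiphany's published memory map, but treat the helper itself as illustrative, not an official API):

```python
def epiphany_global_addr(row, col, offset):
    """Compose a 32-bit global address for a mesh core's local memory.
    Assumed layout (per Epiphany's documented memory map): upper 6 bits
    = mesh row, next 6 bits = mesh column, lower 20 bits = offset inside
    the core's 1 MB window (of which only 32 KB is backed by SRAM)."""
    assert 0 <= row < 64 and 0 <= col < 64
    assert 0 <= offset < (1 << 20)
    return (row << 26) | (col << 20) | offset

# A core writing to offset 0x100 of core (32, 9)'s local SRAM would
# just use the composed value as an ordinary pointer:
addr = epiphany_global_addr(32, 9, 0x100)
print(hex(addr))
```

The nice consequence is that there's no explicit DMA or message-passing call needed for the simple case: a plain load or store to a remote address goes out over the mesh, at the cost of extra latency the further away the target core is.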