@nikodr
Both are useful, and the three most popular standards--Ultra-320 SCSI, SATA 3 Gbit/s, and SAS--are used in both small- and large-scale deployments.
SAS is quickly replacing U320 in local and direct-attached disk arrays on servers. (Hewlett-Packard tends to be the gold standard in this area, given their market share.) SAS is also quite popular given its common 2.5" form factor, allowing very dense deployments.
Both U320 and SATA are common in SAN environments, with U320 providing fast and reliable but expensive "first tier" storage for databases, virtualization, and other I/O and bandwidth intensive applications and SATA providing large, inexpensive storage arrays for everything else. As costs continue to fall and reliability continues to improve, SATA is closing in here as well.
To decide what you need, you really need to understand your application. Key factors are I/O operations per second (number of simultaneous requests) and bits per second (bandwidth). Single drive SCSI implementations tend to outperform SATA implementations simply because the feature set is more mature. The drives themselves aren't that different.
If you need more performance than a single drive can provide, you start building an array, using drive statistics to estimate overall I/O performance and interface statistics to estimate bandwidth limits. As bandwidth tops out, you typically add additional channels to your array. In most real-world implementations, drives are added in units of the interface maximum until the I/O requirements are met. For example, an I/O requirement of 20 U320 disks would result in a two-channel array, each channel hosting 14 disks, for a total of 28 disks. In most cases, the array interconnects would be expanded further, with each set of 14 disks split into two seven-disk channels to relieve the bandwidth bottleneck.
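To make the channel math concrete, here's a back-of-the-envelope sketch in Python. The per-disk throughput figure is an assumed value for illustration, not a vendor spec; the 14-disks-per-channel limit reflects U320's 15 SCSI IDs minus one for the HBA:

```python
import math

# Rough U320 array sizing, following the logic above.
DISKS_PER_CHANNEL = 14   # 15 SCSI IDs per U320 bus, minus one for the HBA
CHANNEL_MBPS = 320       # U320 interface bandwidth
DISK_MBPS = 40           # assumed sustained throughput per disk (illustrative)

def plan_array(required_disks):
    # Fill whole channels until the I/O (spindle) requirement is met...
    channels = math.ceil(required_disks / DISKS_PER_CHANNEL)
    total_disks = channels * DISKS_PER_CHANNEL
    # ...then add channels until the aggregate disk bandwidth fits.
    channels = max(channels,
                   math.ceil(total_disks * DISK_MBPS / CHANNEL_MBPS))
    return total_disks, channels

print(plan_array(20))  # -> (28, 4): 28 disks across four 7-disk channels
```

With these assumed numbers, a 20-disk requirement lands on the same 28-disk, four-channel layout described above.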
Redundancy in your array is factored in after your requirements are in place. In the above four-channel, two-controller example, a RAID-5 array would lower the usable storage to 26 disks, which should still meet your requirements. Of course, redundancy impacts performance, so that has to be factored in as well, depending on the type of array implemented. (In a SAN environment, most of this is abstracted away, but it still boils down to basic I/O and bandwidth requirements.)
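The usable-capacity arithmetic can be sketched the same way. Splitting the 28 disks into two 14-disk RAID-5 groups, one per controller, is my assumed layout here, which is what yields the 26-disk figure:

```python
def raid5_usable(total_disks, groups):
    # RAID-5 loses one disk's worth of capacity per parity group.
    per_group = total_disks // groups
    return groups * (per_group - 1)

print(raid5_usable(28, 2))  # -> 26 disks' worth of usable storage
```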
Regarding SSDs, buyer beware. Most consumer-level SSDs are optimized for sequential read benchmarks. Write performance on these devices degrades very quickly, usually to around 2 MB/s--yes, 2 MB/s. Be very careful and do your homework before purchasing any SSD. AnandTech has a great article on SSDs at
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=1. I highly recommend it.
EDIT:
And a recommendation: go with SATA. It's cheap. Even if a drive fails twice and you have to buy two replacements, you'll still have spent less than you would have on a SCSI solution. For most desktop applications, you won't see a big difference in performance, assuming you use a decent SATA controller. Spend your money on RAM (also cheap these days) instead.
Re: RAID, most new motherboards have on-board array controllers, but the array logic is mostly handled by the device driver and, consequently, the CPU; i.e., it's slower than a dedicated array controller. If you use a dedicated controller, I'd recommend a battery-backed cache. It will prevent data loss in the event of a power failure during a write operation.
If you don't know or don't care about your performance requirements, then a simple two-disk mirror (RAID-1) will protect you in the case of a disk failure. As everyone's pointed out, that's not a backup solution, though.