"From what I understand, to go to 3 TB you would need a 128-bit computer. (Probably wrong on this, it is just something I heard at work.)"
I think it was already covered in this thread, but modern disks are addressed in sectors using 48-bit logical block addressing (LBA). Most disks use 512-byte sectors, and the master boot record (MBR) partitioning scheme stores partition information using 32-bit values:
(2^32 - 1) * 512 = 2,199,023,255,040 bytes ≈ 2 TiB
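For the curious, that 32-bit value lives in each of the four 16-byte partition entries in the MBR. A rough sketch of the on-disk layout in C (field names are mine; the on-disk values are little-endian):

    #include <stdint.h>

    /* One of the four 16-byte partition entries in an MBR.
     * The two 32-bit LBA fields are what cap a partition at
     * (2^32 - 1) sectors. */
    struct mbr_partition_entry {
        uint8_t  status;        /* 0x80 = bootable */
        uint8_t  chs_first[3];  /* legacy CHS address of first sector */
        uint8_t  type;          /* partition type ID */
        uint8_t  chs_last[3];   /* legacy CHS address of last sector */
        uint32_t lba_first;     /* first sector, 32-bit LBA */
        uint32_t sector_count;  /* number of sectors, i.e. the 2 TiB wall */
    };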
That 32-bit limit is where the recent 2 TiB boundary comes from. (Note that disks are sold using base-10 values, and 2 TB < 2 TiB, which is why 2 TB drives work just fine.) Newer schemes use larger address values, e.g. 64-bit LBA and GUID partition tables (GPT), which store 64-bit values:
(2^64 - 1) * 512 = 9,444,732,965,739,290,426,880 bytes ≈ 8 ZiB
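Those 64-bit values sit in each GPT partition entry. Again a sketch with my own field names (the real thing is 128 bytes per entry, little-endian):

    #include <stdint.h>

    /* GPT partition entry (128 bytes in the usual layout).
     * first_lba/last_lba are 64-bit, hence the enormous new ceiling. */
    struct gpt_partition_entry {
        uint8_t  type_guid[16];    /* partition type GUID */
        uint8_t  unique_guid[16];  /* unique per-partition GUID */
        uint64_t first_lba;        /* first sector, 64-bit LBA */
        uint64_t last_lba;         /* last sector, inclusive */
        uint64_t attributes;       /* attribute flags */
        uint16_t name[36];         /* partition name, UTF-16LE */
    };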
8 ZiB is a lot of storage, but it doesn't solve the problem for legacy systems. (And we'll hit that new limit faster than anyone thinks, I'm sure.) 512-byte sectors have never been a universal standard, but most hard disks use them. Newer disks use 4096-byte sectors*, allowing for much larger drives in a 32-bit space:
(2^32 - 1) * 4096 = 17,592,186,040,320 bytes ≈ 16 TiB
It's already possible to exceed that limit with current drives and controllers, e.g. a 6 x 3 TB array.
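If you want to check the arithmetic yourself, here's a minimal C program (nothing disk-specific, just the multiplications done in 64-bit math so they don't overflow):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t max_lba = UINT32_MAX;  /* 2^32 - 1 sectors */

        /* Do the math in 64 bits; these products overflow 32-bit ints. */
        printf("MBR, 512-byte sectors:  %llu bytes\n",
               (unsigned long long)(max_lba * 512));
        printf("MBR, 4096-byte sectors: %llu bytes\n",
               (unsigned long long)(max_lba * 4096));
        return 0;
    }

It prints 2199023255040 and 17592186040320, matching the figures above.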
The maximum number of bytes you can address in, e.g., a file on most modern systems is
2^64 - 1 = 18,446,744,073,709,551,615 bytes ≈ 16 EiB
Again, that's a big value, but the mainstream will reach that limit soon. Many fields are already struggling with it, particularly finance and the physical sciences.
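On the programming side, that 64-bit byte limit is exactly what a 64-bit off_t buys you on POSIX systems. A minimal sketch (the file name is just a placeholder):

    /* Ask for a 64-bit off_t on 32-bit POSIX systems; this must
     * come before any #include. */
    #define _FILE_OFFSET_BITS 64

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big.dat", O_RDONLY);
        if (fd < 0)
            return 1;

        off_t size = lseek(fd, 0, SEEK_END);  /* 64-bit file offset */
        printf("file size: %lld bytes\n", (long long)size);
        close(fd);
        return 0;
    }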
File systems impose their own limits on top of this, since they allocate space in clusters, and clusters are multiples of sectors.
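As a concrete example of a cluster-based limit, take FAT32, which has roughly 2^28 usable cluster numbers and clusters of up to 32 KiB (figures from memory, so double-check them):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t clusters     = 1ULL << 28;  /* ~2^28 usable cluster numbers */
        uint64_t cluster_size = 32 * 1024;   /* 32 KiB = 64 x 512-byte sectors */

        /* 2^28 * 2^15 = 2^43 bytes, i.e. ~8 TiB per FAT32 volume */
        printf("FAT32 ceiling: %llu bytes\n",
               (unsigned long long)(clusters * cluster_size));
        return 0;
    }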
* Most still emulate 512-byte sectors to support legacy systems (so-called "512e" drives).
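On Linux you can actually watch that emulation happen: the kernel reports the logical (emulated) and physical sector sizes separately. A quick sketch (/dev/sda is a placeholder, and you'll need read access to the device):

    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/sda", O_RDONLY);
        if (fd < 0)
            return 1;

        int logical = 0, physical = 0;
        ioctl(fd, BLKSSZGET,  &logical);   /* what the OS addresses */
        ioctl(fd, BLKPBSZGET, &physical);  /* what the platters use */

        /* A 512e drive reports logical=512, physical=4096. */
        printf("logical %d, physical %d\n", logical, physical);
        close(fd);
        return 0;
    }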
Now you can go back to work and school your coworkers. ;-)
EDIT: With respect to AmigaOS, I think we should dump RDB (the Rigid Disk Block) and move to GPT. As far as I know, while RDB and MBR can live together on the same disk, RDB and GPT cannot.