@Karlos
Since your 030 has only 256 bytes of data cache, you might want to redesign your character table like so:
May I express my doubts that this will make the table lookups any faster?
While the bit array needs less memory and can therefore fit into the 68030's tiny data cache, the extra instructions required for the bitwise access will probably cancel out the performance gained from the cache hits. (I haven't counted the instruction cycles yet.)
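For reference, the two table designs could look roughly like this on the C side (just a sketch; char_table, char_bits and the macro names are mine, not taken from the actual code):

/* alternative (1): one byte per character, 256 bytes in total */
unsigned char char_table[256];
#define IN_SET_BYTE(c)  (char_table[(unsigned char)(c)] != 0)

/* alternative (2): one bit per character, packed into eight long words (32 bytes) */
unsigned long char_bits[8];
#define IN_SET_BIT(c)   ((char_bits[(unsigned char)(c) >> 5] \
                          & (1UL << ((unsigned char)(c) & 31))) != 0)

The second form is what alternative (2) below assumes the compiler has to translate.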
Here's some example code in M68k assembly to support my claim:
; input: d0 = character (values 0...255)
; input: a0 = address of lookup table
; output: status bit Z (Zero)
; alternative (1) byte array lookup
tst.b (a0,d0.w) ;(cache miss likely: the table alone fills the 68030's 256-byte data cache)
; alternative (2) bit array lookup (roughly what a C compiler would generate)
move.w d0,d1 ;copy character (d0 is still needed for the bit number)
lsr.w #5,d1 ;d1=table offset (long words)
move.l (a0,d1.w*4),d1 ;d1=long word value
and.l #31,d0 ;d0=bit number (modulo 32)
moveq #1,d2
lsl.l d0,d2 ;d2=bit mask
and.l d2,d1 ;Z set if the bit is clear (same meaning as tst.b above)
; alternative (3) bit array lookup (asm optimized)
move.w d0,d1 ;copy character (d0 is still needed below)
lsr.w #3,d1 ;d1=table offset (bytes)
btst d0,(a0,d1.w) ;(for memory operands, btst takes d0 modulo 8 as the bit number)
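For completeness, the hand-optimised lookup corresponds to a byte-wise bit table on the C side, roughly (again just a sketch with made-up names):

/* alternative (3) seen from C: 32 bytes, one bit per character, byte-wise layout */
unsigned char char_bits8[32];
#define IN_SET_BIT8(c)  ((char_bits8[(unsigned char)(c) >> 3] \
                          & (1 << ((unsigned char)(c) & 7))) != 0)

Note that this byte-wise layout is not bit-identical to the long-word layout of alternative (2) on the big-endian 68k, so the table has to be built to match whichever access pattern is used.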
Depending on how many wait states the cache miss during the byte array lookup (alternative 1) actually costs, even the hand-optimised bit array lookup (alternative 3) may end up slower because of its extra instructions, let alone the C version (alternative 2). If the code runs on a 68040 or 060, things will look even worse for the bit array...
I think this kind of performance tweaking is way off-topic for a beginners' session in C, anyway.
Sorry for nagging. ;-)