Amiga.org

Amiga computer related discussion => Amiga community support ideas => Topic started by: rednova on January 27, 2012, 05:02:34 PM

Title: a golden age of Amiga
Post by: rednova on January 27, 2012, 05:02:34 PM
Dear Friends:

I believe that if we all work hard and stay dedicated, we can bring about a new
golden age of the Amiga.
Let's do it !!!

Rednova
Title: Re: a golden age of Amiga
Post by: tribz on January 27, 2012, 05:59:06 PM
Other than wishful thinking, what's your plan? :)
Title: Re: a golden age of Amiga
Post by: persia on January 27, 2012, 06:03:07 PM
(http://cache.ohinternet.com/images/3/39/Gnomes_-_Profit.png)
Title: Re: a golden age of Amiga
Post by: rednova on January 27, 2012, 06:46:30 PM
Hi:

@tribz
I do have a plan !!!

My plan is to create nice animated cartoons, like Eric Schwartz does, using my
amazing Amiga computer. And not only to enrich the Amiga scene, but also
to show what Amigas can do (great Amiga cartoons can be created using only
the Amiga and the MovieSetter animation software).
I am also planning to make new Amiga games using AMOS Professional,
but including beautiful graphics made in LightWave 3D.
That's my personal plan :)
But you can help me immensely if you also dedicate yourself to creating
new Amiga animations, games, music, etc., whatever your skill.
Cheers !!!

Rednova
Title: Re: a golden age of Amiga
Post by: wawrzon on January 27, 2012, 07:01:09 PM
Quote from: rednova;677774
Hi:

@tribz
I do have a plan !!!

My plan is to create nice animated cartoons, like Eric Schwartz does, using my
amazing Amiga computer. And not only to enrich the Amiga scene, but also
to show what Amigas can do (great Amiga cartoons can be created using only
the Amiga and the MovieSetter animation software).
I am also planning to make new Amiga games using AMOS Professional,
but including beautiful graphics made in LightWave 3D.
That's my personal plan :)
But you can help me immensely if you also dedicate yourself to creating
new Amiga animations, games, music, etc., whatever your skill.
Cheers !!!

Rednova


Perhaps you should dedicate yourself to learning your software first, because what I remember seeing of you wasn't even at the level of a tutorial using example objects moving along simple splines (if those were splines, that is). I know everybody starts somewhere, been there done that, but that comes before making bold demands.
Title: Re: a golden age of Amiga
Post by: Darrin on January 27, 2012, 07:47:05 PM
I got "hard at work" the other day when one of the secretaries walked past me wearing a tight shirt and a little black mini-skirt.  Unfortunately it didn't usher in a new Golden Era of the Amiga.  :(
Title: Re: a golden age of Amiga
Post by: A1260 on January 28, 2012, 01:43:13 AM
Quote from: rednova;677774
Hi:

@tribz
I do have a plan !!!

My plan is to create nice animated cartoons, like Eric Schwartz does, using my
amazing Amiga computer. And not only to enrich the Amiga scene, but also
to show what Amigas can do (great Amiga cartoons can be created using only
the Amiga and the MovieSetter animation software).
I am also planning to make new Amiga games using AMOS Professional,
but including beautiful graphics made in LightWave 3D.
That's my personal plan :)
But you can help me immensely if you also dedicate yourself to creating
new Amiga animations, games, music, etc., whatever your skill.
Cheers !!!

Rednova


You could start by learning to code and then take on the bounties for Amiga software... that would be a first step toward getting the golden age of the Amiga back. The Amiga today is seriously lacking in the software department. The artistic things you mention we don't really need; there are more than enough people already who can do all that.
Title: Re: a golden age of Amiga
Post by: bbond007 on January 28, 2012, 02:06:58 AM
Quote from: rednova;677756
Dear Friends:

I believe that if we all work hard and stay dedicated, we can bring about a new
golden age of the Amiga.
Let's do it !!!

Rednova


I think it is almost here in the form of new hardware like the FPGA Replay (Minimig 2.0) and the Natami. On the OS level you have AROS and MorphOS. Did I leave anything out?
Title: Re: a golden age of Amiga
Post by: slaapliedje on January 28, 2012, 06:04:13 PM
Quote from: Darrin;677784
I got "hard at work" the other day when one of the secretaries walked past me wearing a tight shirt and a little black mini-skirt.  Unfortunately it didn't usher in a new Golden Era of the Amiga.  :(

That made my morning!

Unfortunately, to bring about another Amiga Golden Era we'd need something like the X1000 on the same pricing scale as a mid-range PC, or something like the Natami out and about. Not to mention that AmigaOS is missing a modern web browser: the browser on pretty much any phone is at the level of AWeb or IBrowse, and that's not even counting the smartphones.

I still can't even get Netsurf working on my A4000D.  

I think what would do it is getting AROS ported to run on everything and anything.  That would be the first step.  Then get things running natively in it.  Gecko or WebKit straight on using Zune (that's the MUI replacement, right?  It threw me off when I first saw it, because of Microsoft's Zune...) would be awesome.

Those are my thoughts.  I haven't even been able to install the latest version of Icaros on anything but a virtual machine.  It flat out locked up during the boot up on my HP touchsmart.  Maybe I needed to do a hash check on the ISO...  

I am all for bringing back ANY platform from the past at this point, seriously tired of Windows, and Linux just works too well for me to have a reason to play around with it much anymore.  Give me something I can hack around on!!

slaapliedje
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 29, 2012, 02:05:05 PM
I've got a plan.

Plan A phase I is to produce a new Amiga 1200 accelerator card featuring an ARM cpu and software 68k emulator in flash ROM (so instant boot and your Amiga won't know the difference).

Phase II is a replacement Amiga 1200 motherboard, again using an ARM cpu, but for software emulation of the chipset only; an accelerator card is needed for the CPU (but original A1200 accelerators will still work).  So you can replace your accelerator for a faster one, and/or replace your mainboard if that fails.  The mainboard chipset will be faster than AGA but otherwise compatible.

Phase III is both of the above on one board, eliminating the A1200 edge connector, but perhaps with socketed CPU.  Maybe plan for both Amiga and ATX case compatibility (provide both sets of mounting holes).

Phase IV is original Amiga case to put it all in!
Title: Re: a golden age of Amiga
Post by: wawrzon on January 29, 2012, 03:23:10 PM
Quote from: Mrs Beanbag;678101
I've got a plan.


things like that have been proposed, discussed and forgotten..
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 29, 2012, 04:34:11 PM
Quote from: wawrzon;678111
things like that have been proposed, discussed and forgotten..

I know... but... it's still a plan.

I'm not the best person for carrying through with plans to be honest, I tend to get all excited about some new idea and then run out of steam after about a month.  But I know people who could probably help me with this, if they are willing.  Plenty of amateurs manage to knock up ARM boards these days.  (Armatures?)  The hardware side of things is getting easier all the time as technology progresses, and the software side of things is already pretty much covered by UAE.

Actually phase III might be easiest to start with now I come to think of it, trying to interface with the A1200 edge connector is probably the most awkward bit.
Title: Re: a golden age of Amiga
Post by: save2600 on January 29, 2012, 04:43:12 PM
I believe a new golden age of Amiga could be enjoyed on one of our moonbases someday. Amiga on the moon!
Title: Re: a golden age of Amiga
Post by: Karlos on January 29, 2012, 04:48:22 PM
How about getting AROS working properly on the Raspberry Pi and working on an integrated 68K JIT for ARM?
Title: Re: a golden age of Amiga
Post by: wawrzon on January 29, 2012, 04:53:21 PM
Quote from: Mrs Beanbag;678123
I know...

Others have asked similar questions again and again; I myself proposed something like that on a German hardware-dedicated forum, with no outcome, only it was x86 at the time. ARM might be simpler because of endianness, but there aren't many people around here who could carry such a project through, especially to commercial availability. The closest to what you propose is currently the GBA1000 with its, I believe, 100MHz 060 accelerator:
http://www.gb97816.homepage.t-online.de/
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 29, 2012, 05:12:48 PM
Quote from: wawrzon;678128
Others have asked similar questions again and again; I myself proposed something like that on a German hardware-dedicated forum, with no outcome, only it was x86 at the time. ARM might be simpler because of endianness, but there aren't many people around here who could carry such a project through, especially to commercial availability. The closest to what you propose is currently the GBA1000 with its, I believe, 100MHz 060 accelerator:
http://www.gb97816.homepage.t-online.de/

ARM is simpler for a great many reasons.  They're made for embedded use, so they're relatively simple to put on a board.  Software emulation isn't a problem.  Or at least it's not the first problem.  A simple ARM board with two CPUs that can run anything at all would be a good start.

I need to talk to some hardware people.  I'm on the Edinburgh tech scene and I know people do this kind of stuff.  Not Amiga stuff yet, but, well, I'll let you know how I get on.

That 060 board looks pretty cool.  Real 060s are quite expensive, though; that would be a barrier there.
Title: Re: a golden age of Amiga
Post by: wawrzon on January 29, 2012, 07:07:15 PM
It is his private project, though an open one, and a few people have got similar systems running. I think the 060 accelerator is even more unique. Of course the 060 is not an ideal candidate; I suppose you are aware that there are two FPGA 68k cores in the works, the TG68 and the Natami 050, but one hardly has any applications yet and the other isn't finished. An ARM accelerator might be interesting if you can pull off anything like that.
Title: Re: a golden age of Amiga
Post by: Khephren on January 29, 2012, 07:22:07 PM
I don't think we will ever have another 'golden era'; that was, I guess, 1986-94 in Europe.
We can have some nice hobbyist hardware and software.

We are going to have to move off the 68k soon. Freescale appear to be phasing out a lot of the lower-end chips, and the 040 and 060 parts are still at crazy prices. An FPGA (or other) solution can't come fast enough for me (in terms of accelerators for the classics).
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 29, 2012, 11:31:24 PM
Quote from: Khephren;678145
I don't think we will ever have another 'golden era'; that was, I guess, 1986-94 in Europe.
We can have some nice hobbyist hardware and software.

We are going to have to move off the 68k soon. Freescale appear to be phasing out a lot of the lower-end chips, and the 040 and 060 parts are still at crazy prices. An FPGA (or other) solution can't come fast enough for me (in terms of accelerators for the classics).

The successor to Freescale's DragonBall is the i.MX series, which is an ARM-based CPU, so I'd still consider this "linear development".  In other words, if I could do this with the Freescale component it would still be "real Amiga" spiritually, if that makes any sense to anyone.

ARM is the future of computing though, I'm sure of that.  x86 has to end sooner or later, it's too stupid to continue indefinitely.  Surprised it's lasted this long to be honest.

Still, now we can buy a Megadrive in a handheld device.  Not sure what the hardware in one of these is exactly, but there must be a market for a similar Amiga product.
Title: Re: a golden age of Amiga
Post by: matthey on January 30, 2012, 01:00:28 AM
Quote from: Mrs Beanbag;678166
The successor to Freescale's DragonBall is the i.MX series, which is an ARM-based CPU, so I'd still consider this "linear development".  In other words, if I could do this with the Freescale component it would still be "real Amiga" spiritually, if that makes any sense to anyone.


Sorry, it doesn't make sense to me. Freescale could have abandoned the ColdFire DragonBall for various reasons, probably related to small power-efficient devices, customer wants/needs and cost. There are many aspects of ARM that make it easier to follow the crowd, but then where is the creativity and originality? If you want simple and want to follow the crowd, then buy an x86 and run UAE. I think an enhanced 68k (with ColdFire and other improvements) still has possibilities, because I think it can offer better code density than ARM with Thumb-2 while being easier to program and more powerful (although not as energy efficient). It was dropped, and is still relegated to the cellar, for pure marketing reasons, while it is proven technology (68060) that can be improved and scaled up with today's technology. The Natami FPGA 68k+ CPU should be as fast as the last 68k processors, and it would be possible to burn ~500MHz processors that would provide enough power for most of today's computing needs.

Quote from: Mrs Beanbag;678166

ARM is the future of computing though, I'm sure of that.  x86 has to end sooner or later, it's too stupid to continue indefinitely.  Surprised it's lasted this long to be honest.


ARM will do well in the low end device market where energy efficiency matters. They are getting faster too, but I think x86 will be able to hold them off on the high end. The x86 is not the same architecture it originally was. It has its baggage, but so does ARM, which has gone through three architecture changes itself to arrive at an efficient CISC-style encoding much like the 68k had all along.

Quote from: Mrs Beanbag;678166

Still, now we can buy a Megadrive in a handheld device.  Not sure what the hardware in one of these is exactly, but there must be a market for a similar Amiga product.


Probably an all-in-one (68k + custom chips) chip burned from an FPGA design, just as could be done for the Natami when it's finished. We need Amiga users to support what we have and consolidate development efforts.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 30, 2012, 05:30:07 PM
Quote from: matthey;678171
There are many aspects of ARM that make it easier to follow the crowd, but then where is the creativity and originality? If you want simple and want to follow the crowd, then buy an x86 and run UAE. I think an enhanced 68k (with ColdFire and other improvements) still has possibilities, because I think it can offer better code density than ARM with Thumb-2 while being easier to program and more powerful (although not as energy efficient). It was dropped, and is still relegated to the cellar, for pure marketing reasons, while it is proven technology (68060) that can be improved and scaled up with today's technology. The Natami FPGA 68k+ CPU should be as fast as the last 68k processors, and it would be possible to burn ~500MHz processors that would provide enough power for most of today's computing needs.
x86 over my dead body!  I'm as big a fan of the 68k as anyone, but let's face it: there is never going to be a 68k CPU in the mainstream market ever again; blame marketing all you want, but that is a fact.  The Natami project is awesome, but their 68k CPU, in an FPGA or otherwise, is always going to be an enthusiast product with a price tag to match, and there's nothing wrong with that, but it's not what I'm aiming for.  If you care about code density, though, x86 wins, because it's still an 8-bit instruction set at heart, so it can encode some instructions in a single byte, whereas 68k instructions are 16-bit and therefore always multiples of two bytes.

UAE again solves an entirely different problem.  We can do that already of course on existing hardware but having to load it up on another operating system just isn't the same.  It relegates it to the lower social class of historical curiosity.  Emulation isn't living, only reliving.  It has no future, only a past.

Quote
ARM will do well in the low end device market where energy efficiency matters. They are getting faster too, but I think x86 will be able to hold them off on the high end.

x86 will still be around for a few more years at the top end, but it won't hold on forever.  Heat dissipation is already becoming an unmanageable problem in high-performance systems, whereas ARM is already being investigated for servers, and Nvidia are going to be pushing it for mainstream desktop/laptop use.  AMD and Intel don't just supply top-end CPUs; once their mid-range and server markets fall away, they're going to find their premium products much more difficult to keep competitive.

Quote
We need Amiga users to support what we have and consolidate development efforts.

This much I agree with, and I will say, I fully support Natami in their endeavours, and I'd buy one if they were available already, but to be honest I think their project might just be a little too ambitious, which is perhaps why it's taking them so long.  They have made work for themselves with their philosophy, they really are doing things the hard way and hats off to them.  I have no intention of trying to compete with them, it just solves a different problem as far as I'm concerned.

But let's not forget what else we already have in the community: AROS's new Kickstart and 68k JIT for ARM CPUs could come in very handy for the scheme I have in mind.
Title: Re: a golden age of Amiga
Post by: bloodline on January 30, 2012, 06:41:07 PM
Quote from: Karlos;678126
How about getting AROS working properly on the Raspberry Pi and working on an integrated 68K JIT for ARM?
+1
Title: Re: a golden age of Amiga
Post by: wawrzon on January 30, 2012, 07:16:23 PM
@Mrs Beanbag
Interesting thoughts, I must admit.
As concerns AROS, though, I must mention that AFAIK it only runs hosted, and it is also envisioned to run hosted on the Pi, at least initially. Unfortunately, it looks like the ARM developer is backing off due to lack of time, so..
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 30, 2012, 07:37:44 PM
Well, my first concern isn't performance, so any 68k emulator I can get to compile for ARM will do to begin with.  It should be possible to pull UAE to bits to get something running.  You can get ARM chips of up to 1GHz for less than £50!  Chipset emulation could be done with a lot less.  I need to speak to some hardware folks first.
Title: Re: a golden age of Amiga
Post by: trydowave on January 30, 2012, 07:39:15 PM
The golden age of the Amiga was 1991 for me. That's the year I bought Turrican 2. The age when the Amiga first wowed me was 1989 and Shadow of the Beast. To be honest, for a new golden age the Amiga would have to do the same thing that the 1000 did: totally revolutionise the gaming/computing world like it did back in '85.
If the Amiga hadn't been the best it wouldn't have sold, and that's why it slowly died out when the console/PC market came in. But it was king for many years. New Amigas just don't cut it. They can't even match a mid-range PC, let alone the latest-gen consoles and the best PC gear money can buy.
Not to mention the fact that software is developed and marketed so differently nowadays. I'm happy to stick with the old Amiga, even though it's currently in pieces in my spare room :(
Just my two cents.
Title: Re: a golden age of Amiga
Post by: matthey on January 31, 2012, 12:51:39 AM
Quote from: Mrs Beanbag;678275
If you care about code density though, x86 wins, because it's still an 8-bit instruction set at heart so it can encode some instructions in a single byte, whereas 68k instructions are 16 bit and therefore always multiples of two bytes.


The x86 has variable-length instructions from hell that make up for the short ones. The average code density of x86 is better than most RISC encodings, but a ways behind the 68k and Thumb-2 ARM, which both use 16-bit instruction encodings. The 64-bit x86 is a little worse yet at code density. I think the 68k can be improved 5-10% in code density over the 68020 or ColdFire with the additions the Natami is likely to add, without significantly increasing the complexity of the decoder. Those little ARM devices have good battery life but are dogs, and x86 devices are fast but chew through batteries. An enhanced 68k could hit the sweet spot in between. We know how little memory and storage an Amiga needs to be useful.
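For a rough feel of the code-density argument, here is an illustrative sketch in Python. The byte counts are the standard encodings of one trivial "increment a register" instruction per architecture; real code density is measured over whole compiled programs, so treat this as a single data point rather than a benchmark, and note the choice of instructions is mine:

```python
# Encoded size, in bytes, of a minimal "add 1 to a register" instruction
# under each architecture's standard encoding. Illustrative only: whole-
# program density depends on the full instruction mix, not one opcode.
incr_sizes = {
    "x86 (inc eax)":        1,  # opcode 0x40 in 32-bit mode
    "68k (addq.l #1,d0)":   2,  # opcode 0x5280; 68k opcodes come in 16-bit units
    "Thumb (adds r0, #1)":  2,  # 16-bit encoding 0x3001
    "ARM (add r0, r0, #1)": 4,  # fixed 32-bit classic ARM encoding 0xE2800001
}

for name, size in sorted(incr_sizes.items(), key=lambda kv: kv[1]):
    print(f"{name:22} {size} byte(s)")
```

So a single x86 instruction can indeed be shorter than anything the 68k can emit, while the 68k never goes below two bytes; which ISA wins overall still depends on the whole program, as the post says.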

Quote from: Mrs Beanbag;678275

x86 will still be around for a few more years at the top end but it won't hold on forever.  Heat dissipation is already becoming an unmanageable problem in high performance systems.  Whereas ARM is already being investigated for servers, and Nvidia are going to be pushing it for mainstream desktop/laptop use.  AMD and Intel don't just supply top-end CPUs, once their mid-range and server markets fall away they're going to find their premium products much more difficult to keep competitive.


Servers generally need to access lots of memory, and 64-bit x86 makes sense there. Yea, it generates a little more heat, but crank ARM up to that processing power with 64 bits and I wouldn't expect a huge difference. PowerPC was supposed to be able to dethrone x86 due to its more efficient and superior design, but it didn't happen. IBM has taken the PowerPC to the max, but its advantages don't seem to be enough to pay for the cost differential in most cases.

Quote from: Mrs Beanbag;678275

But let's not forget what else we already have in the community: AROS's new Kickstart and 68k JIT for ARM CPUs could come in very handy for the scheme I have in mind.


The FPGA Arcade chose FPGA CPU emulation over ARM CPU emulation. It would be interesting to see how a faster ARM could emulate the 68k.

Quote from: trydowave;678293
... For a new golden age the Amiga would have to do the same thing that the 1000 did: totally revolutionise the gaming/computing world like it did back in '85.


It's very difficult and expensive to revolutionize the gaming/computing world anymore. Another golden age for me would be a very affordable, for-everyone Amiga on one chip (68k CPU, custom chips, 3D) with backward compatibility. Think of the Natami produced in enough quantity to approach the Raspberry Pi price: say $100 US, with the expansion the Natami has. It is possible with enough quantity.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 31, 2012, 04:59:58 PM
Quote from: matthey;678361
We know how little memory and storage an Amiga needs to be useful.
Hell yeah but that's not just the CPU...

Quote
Servers generally need to access lots of memory, and 64-bit x86 makes sense there. Yea, it generates a little more heat, but crank ARM up to that processing power with 64 bits and I wouldn't expect a huge difference.

tell it to HP

Quote
PowerPC was supposed to be able to dethrone x86 due to its more efficient and superior design, but it didn't happen. IBM has taken the PowerPC to the max, but its advantages don't seem to be enough to pay for the cost differential in most cases.

All market forces, I'm afraid.  It takes a lot to dethrone x86 because of its position, not because of any technical advantages intrinsic to the design.

Quote
It's very difficult and expensive to revolutionize the gaming/computing world anymore. Another golden age for me would be a very affordable for everyone Amiga in 1 chip (68k CPU, custom chips, 3D) with backward compatibility.

Indeed, and I don't know what that revolution would be.  The Amiga revolutionised the home computer world with its high-quality graphics and sound while PCs were still green-screen, beeping number crunchers.  These days PC and console graphics and sound are so good I don't even see the point of them getting any better.  We need a radically new concept of a computer...

Quote
Think of the Natami produced in enough quantity to approach the Raspberry Pi price: say $100 US, with the expansion the Natami has. It is possible with enough quantity.

Yeah, but we'll never get that quantity if it is only of interest to die-hard fans.  But there is a demand in the world at large for a cheap computer that's competent at games like a console and works out of the box without mucking about with drivers, but also with an operating system that you can use to browse the web, write docs, draw graphics, compose music, etc.  Just on the software side of things, I am sick that these days I have to pay £400 for Cubase or Photoshop!  (Well, I don't, because I use GIMP on Linux, but you know...)

The Xbox 360 runs on a PowerPC chip, by the way.
Title: Re: a golden age of Amiga
Post by: Ral-Clan on January 31, 2012, 06:47:31 PM
Quote from: rednova;677774
Hi:

@tribz
I do have a plan !!!

My plan is to create nice animated cartoons, like Eric Schwartz does, using my
amazing Amiga computer. And not only to enrich the Amiga scene, but also
to show what Amigas can do (great Amiga cartoons can be created using only
the Amiga and the MovieSetter animation software).
I am also planning to make new Amiga games using AMOS Professional,
but including beautiful graphics made in LightWave 3D.
That's my personal plan :)
But you can help me immensely if you also dedicate yourself to creating
new Amiga animations, games, music, etc., whatever your skill.
Cheers !!!

Rednova

Hi Rednova,

Good luck! You've been talking about it for a while now, so I look forward to seeing what you can produce.
Title: Re: a golden age of Amiga
Post by: Bundi on January 31, 2012, 06:50:00 PM
I think the strength of the Amiga was that a unified platform (ECS through AGA didn't vary that much, nor 68000 to 68060) was very affordable, and "open" inasmuch as the RKMs were publicly available for the same price as any other textbook. No NDA, no professional-only per-annum developer subscription; and the platform as a whole, audio+graphics+CPU+operating system, simply couldn't be beaten on price (I'm counting the Amigas-in-keyboards) and was hard to beat even without considering price, from its first appearance to Commodore's eventual demise.

To compete on the same front now is impossible, considering the budgets of Nvidia, ATI, SoundBlaster etc. and the markup any firm has to pay when buying in small volume. The efficiency of AmigaOS is irrelevant on the desktop now that you can buy a 2.8GHz Sempron with 1MB cache new for £28.38 (the first place I looked). Better to have memory protection, 64-bit, etc.

To compete on the open, developer-friendly platform front is to compete with Linux, which is a losing battle. To compete in making a nice friendly commercial Linux/BSD available on an artificially constrained platform is to compete with Apple and Google, which is plainly ridiculous. (Hello, Commodore-USA.)

About the only way I can see for "Amiga"/AmigaOS to find a niche again, outside of those of us who continue to use it simply because we love it, like our first car that does about 20 miles a year but is still sitting in a lock-up somewhere, would be to provide a best-in-class developer environment for some new up-and-coming technology, available for a month's to six weeks' disposable income, and then pray that you can license bits of it out and enable those who buy into your environment to do the same.

About the only prospect I can see of an opportunity to do that is with FPGA development. If I understand the following paper correctly, technology is emerging that allows software to define the silicon on which it runs, dynamically(ish), while running. (I can't claim to truly understand the paper.)

How much more amazing would the demoscene be if the coders could redefine the silicon, within the limits of the FPGA, dynamically to benefit whatever effect they felt like showing off?

http://www.doc.ic.ac.uk/~tbecker/papers/iee06.pdf

And that ties in more with Natami / AROS than it does with anyone who holds any old Commodore or Amiga IP. Even then I assume there would be a £BIG expenditure necessary to provide a modern development environment and quality documentation and I think we can safely assume that there are commercial entities positioning themselves to fill that space already.

Maybe I'm just dreaming of seeing a demoscene resurgence. For the purposes of full disclosure I should say that I've never coded a line of asm in my life, except maybe at school, so there is a good chance that I'm havering.

Now that I've reread my post I realise I may have drunk too much coffee today. :furious:

I apologize to anyone who feels that I have wasted their time with this unusually lengthy post.

B.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 31, 2012, 08:44:14 PM
Yes, a computer with some FPGAs on board that could be reconfigured on the fly through the OS could lead to some very interesting projects...

Tonight I've been looking into hardware/realtime raytracing.  Of course we all remember The Juggler.  Raytracing was what made the Amiga's name!

So I'm now wondering just how many ARM coprocessors we could pack on one board... forget GPUs, maybe a stack of CPU/FPGA pairs could do all sorts of crazy things.
Title: Re: a golden age of Amiga
Post by: Karlos on January 31, 2012, 09:06:24 PM
Quote from: Mrs Beanbag;678166
ARM is the future of computing though, I'm sure of that.  x86 has to end sooner or later, it's too stupid to continue indefinitely.  Surprised it's lasted this long to be honest.


ARM is very much the present of computing, let alone the future. I wouldn't write off the x86 though. The current generation of these processors is a far cry from the clunky old components. The modern 64-bit implementations are actually quite nice and extremely high performance.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on January 31, 2012, 10:23:56 PM
Quote from: Karlos;678482
ARM is very much the present of computing, let alone the future. I wouldn't write off the x86 though. The current generation of these processors is a far cry from the clunky old components. The modern 64-bit implementations are actually quite nice and extremely high performance.

Oh yeah there's a really neat RISC core hiding behind all that microcode gubbins, shame we can't get at it directly and turn off all those redundant transistors...

Still.  Check this out: http://www.youtube.com/watch?v=oLte5f34ya8

This is the sort of trick a new Amiga ought to aim for.  Forget GPUs; massive parallelism is the way to go.  Maybe a single standard supervisor CPU with a whole load of barrel co-processors, similar to the UltraSPARC T1.  The throughput of those things is incredible, given the right workloads.  They threw away such complexities as out-of-order execution in exchange for simultaneous multithreading, thus all but eliminating cache latencies.  This strategy would be perfect for highly parallelisable workloads such as ray-tracing.

We could call them Juggler Chips!
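The appeal of many simple hardware threads for ray tracing is that every pixel is an independent task. Here is a minimal sketch of that shape of parallelism (Python, with a ThreadPoolExecutor standing in for the hypothetical barrel co-processors and a trivial disc-shading function standing in for a real ray tracer; all the names are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 16, 16

def trace_pixel(x, y):
    # Stand-in for a per-ray computation: shade a filled disc.
    dx, dy = x - WIDTH / 2 + 0.5, y - HEIGHT / 2 + 0.5
    return "#" if dx * dx + dy * dy < (WIDTH / 3) ** 2 else "."

def trace_row(y):
    # One row = one independent work unit; no row depends on any other.
    return "".join(trace_pixel(x, y) for x in range(WIDTH))

# Farm the rows out to a pool of workers; the worker count would be the
# number of hardware threads on the imagined co-processor board.
with ThreadPoolExecutor(max_workers=4) as pool:
    image = list(pool.map(trace_row, range(HEIGHT)))

print("\n".join(image))
```

Because no work unit shares state with any other, the same code scales from four Python threads to however many hardware threads a T1-style part could offer.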
Title: Re: a golden age of Amiga
Post by: orange on January 31, 2012, 10:57:38 PM
Quote from: Mrs Beanbag;678487
Oh yeah there's a really neat RISC core hiding behind all that microcode gubbins, shame we can't get at it directly and turn off all those redundant transistors...

Still.  Check this out: http://www.youtube.com/watch?v=oLte5f34ya8

This is the sort of trick a new Amiga ought to aim for.  Forget GPUs; massive parallelism is the way to go.  Maybe a single standard supervisor CPU with a whole load of barrel co-processors, similar to the UltraSPARC T1.  The throughput of those things is incredible, given the right workloads.  They threw away such complexities as out-of-order execution in exchange for simultaneous multithreading, thus all but eliminating cache latencies.  This strategy would be perfect for highly parallelisable workloads such as ray-tracing.

We could call them Juggler Chips!


Um, are you sure 'Mr. Beanbag' needs ray-tracing? :)
Title: Re: a golden age of Amiga
Post by: Thorham on January 31, 2012, 11:15:55 PM
While everyone is talking about 'new' hardware, I have to ask: What about the software?
Title: Re: a golden age of Amiga
Post by: Karlos on January 31, 2012, 11:16:30 PM
Quote from: Mrs Beanbag;678487
Oh yeah there's a really neat RISC core hiding behind all that microcode gubbins, shame we can't get at it directly and turn off all those redundant transistors...

Still.  Check this out: http://www.youtube.com/watch?v=oLte5f34ya8

this is the sort of trick a new Amiga ought to aim for.  Forget GPUs, massive parallelism is the way to go.  Maybe a single standard supervisor CPU with a whole load of barrel co-processors similar to the UltraSPARC T1.  The throughput of those things is incredible, given the right workloads.  They threw away such complexities as out-of-order execution in exchange for simultaneous multithreading, thus all but eliminating cache latencies.  This strategy would be perfect for highly parallelisable workloads such as ray-tracing.

We could call them Juggler Chips!


On the contrary, I'd say forget CPUs and focus on GPU if you like massive parallelism. My (now old news) quad core can run four threads concurrently. My (equally old) GTX275 can run 30720 of them at full pelt. Thread switching to hide latencies caused by memory access and the like is completely built into the hardware.

Full ray tracing is a tough one due to the tendency of threads to become divergent in their flow of execution but far from impossible with modest GPUs today. Then there is ray marching, which is the poor man's next best thing. And they can do that entirely realtime. In your browser, even, if you happen to have a WebGL capable one and supported hardware.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 11:56:31 AM
Quote from: Karlos;678500
Full ray tracing is a tough one due to the tendency of threads to become divergent in their flow of execution but far from impossible with modest GPUs today. Then there is ray marching, which is the poor man's next best thing. And they can do that entirely realtime. In your browser, even, if you happen to have a WebGL capable one and supported hardware.

On ray marching, or "volume ray casting" as they call it, Wikipedia states

"However, adaptive ray-casting upon the projection plane and adaptive sampling along each individual ray do not map well to the SIMD architecture of modern GPU; therefore, it is a common perception that this technique is very slow and not suitable for interactive rendering. Multi-core CPUs, however, are a perfect fit for this technique and may benefit marvelously from an adaptive ray-casting strategy, making it suitable for interactive ultra-high quality volumetric rendering."

http://en.wikipedia.org/wiki/Volume_ray_casting

Here Intel are doing real-time ray tracing to show off their Nehalem cores:
http://www.youtube.com/watch?v=ianMNs12ITc

Obviously that is an expensive top-of-the-range CPU there (or rather, four of them).  It makes me wonder what could be done with a big bunch of ARM chips.  GPUs can be made to do this but you'd not be using them optimally.  Likewise even a general purpose chip like the Nehalem is a lot more complex than necessary.

I think to sum it up, GPUs are designed for a task too specific, while mainstream CPUs are designed for tasks too general.  I wonder if this goes some way to explain AMD's strategy with their Bulldozer chips, which seems to have confused a lot of people.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 01, 2012, 01:34:33 PM
@Mrs Beanbag
Quote from: Mrs Beanbag;678487
Forget GPUs, massive parallelism is the way to go.


I do agree that FPGAs represent a big opportunity to change how flexible computing architecture can be, but the line I quoted above doesn't make sense. The very reason GPGPU is a growing field is due to the massively parallel nature of modern GPUs. GPU computing and FPGA computing are not identical, but they are clearly related.

Quote from: Mrs Beanbag;678487

We could call them Juggler Chips!


I've got good news for you, your Juggler chips already exist:
http://www.eetimes.com/electronics-products/processors/4115523/Xilinx-puts-ARM-core-into-its-FPGAs
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 01:48:31 PM
Quote from: HenryCase;678577
@Mrs Beanbag

I do agree that FPGAs represent a big opportunity to change how flexible computing architecture can be, but the line I quoted above doesn't make sense. The very reason GPGPU is a growing field is due to the massively parallel nature of modern GPUs. GPU computing and FPGA computing are not identical, but they are clearly related.

Why are you talking about FPGAs?  I never mentioned FPGAs in that post.  I'm talking about generic CPUs used in massive parallelism.  GPUs are SIMD.  Well, some degree of SIMD is still useful for ray-tracing, because it involves a lot of basic vector arithmetic, but not with the same amount of repetition as a GPU is designed for.

GPUs are of course massively parallel, but they are optimised for a specific sort of workload, although they are becoming more general purpose lately.

Quote
I've got good news for you, your Juggler chips already exist:
http://www.eetimes.com/electronics-products/processors/4115523/Xilinx-puts-ARM-core-into-its-FPGAs

These are not barrel processors.  They are FPGAs with an ARM core attached.  Which is also cool and useful, but not the "Juggler chip" as described above.  The "Juggler chip" is a similar design strategy to the UltraSPARC T1, but with the ARM instruction set.
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 03:18:19 PM
Quote from: Mrs Beanbag;678562
On ray marching, or "volume ray casting" as they call it, Wikipedia states

"However, adaptive ray-casting upon the projection plane and adaptive sampling along each individual ray do not map well to the SIMD architecture of modern GPU; therefore, it is a common perception that this technique is very slow and not suitable for interactive rendering. Multi-core CPUs, however, are a perfect fit for this technique and may benefit marvelously from an adaptive ray-casting strategy, making it suitable for interactive ultra-high quality volumetric rendering."

http://en.wikipedia.org/wiki/Volume_ray_casting

Here Intel are doing real-time ray tracing to show off their Nehalem cores:
http://www.youtube.com/watch?v=ianMNs12ITc

Obviously that is an expensive top-of-the-range CPU there (or rather, four of them).  It makes me wonder what could be done with a big bunch of ARM chips.  GPUs can be made to do this but you'd not be using them optimally.  Likewise even a general purpose chip like the Nehalem is a lot more complex than necessary.

I think to sum it up, GPUs are designed for a task too specific, while mainstream CPUs are designed for tasks too general.  I wonder if this goes some way to explain AMD's strategy with their Bulldozer chips, which seems to have confused a lot of people.


Wikipedia must be out of date there. I can assure you raymarching works fine on my GTX 275, and better still on Fermi-based GPUs, which have superior divergent conditional branch handling and cache. There are several realtime examples written entirely in GLSL for Mr doob's web GLSL playground which run at full speed on my kit. I've tested even better CUDA-specific examples. Lastly, even SIMD does not accurately describe the operation of these GPUs. SIMD better describes SSE or AltiVec. It's a poor description for modern stream processors.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 04:15:30 PM
Quote from: Karlos;678592
Wikipedia must be out of date there. I can assure you raymarching works fine on my GTX 275, and better still on Fermi-based GPUs, which have superior divergent conditional branch handling and cache. There are several realtime examples written entirely in GLSL for Mr doob's web GLSL playground which run at full speed on my kit. I've tested even better CUDA-specific examples. Lastly, even SIMD does not accurately describe the operation of these GPUs. SIMD better describes SSE or AltiVec. It's a poor description for modern stream processors.

Shader engines aren't really ray tracing, impressive though they may be.  CUDA can get closer to what I'm talking about, but "General Purpose GPU" is a self-contradictory phrase!  Either these chips are general purpose or they are special purpose.  Maybe we only call them GPU because they happen to be used for graphics.

I'm not saying it can't be done, or even done well, I'm only saying it's not optimal, because the chips are designed for something else and to make them do it you have to work around their limitations.  In other words, if they are so good at doing ray tracing already, imagine if they were actually designed for ray tracing instead of rasterisation... it seems to me that the complexity of graphics is these days getting to the point where ray tracing could actually be faster!  But a mainstream CPU is also far more complex than it needs to be, having been optimised for single-threaded performance, which is the opposite of what we want.

I mean look at this:
http://www.youtube.com/watch?v=x5aXxJGefxU

100% CPU work, and "Running in an E2140 1.6GHZ"; that's not a lot of CPU, it doesn't even have hyperthreading.  Now if you had 16 such cores instead of only two, each with 8-way hyperthreading instead of superscalar... this is where CPU and GPU would meet in the middle.  The compromises made for streaming processors no longer seem appropriate.
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 05:17:40 PM
Shader engine is an obsolete term. Modern GPUs are massively parallel stream processors that are Turing complete. You can use them to perform any inherently parallel task you like, provided you know how to code it. If you program them to ray trace, that is exactly what they do. Or you could program them to perform all-pairs n-body particle interaction, or brute-force MD5 sums. They are nothing whatsoever like the fixed-function, discrete shader unit graphics chips of a few years ago, any more than a modern multicore x64 is like a 286.

Their main application is graphics processing because that is the sort of inherently parallel task they excel at, whether it is simple rasterization or complex per-pixel shading. However, you need to look at this in the abstract. It can be any algorithm operating on a set of data using thread-per-unit-data parallelism. There is no shader; the shader is merely a software construct running on a truly general-purpose (algorithmically speaking) stream processor. And it crushes CPUs for this.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 01, 2012, 07:00:42 PM
Quote from: Mrs Beanbag;678578
These are not barrel processors.  They are FPGAs with an ARM core attached.  Which is also cool and useful, but not the "Juggler chip" as described above.  The "Juggler chip" is a similar design strategy to the UltraSPARC T1, but with the ARM instruction set.

I guess I misread what you meant. Perhaps it would be best to outline in more detail what design you had in mind for the 'juggler chip', I'm interested to hear your thoughts.

In the meantime, here's another couple of links about massively parallel chips that you may be interested in following up on:
http://www.greenarraychips.com/
http://www.tilera.com/
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 07:02:58 PM
Well if that is the case then a modern GPU *is* a CPU, the only difference being the way it is connected to the memory.  But I still don't think that is quite the case.  How I understand it, a GPU is given a "kernel", which is a small program that is run for every piece of data that comes in on the stream.  They don't run a "full program" like a CPU does, but continually apply the same function over and over on the incoming data.  Which is very useful.  But its "Turing completeness" is limited to the bounds of the kernel; that is, you can branch and loop as much as you like within a kernel, but you can't arbitrarily call one kernel from another.

Also the data goes in one end and out the other, which is very useful if you can split your dataset up into loads of small independent chunks.  If you're doing rasterisation this is very easy because every triangle can be done independently.  Maybe there's some cunning trick to it, but I don't know how ray tracing would work in that scheme, because you want to do blocks of pixels in parallel rather than triangles or objects, so every pipeline needs access to the complete scene structure.
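How that kernel model behaves can be caricatured in ordinary code. The sketch below is a toy illustration in plain Python, not any real GPU API (`launch_kernel` and `brighten` are names invented for this example): the host hands one small function to a pool of workers, which apply it independently to every element of the input stream; no element sees its neighbours, and one kernel cannot call another.

```python
from multiprocessing.dummy import Pool  # thread pool standing in for the stream units

def launch_kernel(kernel, stream, workers=4):
    """Apply one kernel to every element of the input stream.

    No element sees its neighbours, and the kernel cannot launch
    another kernel: data goes in one end and out the other.
    """
    with Pool(workers) as pool:
        return pool.map(kernel, stream)

# A per-element "shader": same function, different data.
def brighten(pixel):
    r, g, b = pixel
    return (min(r + 40, 255), min(g + 40, 255), min(b + 40, 255))

print(launch_kernel(brighten, [(0, 0, 0), (100, 200, 250)]))
```

The point of the caricature is the shape, not the parallel speed: the only contract between host and device is "one function, many independent data items".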

But theory aside, I've been putting "real time ray tracing" into Youtube and I get a lot of stuff on CPUs and GPUs, and a lot of it is very impressive, but I don't see that GPUs actually have any obvious advantage over CPUs so far.
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 07:29:33 PM
The modern GPU is basically a very large collection of arithmetic/logic units. Think of these as very simple CPU cores where stuff like conditional branching is expensive but data processing is not. Then imagine them in clusters, each cluster running the same code but on different data. Not like a SIMD unit, but as an array of cores, able to branch independently but optimal when in step. Now imagine a set of work supervisors that oversee them, detecting when clusters are waiting for IO and able to switch the thread group they are executing for one that is ready to go. Finally, imagine these being served by multiple memory controllers on demand. That's your basic GPU today. Current GPUs can even execute multiple kernels concurrently, so if one cannot occupy all the stream units, you can run more.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 07:36:33 PM
Quote from: HenryCase;678626
I guess I misread what you meant. Perhaps it would be best to outline in more detail what design you had in mind for the 'juggler chip', I'm interested to hear your thoughts.

Ok just look up UltraSPARC T1 to get what I mean.  I'll summarise.  Traditionally CPUs have been designed for single-threaded performance, by inventing such things as instruction-level parallelism (you can do several consecutive instructions at once if they don't clash), branch prediction, speculative execution, out-of-order execution etc.  All of these things require extra circuitry of course, but it's worth it for the performance boost.  Problem is you don't get 2x performance for 2x transistors, so then multiple cores came into play.  But lots of programs don't run in multiple threads, so they still try to maximise the performance of single cores.  And they are still held back when one instruction has a dependency on a previous one that hasn't finished yet, or it's waiting for memory reads etc.

UltraSPARC T1 took a more holistic approach.  Knowing servers always run umpteen threads at once, there's really no point in all that extra complexity to get the most single threaded performance.  So they ditched it all and instead made a CPU core that could switch threads on every cycle.  They only have to have a register file for each thread and rotate them round (hence the term "barrel processor"), and you can get rid of a whole load of complexity and go back to a very simple core that only does one instruction at once, which gives you room for loads more cores on a die, and cache misses can be made to vanish into the background.  Single-thread performance is terrible, but if you can throw enough threads at it it can keep up with CPUs that run at far faster clock speeds.  The T1 typically ran at 1.2GHz and, given the right sort of workloads, could keep pace with 3GHz Xeons.
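The round-robin register-file rotation described above is simple enough to simulate. Below is a toy sketch in plain Python, not modelled on any real ISA (`barrel_run` and its data layout are invented for illustration): one instruction issues per cycle from the next ready thread, so a thread parked on a slow memory access just sits out its turns while the others keep the pipeline fed.

```python
from collections import deque

def barrel_run(threads, cycles):
    """Toy barrel processor: issue one instruction per cycle,
    rotating round-robin over threads; stalled threads are skipped.

    Each thread is a dict with a queue of instructions, where an
    instruction is just its latency in cycles (1 = ALU op, bigger = memory).
    Returns how many instructions were issued in total.
    """
    ring = deque(threads)
    issued = 0
    for cycle in range(cycles):
        for _ in range(len(ring)):
            t = ring[0]
            ring.rotate(-1)          # the next thread gets first look next cycle
            if t["stall_until"] > cycle or not t["ops"]:
                continue             # thread is waiting on memory: skip it
            latency = t["ops"].pop(0)
            t["stall_until"] = cycle + latency  # long ops park the thread
            issued += 1
            break                    # only one issue slot per cycle
    return issued

# Two threads alternating cheap ALU ops with slow (4-cycle) loads:
threads = [{"ops": [1, 4, 1, 4, 1], "stall_until": 0} for _ in range(2)]
print(barrel_run(threads, 12))
```

Run it with one thread and the slow loads leave the issue slot idle; with two or more, the stalls of one thread are hidden behind the progress of the others, which is the whole T1 bargain.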
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 08:42:39 PM
Well that makes sense but

Quote from: Karlos;678632
Think of these as very simple CPU cores where stuff like conditional branching is expensive but data processing is not. Then imagine them in clusters, each cluster running the same code but on different data. Not like a SIMD unit, but as an array of cores, able to branch independently but optimal when in step.

see this is where I'm getting stuck, surely independent conditional branching is exactly what ray tracing needs a lot of.

Also the kernel doesn't have random access into a large area of memory (where your scene might be stored, for instance) but only to the small portion that comes in on the stream, yes?
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 09:04:21 PM
Quote from: Mrs Beanbag;678644
Well that makes sense but

see this is where I'm getting stuck, surely independent conditional branching is exactly what ray tracing needs a lot of.

Precisely, which is why I said it's not an ideal algorithm. However, the handicap for it is becoming less and less with successive iterations of GPU architecture. The Fermi, for example, introduces more CPU like features, including cache memory. Which helps the next thing you mention...

Quote
Also the kernel doesn't have random access into a large area of memory (where your scene might be stored, for instance) but only to the small portion that comes in on the stream, yes?

Yes and no. You can do random access on current cards. Like conditional branching, it's not ideal. But far from insurmountable. nVidia demonstrated realtime raytracing using CUDA a couple of years ago. Of course, GPUs open up a possibility of hybrid rendering techniques. They can perform the entire primary ray calculation as a regular rasterization pass and focus only on tracing secondary rays.

Example:
[youtube]kcP1NzB49zU[/youtube]
Title: Re: a golden age of Amiga
Post by: HenryCase on February 01, 2012, 09:12:13 PM
Quote from: Mrs Beanbag;678634
UltraSPARC T1 took a more holistic approach.  Knowing servers always run umpteen threads at once, there's really no point in all that extra complexity to get the most single threaded performance.  So they ditched it all and instead made a CPU core that could switch threads on every cycle.  They only have to have a register file for each thread and rotate them round (hence the term "barrel processor"), and you can get rid of a whole load of complexity and go back to a very simple core that only does one instruction at once, which gives you room for loads more cores on a die, and cache misses can be made to vanish into the background.  Single-thread performance is terrible, but if you can throw enough threads at it it can keep up with CPUs that run at far faster clock speeds.  The T1 typically ran at 1.2GHz and, given the right sort of workloads, could keep pace with 3GHz Xeons.

I see, thank you for the information. That design makes sense for server chips, where you have a high number of unrelated threads, but expanding this type of architecture to be more generally useful does require looking at memory management. The main issue with parallel computing is managing memory, having the processing power to deal with multiple threads is easy in comparison.

Of course, it depends on what you're looking for. If you just want a ray trace accelerator then these issues are not so pressing. If you'd like to explore the memory issues more (and solutions to them), I can highly recommend this video on concurrency in Clojure, which offers the best solution I've found to date on making programming parallel systems (comparatively) easy to manage:
http://blip.tv/clojure/clojure-concurrency-819147

Quote from: Mrs Beanbag;678644
Also the kernel doesn't have random access into a large area of memory

How much memory do you anticipate being adequate? Graphics card memory is fairly large these days, can get cards with 1GB directly on the graphics card, for example. Plus, the PCI-E bus these cards are plugged into isn't exactly sluggish.
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 09:25:51 PM
Quote from: HenryCase;678651
How much memory do you anticipate being adequate? Graphics card memory is fairly large these days, can get cards with 1GB directly on the graphics card, for example. Plus, the PCI-E bus these cards are plugged into isn't exactly sluggish.


You can pick up a single GPU GTX580 card with 3GB right now if you have a fat enough wallet.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 10:00:34 PM
Quote from: HenryCase;678651
Of course, it depends on what you're looking for. If you just want a ray trace accelerator then these issues are not so pressing.

The model I have in mind is one where your operating system will run on a fairly standard dual or quad core CPU, but bulk calculations can be done on a set of co-processors with the barrel-architecture mentioned above.

Quote from: HenryCase;678651
How much memory do you anticipate being adequate? Graphics card memory is fairly large these days, can get cards with 1GB directly on the graphics card, for example. Plus, the PCI-E bus these cards are plugged into isn't exactly sluggish.

It's not the total amount of memory that's the problem, it's the random access.  The entire point of "streaming processor" is that the data comes in sequentially (more-or-less).  For ray-tracing you need to traverse the scene as a tree structure independently for each pixel, so it's difficult to serialise into a stream, because it's recursive rather than linear.  But the tree itself is static, so it can be in a read only memory.  The complication is that this read-only memory has to be accessed by all the threads simultaneously, but because it's guaranteed not to change during rendering there aren't any cache coherency issues there.  Each thread can have its own local data area and output buffer, and there are no interdependency issues.  The outputs can be combined after all threads complete.

I'm thinking of a cyclic arrangement something like this:

[CPU] --> [input data] --> [barrel coprocessors] --> [output buffers] --> [CPU]

which is kind of like a semi-streaming set up, I guess.  The output can still be streamed even if the input cannot.

There could be some kind of "burst unit" that transfers the output back to the input for the next iteration, which would be useful for physics simulations.
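That loop shape can be sketched in a few lines. This is a toy illustration in plain Python (all names invented; the "coprocessors" are just a map over independent chunks): the CPU prepares input buffers, each chunk is transformed against a read-only scene structure, and the outputs are fed straight back in as the next iteration's inputs, in the spirit of the "burst unit" idea.

```python
def step(chunk, scene):
    """One coprocessor pass: transform a chunk against read-only 'scene'.
    Here: a toy physics tick, moving points halfway toward a fixed attractor."""
    (ax, ay) = scene["attractor"]
    return [(x + (ax - x) * 0.5, y + (ay - y) * 0.5) for (x, y) in chunk]

def run_pipeline(chunks, scene, iterations):
    """CPU -> input data -> coprocessors -> output buffers -> (burst back) ..."""
    for _ in range(iterations):
        # Each chunk is independent of the others, so this map could run
        # one chunk per coprocessor with no interdependency issues.
        chunks = [step(chunk, scene) for chunk in chunks]
    return chunks

scene = {"attractor": (0.0, 0.0)}
print(run_pipeline([[(8.0, 4.0)], [(2.0, -2.0)]], scene, 2))
```

The read-only `scene` plays the role of the shared tree: every "coprocessor" reads it freely, nothing writes to it during a pass, so there is no coherency traffic; only the output buffers cycle back.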
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 10:04:05 PM
Quote from: Karlos;678648
Example: (youtube)kcP1NzB49zU

I did see that, it's neat, but it seems to be doing some crude form of render while you're moving it about and only does real ray tracing when you leave it still, and takes a short while to do it.  The Intel Xeon demonstrations were full ray tracing in real time.  Granted a Xeon setup like that would set you back a few grand...
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 10:16:15 PM
Quote from: Mrs Beanbag;678662
I did see that, it's neat, but it seems to be doing some crude form of render while you're moving it about and only does real ray tracing when you leave it still, and takes a short while to do it.  The Intel Xeon demonstrations were full ray tracing in real time.  Granted a Xeon setup like that would set you back a few grand...

Even the crude render is ray traced, it's just not iterated as far. I'm a GPU fan for a number of reasons. They are massively powerful, cheap (comparatively) and, to me, represent the logical evolution of what the Amiga's custom chips could have been. You can do anything from knocking up old Amiga style raster bars to realtime physics simulation, all running entirely on the GPU.

Incidentally, if you want to see what you can do on a slightly more serious (remember, the garage demo was running on a gaming card) GPU in realtime, check what the quadro 6000 can do:
[youtube]QaKwLp77kjQ[/youtube]
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 10:33:54 PM
Quote from: Karlos;678666
Even the crude render is ray traced, it's just not iterated as far. I'm a GPU fan for a number of reasons. They are massively powerful, cheap (comparatively) and, to me, represent the logical evolution of what the Amiga's custom chips could have been.
No argument, I've always been a fan of offloading things to separate units so the CPU could twiddle its thumbs, just not convinced that streaming processors are the ultimate solution.  Maybe there is some way to efficiently "stream" recursively...

You know, the Amiga's old blitter was this >< close to being able to do texture mapping.  If only you could use the B source DMA in line drawing mode...

Quote
Incidentally, if you want to see what you can do on a slightly more serious (remember, the garage demo was running on a gaming card) GPU in realtime, check what the quadro 6000 can do:
(youtube)QaKwLp77kjQ
Nice, anyone got £4k to spare?
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 10:39:28 PM
Quote from: Mrs Beanbag;678673
No argument, I've always been a fan of offloading things to separate units so the CPU could twiddle its thumbs, just not convinced that streaming processors are the ultimate solution.  Maybe there is some way to efficiently "stream" recursively...

The demands are getting greater all the time and they're achieving it. nVidia have a stake in HPC. Half of the features added in the GF100/110 chips were more to do with general computing than traditional rasterized 3D graphics. The latter generally doesn't need full IEEE754-2008 conformance, ECC, caching etc.

Quote
You know, the Amiga's old blitter was this >< close to being able to do texture mapping.  If only you could use the B source DMA in line drawing mode...


Did you see the thread recently where someone (sorry, I forgot the username) got sub-pixel correct line drawing and (slightly buggy) sub-pixel correct polygon rendering out of ECS? Damned impressive stuff.

Quote
Nice, anyone got £4k to spare?


Still cheaper than a bucketful of high end Xeons ;)
Title: Re: a golden age of Amiga
Post by: actung_bab on February 01, 2012, 10:47:00 PM
Quote from: rednova;677756
Dear Friends:

I believe if we all work hard dedicated, we can bring about a new
golden age of amiga.
Let's do it !!!

Rednova

Well, it almost seems that way, doesn't it? I have to agree there's a lot to be thankful for. I think when you got new hardware you got the second golden age, maybe when the PPC cards came, and then the AmigaOne hardware, so now we've got a third golden age. Depends how you look at things, I guess, but for me it's great even if I haven't got the hardware in front of me; I've got the opportunity to buy it if I wish to make it my first priority, my first hobby, which it isn't. For the last 6 years that's been hi-fi gear, DVDs and now motorcycles, a 1992 Honda VFR 750 (RC36).

For me, I run the BBS, although it's getting no callers at present, hehe. But that makes the old 1200 enjoyable to turn on, and I love fiddling with it, given the time I get with family commitments.

But we've got to remember that a lot of the people who worked on Amiga software were gifted people who did it for the love of it, not for getting paid a fortune (shareware). These days it seems it's mostly about making money, ie. iPhone apps etc.

I think it's a catch-22 thing, and not the developers' fault or anyone's. We've got the hardware, but we need to have people buying more software to encourage growth in the market. It's never going to be like in the 80s, though. The world moved on, and it's not sad, it's just life. In those days everything was new and exciting; today little is brand new, it's mostly reinventing the wheel. So no, not in the way you hoped for, but I'm not going to rain on your parade.

Dude, if you wish for that and it's in your heart, go for it, buddy. Keep posting, no harm in talking about ideas; they can catch on, and that's what made Steve Jobs the person he was, even if he was a bit of a hard man to his people.

PS: when the laptop gets on the market I will be buying that, as it will be affordable for me, even with the Amiga at number 3 on the priority list.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 10:47:56 PM
Quote from: Karlos;678674
Did you see the thread recently where someone (sorry, I forgot the username) got sub-pixel correct line drawing and (slightly buggy) sub-pixel correct polygon rendering out of ECS? Damned impressive stuff.
I did not...

Quote
Still cheaper than a bucketful of high end Xeons ;)
Heh maybe, but Xeons are over-spec anyway, as I've been saying, we don't need all that superscalar jazz if we can throw enough threads at the problem.

But this is even better...
grmanet.sogang.ac.kr/seminar/RPU.pdf (http://grmanet.sogang.ac.kr/seminar/RPU.pdf)
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 10:53:00 PM
Quote from: Mrs Beanbag;678677
I did not...

Found it:

http://www.amiga.org/forums/showthread.php?t=60315

-edit-

Quote
Heh maybe, but Xeons are over-spec anyway, as I've been saying, we don't need all that superscalar jazz if we can throw enough threads at the problem.

Actually, you've just brought the conversation full circle. The reason I brought up GPU in the first place was this.

For multiple-CPU approaches to massive threading, though, there are other complications. Amdahl's law, for one :-/
Title: Re: a golden age of Amiga
Post by: HenryCase on February 01, 2012, 11:02:18 PM
Quote from: Karlos;678674
Did you see the thread recently where someone (sorry, I forgot the username) got sub-pixel correct line drawing and (slightly buggy) sub-pixel correct polygon rendering out of ECS? Damned impressive stuff.


Karlos, is this thread referring to the same demonstration?
http://www.natami.net/knowledge.php?b=6&note=43776

EDIT: Ah, I see you found what you were looking for.
Title: Re: a golden age of Amiga
Post by: Karlos on February 01, 2012, 11:05:14 PM
Quote from: HenryCase;678681
Karlos, is this thread referring to the same demonstration?
http://www.natami.net/knowledge.php?b=6&note=43776

EDIT: Ah, I see you found what you were looking for.


Yep, that's the one. I read his entire set of blog articles on the subject in the end. It was quite informative. So much cool stuff was locked away in some of that old hardware, never to be really exploited by anybody. More's the pity.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 11:24:35 PM
Quote from: Karlos;678679
Actually, you've just brought the conversation full circle. The reason I brought up GPU in the first place was this.
In that case, it was full circle as soon as I started!  This is exactly why I brought up UltraSPARC T1, a CPU designed specially for large numbers of threads with no regard for single-thread performance.

But, I hasten to point out, I'm considering this for a co-processor, not the main processor.  I'm still thinking about how the memory system would work.  I reckon there must be some way for a streaming unit to send the root of a tree to each core, and then the core can choose its path down the tree, so you can get the best of both worlds.  Considering the FPGA article above, efficient real-time ray tracing is still a long way from current GPUs and CPUs.  Both can do it, but both have to sweat.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 01, 2012, 11:27:12 PM
Quote from: Karlos;678682
Yep, that's the one. I read his entire set of blog articles on the subject in the end. It was quite informative. So much cool stuff was locked away in some of that old hardware, never to be really exploited by anybody. More's the pity.

Indeed.  One trick I've still yet to try, but I know must be possible, is to use HAM as a zero-cost polygon filler!
Title: Re: a golden age of Amiga
Post by: HenryCase on February 01, 2012, 11:50:30 PM
Quote from: Karlos;678679

For multiple CPU approaches to massive threading, though there are other complications. Amdahl's law, for one :-/


There are ways around Amdahl's Law. The main cause of it, AFAIR, is the memory architecture of traditional computer systems (the bottleneck is the memory bus of designs like the Von Neumann architecture).

However, this isn't necessarily a property of the CPUs themselves. In systems that have moved away from traditional memory routing setups, Amdahl's Law can be defeated. One of the reasons I looked into this was after reading an article (or maybe watching a video) about the XMOS chip, which apparently scales really well whilst adding new cores without having the same memory bottleneck. I can't find the article where this was mentioned, but if I do I'll link to it here.

In my opinion, there is a need to make managing memory as parallel an operation as can be achieved. Lockless concurrency is definitely possible at the software level (as shown in the Clojure video I linked to earlier), and it'll be interesting to see how memory management evolves at the hardware level.

To those reading that don't know of Amdahl's Law, it basically says there are limits to the efficiency of multi-core systems, and past a certain point you can damage the efficiency of your system by adding more cores. This tipping point varies depending on the architecture, for example for modern x86 CPUs it is said that there is not much benefit beyond 4 cores (in this case it is CPU dependent as the Northbridge has now moved onto the CPU die).

Quote from: Mrs Beanbag;678688
Indeed.  One trick I've still yet to try, but I know must be possible, is to use HAM as a zero-cost polygon filler!


Sounds like a good idea!
Title: Re: a golden age of Amiga
Post by: Thorham on February 02, 2012, 02:36:30 AM
The golden age of the Amiga has long since passed, and it's not going to come back using hardware that hasn't got anything to do with Amigas (yes, here we go again). There's no chance, especially not when people keep sticking to operating systems that are so far behind the current standard (which is arguably a questionable standard).
Title: Re: a golden age of Amiga
Post by: yssing on February 02, 2012, 08:14:42 AM
The Official Amiga moved to PPC a long time ago, and yes AOS4.x is the official amiga platform.

Regarding browsers, there are plenty of open-source browsers, so it's a matter of "just" porting those. Alas, I don't know how to do it myself.

What I would really like is a java runtime environment and maybe some sort of flash implementation.

An office suite, but I guess we can use Google Docs already??
Title: Re: a golden age of Amiga
Post by: Thorham on February 02, 2012, 11:24:40 AM
Quote from: yssing;678731
The Official Amiga moved to PPC a long time ago, and yes AOS4.x is the official amiga platform.
:laughing:
Title: Re: a golden age of Amiga
Post by: gertsy on February 02, 2012, 12:45:21 PM
The golden age of Amiga.... I've already been through it once.  But by all means:
"Make it so !"
I'll be waiting in my Ready Room.
Title: Re: a golden age of Amiga
Post by: Richard42 on February 02, 2012, 03:54:19 PM
Quote from: HenryCase;678692
There are ways around Amdahl's Law. The main cause of it, AFAIR, is the memory architecture of traditional computer systems (the bottleneck is the memory bus of designs like the Von Neumann architecture).

Amdahl's law doesn't have anything to do with memory bandwidth.  It's very simple, and trust me, there is no way around it.  Some algorithms, or parts of algorithms, cannot be parallelized; they are inherently serial.  CABAC in H.264 video compression is a good example.  Your overall software performance will be bounded by the execution time of the most complex piece of the algorithm which cannot be parallelized.  Once that piece of your algorithm is taking up 100% of an execution core, your software cannot go any faster, regardless of how many more CPUs you throw at it.

I read a paper a few days ago claiming that it's better to have a few beefy cores than a bunch of wimpy cores.  This is exactly what I've also seen in my experience with high-performance computing, and the reason why serious people run HPC compute loads on badass x86 chips and not ARM or Atom.
Title: Re: a golden age of Amiga
Post by: TheBilgeRat on February 02, 2012, 04:27:43 PM
Quote from: HenryCase;678692
To those reading that don't know of Amdahl's Law, it basically says there are limits to the efficiency of multi-core systems, and past a certain point you can damage the efficiency of your system by adding more cores.


Like the other gentleman stated concerning the pieces of code that cannot be made parallel, there is a simple equation that allows you to determine the speedup:

Tp = f * Ts + (1 - f) * Ts/p

where Ts is the single-core run time, p is the number of processors, and f is the fraction of the work that must run serially. Since the f * Ts term never shrinks, the speedup can never exceed 1/f no matter how large p gets.
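To make that concrete, here's a quick sketch (plain Python, nothing Amiga-specific; I'm using f for the serial fraction):

```python
def parallel_time(ts, f, p):
    """Amdahl's law: runtime on p cores, given single-core time ts
    and serial (non-parallelizable) fraction f."""
    return f * ts + (1 - f) * ts / p

def speedup(f, p):
    """Speedup over a single core; bounded above by 1/f."""
    return 1 / (f + (1 - f) / p)

# With just 10% serial code, even a huge core count caps out near 10x:
print(round(speedup(0.1, 4), 2))     # 4 cores
print(round(speedup(0.1, 1024), 2))  # 1024 cores, already close to the 10x ceiling
```

Run it with a few values of f and you can see why the serial piece dominates so quickly.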
Title: Re: a golden age of Amiga
Post by: jorkany on February 02, 2012, 04:41:43 PM
Quote from: yssing;678731
The Official Amiga moved to PPC a long time ago, and yes AOS4.x is the official amiga platform.
If you believe that then I expect you'll be buying the upcoming Commodore Amiga from CUSA, as it is the official Amiga.

But then if you believe what you wrote, you'll probably believe anything.

Look, I found the perfect soundtrack for you to listen to as you pretend OS4 is "teh REAL Amiga!":
http://www.metacafe.com/watch/sy-959746544/journey_dont_stop_believin_official_music_video/
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 02, 2012, 06:56:54 PM
Quote from: Richard42;678774
Amdahl's law doesn't have anything to do with memory bandwidth.  It's very simple, and trust me, there is no way around it.  Some algorithms, or parts of algorithms, cannot be parallelized; they are inherently serial.  CABAC in H.264 video compression is a good example.  Your overall software performance will be bounded by the execution time of the most complex piece of the algorithm which cannot be parallelized.  Once that piece of your algorithm is taking up 100% of an execution core, your software cannot go any faster, regardless of how many more CPUs you throw at it.

This is exactly right.  However ray tracing is one of a class of problems called "embarrassingly parallel".

Quote
I read a paper a few days ago claiming that it's better to have a few beefy cores than a bunch of wimpy cores.  This is exactly what I've also seen in my experience with high-performance computing, and the reason why serious people run HPC compute loads on badass x86 chips and not ARM or Atom.

Well that rather depends on what you want to do with your computer!  If you know you are going to use a lot of threads, a bunch of wimpy cores are a good choice.

A GPU is exactly a "bunch of wimpy cores" designed to maximise for data throughput.  So is an UltraSPARC T1.  They both do their jobs in different ways, but both are the right solutions to the right problems.
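To illustrate the "embarrassingly parallel" point: in a ray tracer every pixel can be computed on its own, so the image splits cleanly across workers. A toy sketch (the shade function is a made-up stand-in, not a real tracer):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8

def shade(pixel):
    """Stand-in for tracing one ray; depends only on its own pixel."""
    x, y = pixel
    return (x * 31 + y * 17) % 256

pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# Serial reference...
serial = [shade(p) for p in pixels]

# ...and the same work fanned out to a pool of workers. No worker ever
# needs another worker's result, which is what "embarrassingly parallel" means.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

assert serial == parallel
```

That independence is exactly what a "bunch of wimpy cores" exploits.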
Title: Re: a golden age of Amiga
Post by: HenryCase on February 02, 2012, 07:34:32 PM
Quote from: Richard42;678774
Amdahl's law doesn't have anything to do with memory bandwidth.  It's very simple, and trust me, there is no way around it.  Some algorithms, or parts of algorithms, cannot be parallelized; they are inherently serial.


Let me put this question to you: what stops serial processing tasks being shared amongst different cores?

The issue with parallelising serial tasks is not the access to more cores, as out-of-order execution shows it's possible to streamline processing based on the computing resources available. What does hold things back is that the memory holding the data being worked on is not shared out. That is why I stated that commonly employed memory architectures are the bottleneck. If it helps, think about it like this. What we have now is multiple cores working on a single data set. Now think about a network of computers working on a problem together. A key part of making this efficient is ensuring they block each other as little as possible. Now consider that it's possible to build a 'network' of computers within a single computing device, so long as each has control of its own memory. Hopefully you can see where this is going; if not, this page should give a few more clues:
http://www.eetimes.com/design/eda-design/4211228/Overcoming-32-28-nm-IC-implementation-challenges

Amdahl's Law applies only to a certain set of programs. Yes, there are parts of algorithms that must be executed in a certain order. However, there are many ways to write code that lends itself to parallel execution. Here's one example of an article that discusses ways to beat Amdahl's law:
http://drdobbs.com/cpp/205900309?pgno=1

Generally speaking, one of the key things when designing programs that are highly parallelised is avoiding the need to manipulate state. For example, the programming language Haskell is 'pure' by design in the sense that it doesn't alter the state of the program whilst running it, and the elements of the program that do require changing state, and the side effects from this, are sandboxed in structures called monads. This allows Haskell programs to take advantage of multi-core CPUs without needing to worry about execution order.

If this is new to you, we need to explore what is meant by side effects. Imagine if every time you asked a certain question you got the same answer. Having such a question in a program is an example of something without side effects. Next, imagine the opposite. With the question with side effects, the answer is partly determined by when you ask it. By removing side effects, it doesn't matter when you ask the question.
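Haskell terminology aside, the same distinction can be shown in any language. A sketch in Python:

```python
# No side effects: same question, same answer, every time.
def area(width, height):
    return width * height

# Side effects: the answer depends on hidden state that changes each
# time you ask, so calls can no longer be freely reordered or parallelized.
counter = 0

def next_id():
    global counter
    counter += 1
    return counter

assert area(3, 4) == area(3, 4)   # safe to call anywhere, in any order
assert next_id() != next_id()     # the order of calls now matters
```

The first function can be handed to any core at any time; the second forces a serial ordering.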
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 02, 2012, 07:58:20 PM
Quote from: HenryCase;678804
Let me put this question to you: what stops serial processing tasks being shared amongst different cores?

Umm it's the fact that every stage in the computation has a direct dependency on the previous stage.  Instruction level parallelism might be able to squeeze some extra performance out of each stage, but even that has its limits (more than four-way superscalar is typically more effort than it's worth).

There's an argument to be made for this being down to unimaginative formulations of the problem.  But that's another story.  A lot of existing software has been designed this way.
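A concrete example of that direct dependency (a sketch): each iteration needs the previous iteration's result, so no number of cores helps.

```python
def compound(balance, rate, years):
    """Each year's balance depends on the previous year's: a serial chain.
    You cannot hand year 10 to another core before year 9 is finished."""
    for _ in range(years):
        balance = balance * (1 + rate)
    return balance

# Contrast with a dependency-free loop, which parallelizes trivially:
def squares(values):
    return [v * v for v in values]  # each element is independent

print(round(compound(100.0, 0.05, 10), 2))
```

The first loop is a dependency chain; the second is a map, and maps are exactly what GPUs and wimpy-core designs eat for breakfast.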
Title: Re: a golden age of Amiga
Post by: HenryCase on February 02, 2012, 09:33:13 PM
Quote from: Mrs Beanbag;678808
A lot of existing software has been designed this way.


You won't find me disagreeing with that. Of course it's possible to design software that doesn't scale well to multiple CPU cores.

What is being suggested by Amdahl's Law is that there's a limit beyond which you can't improve performance by adding a new processing core, even if you code from scratch. Looking at it another way, if you're trying to execute an algorithm, you'll only be as fast as the slowest non-divisible element in the algorithm. This is true, but what I feel is being overlooked is that it's possible to make smaller non-divisible elements in a program by altering the architecture that the program runs on. It's about maximising performance, and from what I've seen we've got plenty of room to optimise both the hardware and the software of parallel systems.
Title: Re: a golden age of Amiga
Post by: psxphill on February 03, 2012, 07:53:40 AM
Quote from: Mrs Beanbag;678688
Indeed. One trick I've still yet to try, but I know must be possible, is to use HAM as a zero-cost polygon filler!

It was originally designed for that. However there is a cost as you need to clip polygons rather than use z sorting.
Title: Re: a golden age of Amiga
Post by: yssing on February 03, 2012, 08:19:48 AM
Quote from: jorkany;678781
If you believe that then I expect you'll be buying the upcoming Commodore Amiga from CUSA, as it is the official Amiga.

But then if you believe what you wrote, you'll probably believe anything.

If I believe that AOS4.x is the official Amiga, and it runs on PPC only, why would I then spend money on CUSA's rebranded x86 HW?

No reason for you not to behave like an adult either.
Title: Re: a golden age of Amiga
Post by: bloodline on February 03, 2012, 10:19:39 AM
Quote from: yssing;678914
If I believe that AOS4.x is the official Amiga, and it runs on PPC only, why would I then spend money on CUSA's rebranded x86 HW?

No reason for you not to behave like an adult either.
The point jorkany is making is that AOS4.x is only official because some company bought the Amiga trademark and then said that it was official... If that is your logic then, CUSA are also producing official products.

The reality now is that there isn't really an "official" AmigaOS... People can just use whatever they want to use.
Title: Re: a golden age of Amiga
Post by: dammy on February 03, 2012, 02:34:58 PM
Quote from: yssing;678914
If I believe that AOS4.x is the official Amiga, and it runs on PPC only, why would I then spend money on CUSA's rebranded x86 HW?

No reason for you not to behave like an adult either.


How about you acting as one as well?  There are multiple official Amiga OSs out in the wild.  My very first one was 1.2 and ended up with 3.1.  Then H&P released 3.5 and 3.9.  Then Hyperion released the 4.x series.
Title: Re: a golden age of Amiga
Post by: yssing on February 03, 2012, 04:29:08 PM
As the devil reads the Bible. Why do I even bother?

Look, the successor to AOS 1.x, 2.x and 3.x is AOS4.x, not MOS or AROS.
Naturally AOS up to 3.9 is official, but development of those versions ceased a long time ago. No, I am not saying that nothing is being developed for them.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 03, 2012, 09:01:44 PM
@yssing
FWIW, the people claiming that Amiga OS4.x somehow needs to share the 'official' title are just being pedantic IMO, and I say this as an AROS fan.
Title: Re: a golden age of Amiga
Post by: persia on February 04, 2012, 02:52:23 PM
I think the "official title" has far less meaning in 2012 than it did in 1992.  Let's face it, years of non-development have left all the Amiga-like OSs as hobby OSs.  There's not one that has a hope of commercial viability.  Just love the OSs you love and be happy with it.
Title: Re: a golden age of Amiga
Post by: Thorham on February 04, 2012, 05:27:50 PM
What's up with this OS talk? Amiga isn't a bunch of operating systems, it's a line of computers. As for AOS, it sucks. Amigas are cool, but the OS isn't worth much. That should change (though it isn't going to happen, if you look at current efforts).
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 07, 2012, 09:48:43 AM
I don't even know what "official" means in this context, and it certainly doesn't particularly motivate me.  I don't much care if something is "official" or not, I only care if it is any good.
Title: Re: a golden age of Amiga
Post by: wawrzon on February 07, 2012, 10:57:28 AM
@Mrs Beanbag
yeah, but you know you have to count on vice squad being constantly on the loose telling us what is official and what not in amiga world.
Title: Re: a golden age of Amiga
Post by: AmigaNG on February 07, 2012, 12:24:11 PM
I kind of have to agree with Mrs Beanbag: whatever you're happiest using and works for you, use it; don't just follow the brand. PS: I'm an AROS and AmigaOS4 fan.
Title: Re: a golden age of Amiga
Post by: persia on February 07, 2012, 01:56:14 PM
Yes, right now I am using AROS more and Classic less.  If the netbook turns out to be real I'll buy it and try OS4.  I also have MOS running on a Mac Mini.  But in the end, MOS, OS4 and AROS are shadows of what Amiga was.  They aren't cutting edge, aren't innovative and aren't combinations of well tuned hardware and software.  It's all pure nostalgia.
Title: Re: a golden age of Amiga
Post by: Tripitaka on February 07, 2012, 03:27:31 PM
Has anyone else noticed that this thread has descended into red/blue bickering?   ....yawn, pass the popcorn.

The only cunning plan I can see to revive a golden age of Amiga involves flesh eating nanites and a lunatic Amiga fan controlling them. Hmmm.... maybe I'll get to work on that after I've finished the washing up.
Title: Re: a golden age of Amiga
Post by: psxphill on February 07, 2012, 03:58:19 PM
Quote from: bloodline;678925
The reality now is that there isn't really an "official" AmigaOS... People can just use whatever they want to use.

You're half right, there is an official AmigaOS but people can use whatever they want.
 
"Official" in this case just means the name was licensed. It's no different to branded sports mechandise.
Title: Re: a golden age of Amiga
Post by: Bamiga2002 on February 07, 2012, 04:00:48 PM
The classic golden age is coming again in the form of Natami :afro:!
And for some it is already here with NG systems with OS4 & MOS :)
Title: Re: a golden age of Amiga
Post by: wawrzon on February 07, 2012, 04:18:54 PM
Quote from: psxphill;679704
You're half right, there is an official AmigaOS but people can use whatever they want.
 
"Official" in this case just means the name was licensed. It's no different to branded sports mechandise.


so, as the only official amiga trademark today is apparently the property of cusa, even the *official* a-eon announcements dare to name "amiga" only within quotes. so what?? i don't care for them or for others that no longer suit my personal amiga interest. and for the record: i'm not from morphos.
Title: Re: a golden age of Amiga
Post by: Digiman on February 07, 2012, 06:24:26 PM
Without a new generation of J. Miner and R.J.Mical or any of the excellent ideas from programmers like Dan Silva etc there can't ever be a new age of Amiga.

Remember, 18 years without a repeat of the firsts found in the A1000 through CD32 is a long time in engineering and software development terms.

edit:mass market/PC conquering type golden age*
Title: Re: a golden age of Amiga
Post by: Fats on February 07, 2012, 06:50:08 PM
Quote from: Digiman;679714
Without a new generation of J. Miner and R.J.Mical or any of the excellent ideas from programmers like Dan Silva etc there can't ever be a new age of Amiga.


Even if J. Miner were here now, he wouldn't be working on the next great desktop computer. The time is long past when you could be earth-changing in that product category. He would probably be working on the first real intelligent machine or something like that.

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: Digiman on February 07, 2012, 09:11:38 PM
Software can be intelligent too. And yes, to be honest, if someone produced an OS 25 years' worth better than KS/WB 1.2 on the A1000/500, it would be a game changer. A multitasking GUI desktop machine was game-changing in '85/86.

Windows, Linux or OSX will never be a revolution......so we are stuck.

What I meant about Jay Miner etc. was a new generation of designers who design a machine architecture radically different from, and superior to, the current desktop PC or Mac. The PC and Mac existed before the A1000 launch. You need lateral-thinking geniuses before millions of dollars. And the OS and user input technology must also be cutting edge for another revolution similar to the dawn of the Amiga in 1985, IMO.
Title: Re: a golden age of Amiga
Post by: persia on February 08, 2012, 04:39:42 AM
@Digiman Forward thinking isn't going to come out of the Amiga crowd, they scream if even the outdated and confusing idea of a "snapshot" is set to off by default. There'll be no innovation here.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 08, 2012, 09:24:46 AM
Quote from: Digiman;679740
What I meant about Jay Miner etc was a new generation of designers who designed a machine architecture radically different and superior to current desktop PC or Mac.


I have a plan that would achieve exactly that. Issue now is getting the technical skills to implement it, but that's something I'm working on. The architecture isn't exactly like the Amiga, but things I picked up from the Amiga and discussions around it have partly inspired it.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 08, 2012, 05:39:37 PM
Quote from: HenryCase;679788
I have a plan that would achieve exactly that. Issue now is getting the technical skills to implement it, but that's something I'm working on. The architecture isn't exactly like the Amiga, but things I picked up from the Amiga and discussions around it have partly inspired it.

What's your plan?  I might be able to help.
Title: Re: a golden age of Amiga
Post by: Tripitaka on February 08, 2012, 06:06:32 PM
Quote from: HenryCase;679788
I have a plan that would achieve exactly that. Issue now is getting the technical skills to implement it, but that's something I'm working on. The architecture isn't exactly like the Amiga, but things I picked up from the Amiga and discussions around it have partly inspired it.


Pah! My plan was awesome, but I'll be interested to hear yours anyway.
Title: Re: a golden age of Amiga
Post by: Fats on February 08, 2012, 06:33:06 PM
Quote from: Digiman;679740
What I meant about Jay Miner etc was a new generation of designers who designed a machine architecture radically different and superior to current desktop PC or Mac.


These things are happening: Arduino - PogoPlug - RepRap to name a few. All I wanted to say is that a.org is not the right place to look for that, and there is nothing wrong with that.

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 08, 2012, 08:11:06 PM
@Mrs Beanbag, @Tripitaka
Thanks for your interest. My plan is a mishmash of ideas from all over the place, and I've not tried to summarise them succinctly before, but I'll try.

The plan is based around two central themes: FPGA computing and an OS that has a similar structure throughout its construction. As hinted at before, I do have Amiga-inspired ideas for this system too, but it's best to explain the core structure first.

With regards to FPGA computing, FPGAs are quickly improving in power and cost already, but I believe we're on the verge of seeing a game changer emerge. What I believe to be a game changer will be FPGAs built on memristor technology. There are a few reasons for this. Other than being fast storage devices, memristors can also be used for computation. This video is what inspired me with this particular idea, if you wish then skip to 29:10 to hear the bit about the memristor-based FPGA:
http://www.youtube.com/watch?v=bKGhvKyjgLY

The implication logic section starting at around 38:37 is also worth pointing out.

Next, the OS. The main source of inspiration behind the OS came from watching this video by Ian Piumarta, who currently works for VPRI:
http://www.youtube.com/watch?v=cn7kTPbW6QQ

The video describes a programming language being developed for VPRI that basically combines the functional programming of Lisp with the object-oriented nature of Smalltalk, which appeared to be a powerful combination for approaching programming tasks (FWIW, I realise that there are Lisps with object systems in place already, but these previous object systems were afterthoughts rather than at the core of the language).

What I was particularly struck by was he was describing the way VPRI was trying to build an OS using this language, which would be compact (OS in approx 20,000 lines of code), as well as structurally similar at every level of the OS (in other words, understand the code for high level apps, and you'd also understand the code at the lower levels of the OS).

What's so good about this? There are a few things. One of the advantages touched on in the video is that this same language can be used to define the hardware. So, if you run the OS on an FPGA, learning one language doesn't just give you the ability to create programs, it also allows you to define hardware accelerators for your programs.

To further illustrate what's possible with this approach, it's worth knowing about the Bluespec language. A simple explanation of what's possible in Bluespec is that when you evaluate Bluespec code, what is produced is both an accelerator design and the code that makes use of this accelerator. Essentially, it tries to create an optimal solution utilising both hardware and software:
http://www.gizmag.com/bluespec-code-circuit-system/20827/
I'd attempt the same thing with this system I'm proposing.

Another advantage that comes along with memristor-based FPGAs is the re-programmability. One thing you're taught early when learning Lisp is to view code and data as different ways to interpret the same object, in that code can be data and data can be code. Think about what we have with a memristor-based FPGA; we have a device that can store programs, and we have a device that can perform computation. What if you made these areas of the chip interchangeable? In other words, a section of the chip could be storage one minute, and then a logic circuit the next. What this gives you is on-the-fly re-programmability of FPGAs: you program a new accelerator as 'data', then you flip the switch and it's now an accelerator.

As I mentioned before, I have other ideas for this system, and if you'd like information about the Amiga-inspired parts I'd be happy to share them. So to summarise, the basic plan is for flexible FPGA computing + simple OS + architecture that scales from high level to low level. In a way this simple OS + hardware accelerators approach is a spiritual successor to the Amiga anyway, just taken to the next evolutionary level.

Why is this better than what's out there? Other than the simplicity of the system, it has a lot of potential to speed up code execution. Not only would you have as many specialised accelerators as you could fit on your hardware, you'd be concentrating the processing on a single chip (apart from RAM; there are some engineering challenges to overcome before memristors replace RAM), which should remove a lot of processing bottlenecks.

So what do you think? :-)

@Fats
Quote
All I wanted to say is that a.org is not the right place to look for that

Are you sure? ;-)
Title: Re: a golden age of Amiga
Post by: amigadave on February 08, 2012, 09:55:46 PM
@HenryCase,

Is this something you are really working on, or just thinking about?
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 08, 2012, 10:10:03 PM
One thing that I have thought about FPGAs is that they're designed for "rapid prototyping", or in other words with the idea in mind that they are something you make with the intention of making a "proper" chip at some stage in the future... how this manifests is in the scheme where you "flash" your design to the chip, into non-volatile storage of some sort.  But if these are going to be used as reconfigurable processors, they don't need to be non-volatile at all and could be made of ordinary RAM so that they can be reconfigured much more quickly.  In fact they could be wired up in such a way that they can reconfigure themselves as they go along.

This has a lot of advantages.  If your program doesn't use some feature of a CPU, such as floating point, but does a lot of integer maths, it could reconfigure it all as integer cores when needed.  But perhaps we can go even further than an FPGA design here...
Title: Re: a golden age of Amiga
Post by: HenryCase on February 08, 2012, 10:11:40 PM
Quote from: amigadave;679885
@HenryCase,

Is this something you are really working on, or just thinking about?


Working on in the sense of educating myself, but not working on in the sense of building it yet. Do you have any feedback?
Title: Re: a golden age of Amiga
Post by: HenryCase on February 08, 2012, 10:40:16 PM
Quote from: Mrs Beanbag;679886
One thing that I have thought about FPGAs is that they're designed for "rapid prototyping", or in other words with the idea in mind that they are something you make with the intention of making a "proper" chip at some stage in the future... how this manifests is in the scheme where you "flash" your design to the chip, into non-volatile storage of some sort.  But if these are going to be used as reconfigurable processors, they don't need to be non-volatile at all and could be made of ordinary RAM so that they can be reconfigured much more quickly.  In fact they could be wired up in such a way that they can reconfigure themselves as they go along.


Thank you for your ideas Mrs Beanbag.

With regards to FPGAs, you are correct that currently they are often used for rapid prototyping. However, they are occasionally used in commercial products, and again I must stress that memristors bring a number of advantages to typical FPGAs that will allow their performance to get much closer to fixed-function devices. Allow me to explain...

The simple explanation for current FPGAs is that they are organised into blocks called logic elements. You can see a diagram of one here:
http://www.eecg.toronto.edu/~pc/research/fpga/des/Logic_Element/logic_element.html

Through configuring these logic elements you change the function of FPGAs to fit your needs. However, the architecture is sub-optimal compared to ASICs as the extra bulk required to manage the re-programmability limits their logic density (in other words, ASICs allow circuitry to be simplified).

However, using memristors in FPGAs has the potential to make FPGAs a lot more efficient. There are a few ways to build FPGAs using memristors, but let's first focus on the first way described in the video I shared. The simple way to think about this is that you have two layers in the chip; on one layer you have transistors, on the other layer you have memristors. Memristors act as the wiring between the transistors. They are ideally suited to this. Memristors are electrically controlled variable resistors that remember their resistance. Increase the resistance high enough and you have a 'blocked' connection. Reduce the resistance and you have an 'open' connection. The circuitry to manage the memristor wiring would be minimal, and the simpler connections would allow more efficient designs to be implemented on FPGAs.

As suggested, this is just one way to implement FPGAs utilising memristors. The other way hinted at in the video was to use implication logic. What has been discovered is that memristors can do logic by themselves, you don't need transistors at all, but you need to use a different form of logic. My knowledge on implication logic is sketchy at the moment, but from what was suggested in the video, the preliminary work done at HP on this indicates that when compiling C code, you end up with smaller binaries using implication logic. It's an area I need to research more, but I should learn to walk before I can run! ;-)
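For anyone wondering what "implication logic" actually means: material implication (IMP) together with a constant FALSE is functionally complete, which is why memristor IMP gates can, in principle, build anything. A quick truth-table sketch in Python (just the Boolean algebra, nothing memristor-specific):

```python
def imp(p, q):
    """Material implication: p IMP q == (not p) or q."""
    return (not p) or q

def nand(p, q):
    # NOT q is (q IMP False); then p IMP (NOT q) gives NAND,
    # and NAND alone can build every other Boolean gate.
    return imp(p, imp(q, False))

# Verify against the ordinary NAND truth table:
for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
```

So a chip that natively does IMP loses nothing in expressive power versus AND/OR/NOT gates.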

Use of RAM for reconfigurable computers is an interesting idea, but I'm not quite sure how it would work. Could you explain more?

Quote from: Mrs Beanbag;679886

This has a lot of advantages.  If your program doesn't use some feature of a CPU, such as floating point, but does a lot of integer maths, it could reconfigure it all as integer cores when needed.  But perhaps we can go even further than an FPGA design here...


Yes, I intended the CPU to be programmable in the way you suggest. When you say we could go further than an FPGA design here, what do you have in mind?
Title: Re: a golden age of Amiga
Post by: Tripitaka on February 08, 2012, 11:00:43 PM
@Henrycase

I have a very good friend who is far more technical than I am. He used to code for Amiga systems, funnily enough, mostly touchscreen-based point-of-sale systems. Now he's freelance, but the bulk of his work comes from a system he wrote for transferring large amounts of securely encrypted data. It's quite well known, apparently.

Anyway, the point is that about 11 years ago we had a long talk about something very similar to your idea, particularly regarding the blurring of hardware and software boundaries. It was all just theory and "this is the direction I would like to see things going" kind of chat, but I've never quite looked at hardware the same way since.

I heartily agree that a true Amiga revival will not ever be an Amiga revival as such but rather a revival of innovation, the Amiga spirit if you will. Your "from the ground upwards" kind of thinking is just what we need.

The big question, of course, is not so much whether it can be done; it is more whether it can be done at the right price. But please, don't let that put you off trying. I guess you'll work that out as you go along.

Good luck, I hope something comes from this.
Title: Re: a golden age of Amiga
Post by: actung_bab on February 08, 2012, 11:07:35 PM
nice franko can do these also
Title: Re: a golden age of Amiga
Post by: bloodline on February 08, 2012, 11:10:41 PM
The big question is "what problem does it solve"... In engineering almost anything is possible, but few things are actually useful :-/
Title: Re: a golden age of Amiga
Post by: HenryCase on February 08, 2012, 11:19:23 PM
@Tripitaka
Thank you for your words of support, I'm glad you see the benefits of blurring the boundaries between hardware and software.

Regarding the price, I don't have much control over this, but FPGAs are clearly improving fast when it comes to their price/performance ratio, plus concentrating the processing and storage in a single chip should help decrease costs. To be honest, I say single chip, but there's no reason you have to limit this design to a single chip, it would scale well to multiple chips too, enhancing upgradability no end. Want a bit more processing power? Stick in another memristor FPGA. You still keep the processing power you had before, but now nearly seamlessly enhanced with the capacity you just added.

I hope you will feel free to post any more ideas and feedback you have, and I'd be interested to hear from your software engineer friend too.

Thanks again. :-)
Title: Re: a golden age of Amiga
Post by: Tripitaka on February 08, 2012, 11:23:31 PM
Quote from: bloodline;679897
The big question is "what problem does it solve"... In engineering almost anything is possible, but few things are actually useful :-/


Well, in your case you could finish that music you're doing on your super-duper sound-processing computer and then play a game that uses those same sound-processing chips for 3D graphics (as they have just been rewritten to be graphics processors). Your point is quite valid, of course; it is exactly what I would expect a possible financial backer to say. Sometimes, however, you can't see how useful something is until you try it. Arthur C. Clarke was mocked for suggesting the communications satellite, after all.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 08, 2012, 11:24:47 PM
Quote from: bloodline;679897
The big question is "what problem does it solve"... In engineering almost anything is possible, but few things are actually useful :-/


What problem do you want it to solve? Computers are programmable devices, you can turn them to anything that can be symbolically represented. Altering the architecture in the way being described allows increased efficiency, enhanced extensibility (both at the hardware and software level), easier maintainability and even potentially lower power draw and lower cost. What's not to like! ;-)
Title: Re: a golden age of Amiga
Post by: Tripitaka on February 08, 2012, 11:42:54 PM
Quote from: HenryCase;679900
@Tripitaka
Thank you for your words of support, I'm glad you see the benefits of blurring the boundaries between hardware and software.

Regarding the price, I don't have much control over this, but FPGAs are clearly improving fast when it comes to their price/performance ratio, plus concentrating the processing and storage in a single chip should help decrease costs. To be honest, I say single chip, but there's no reason you have to limit this design to a single chip, it would scale well to multiple chips too, enhancing upgradability no end. Want a bit more processing power? Stick in another memristor FPGA. You still keep the processing power you had before, but now nearly seamlessly enhanced with the capacity you just added.

I hope you will feel free to post any more ideas and feedback you have, and I'd be interested to hear from your software engineer friend too.

Thanks again. :-)


I'll email my friend your post and see what he says.

I remember what started the conversation off, now I think about it. At the time we both worked for a DVD authoring house (I was an Author/Graphic Designer, he was an Author/Sys Admin). We had been using a hardware-based MPEG encoder that encoded at about twice realtime, so 3 hours for an average movie. Back then the software-based encoders were way too slow. Soon, however, the software encoders won out over the hardware as CPUs got so damn quick. It only took a couple of years. We thought it was a terrible waste, as the hardware encoder was now redundant and had cost a whopping amount of cash. Anyway, we got into talking about what a shame it was that we couldn't re-write the damn chip to do something more useful. It still horrifies me that a full DVD authoring setup back in the early days of DVD cost nearly £200K. That was less than 15 years ago.
Title: Re: a golden age of Amiga
Post by: Digiman on February 09, 2012, 12:13:11 AM
Quote from: bloodline;679897
The big question is "what problem does it solve"... In engineering almost anything is possible, but few things are actually useful :-/


This is true. What is also true, though, is that technology as advanced as the A1000 was in 1985 takes decades to be accepted. Today we take the multitasking multimedia OS for granted, but before 2000/XP, Windows was a joke: a toy OS for accounting nerds and a general public who accepted the limits of Win 95-ME and OS 1-9.

Secondary storage and main memory price/performance was five years behind the OCS A1000 chipset in '85. Would MP3 players have gone mass market if the solid-state memory to store the MP3s on had been stuck at a maximum of 32 MB and transfer speeds of 64 KB/sec for years? Exactly.
Title: Re: a golden age of Amiga
Post by: bloodline on February 09, 2012, 07:14:12 AM
My question wasn't meant as a criticism of HenryCase's idea, but more a suggestion to look at the problem from the other direction.

It's really fun to think about the technology and what you could put together in an interesting way... but really, when building a product, you need to find a core, basic need that is unfulfilled and then try to meet that need in the simplest, cheapest way... :)
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 09, 2012, 12:22:49 PM
Quote from: HenryCase;679892
Thank you for your ideas Mrs Beanbag.

With regards to FPGAs, you are correct that currently they are often used for rapid prototyping. However, they are occasionally used in commercial products,

I should have said rapid prototyping and low-volume production.  But in either case, the problem is to "make some hardware", to put it simply.  They are not yet thinking outside of the box.  Once the design is put on the chip (by some external device or circuit), it is constant, so the chip behaves just like any other special-purpose chip.

Quote
Use of RAM for reconfigurable computers is an interesting idea, but I'm not quite sure how it would work. Could you explain more?

Basically, I'm suggesting that instead of the Look Up Tables (LUTs) being initialised once by some external device, it can configure and reconfigure itself on the fly.  Currently (as I understand it at least) the device has to be turned off, loaded with a design, and then turned on again.  But rather it could load its own design through a DMA channel or suchlike as it goes along.

In fact the Amiga already has an FPGA in it.  It is called the Blitter.  The blitter has three sources which it can combine using any combinatorial logic supplied to it in the Minterms field of its control registers.  This Minterms field then is exactly the same as the LUT in an FPGA's logic cell.  It applies the same LUT to every bit in a 16-bit word at once and then sequentially on word after word, which it pulls in through DMA and writes out again.  Now imagine if it could pull the Minterms in through DMA as well, it would open up all kinds of possibilities.  Then if you could connect lots of blitters together so that they could pipe their outputs to each other, many things become possible.

I've never seen anyone mention this, but you can configure the blitter to do arithmetic addition.  Presumably this is why the fill operation works in descending mode only - it is to propagate the carry bit.

So really, using FPGAs isn't so different from the Amiga after all!
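The parallel between the blitter's Minterms field and an FPGA's LUT can be sketched in a few lines. This is a toy Python model, not Amiga code: the 8-bit minterm value acts as a 3-input look-up table applied to every bit of the three source words at once.

```python
# Sketch: the blitter's 8-bit Minterms field is a 3-input LUT applied
# bitwise across a whole 16-bit word of sources A, B and C in parallel.

def blit_word(a, b, c, minterms):
    """Combine three 16-bit source words using an 8-bit minterm LUT,
    the way the Amiga blitter's BLTCON0 minterm bits do."""
    out = 0
    for bit in range(16):
        # Build the 3-bit LUT index from the matching bits of A, B, C.
        idx = (((a >> bit) & 1) << 2) | (((b >> bit) & 1) << 1) | ((c >> bit) & 1)
        out |= ((minterms >> idx) & 1) << bit
    return out

# Minterm 0xCA is the classic "cookie cut": if A then B else C.
a, b, c = 0b1111000011110000, 0b1010101010101010, 0b0101010101010101
print(bin(blit_word(a, b, c, 0xCA)))   # → 0b1010010110100101
```

Pulling the `minterms` argument in through DMA per word, as suggested above, is exactly what would turn this fixed LUT into something FPGA-like.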

Quote
Yes, I intended the CPU to be programmable in the way you suggest. When you say we could go further than an FPGA design here, what do you have in mind?

What I have in mind is a pipelined scheme where not only the data can be passed down the pipeline, but the functionality as well!  Say for instance you want a processing unit that can perform several different kinds of pipelined operation.  So instead of having a pipeline for each different operation, you can pull the circuits themselves from memory and actually pass the functionality down the one pipeline.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 09, 2012, 09:53:13 PM
@Tripitaka
Quote from: Tripitaka;679908
We thought it was a terrible waste as the hardware encoder was now redundant and had cost a whopping amount of cash. Anyway, we got into talking about what a shame it was that we couldn't re-write the damn chip to do something more useful.


I agree, it's a shame when dedicated hardware no longer adds value to a system. This is something reconfigurable computing aims to do away with.

@bloodline
Quote
It's really fun to think about the technology, and what you could put together in an interesting way... But really when building a product, you need to find a core, basic need that is unforfilled and then try and meet that need in the simplest cheapest way...


Make no mistake about it, I'm not going for a single niche here, I'm going for general purpose computing. Let's go through the benefits I listed before:

Increased efficiency - Important where performance is key. Whatever tasks need high performance computing could benefit. If you want a single example, think of physics simulations.

Enhanced extensibility (both at the hardware and software level) - Important where you have tasks with specific niche requirements. This matters in markets that are just beginning, or are too small for large investment. For an example of a market where extensibility could be a benefit, think of home automation.

Easier maintainability - This is just common sense. By keeping the core operating system compact and easier to understand, you increase the number of people that are adept at reasoning about its function. The more accurately that people can build a mental model of how something works, the more adept they will be at using, fixing and enhancing it. Benefits will be felt in all industries where it is used.

(Potentially) lower power draw - The obvious answer is that this is important for mobile devices, but it's important for much more than that; for example, power draw in servers is also a big concern. Memristor-based FPGAs have a number of ways to reduce power draw. For example, with modern CPUs you hear people talking about 'dark silicon'. Dark silicon is where you have unused sections of your processor, because it's getting increasingly hard to send power to the whole chip all of the time. We can use this to our advantage: by switching off the unused chip real estate through use of memristor switches, you can optimise your device for low power draw. Then, when more power is available, you switch the extra circuitry on again.

(Potentially) lower cost - If I really have to explain why this will help this succeed, I really don't know what to say!

With all that said, there are markets I think will adopt earlier than others. The obvious place to look is where FPGAs are already in use, i.e. in embedded systems. Embedded systems are specialised enough that you don't need a large, expansive software ecosystem to build what you need. Plus, the engineers building embedded systems are more attuned to choosing hardware based on its actual merits rather than on any emotional ties.

As the software ecosystem develops, the system will become attractive to more and more markets. However, I'm not going to waste my time imagining what these markets will be; I am confident that people will see the benefits, we just need the system to be built so those benefits can be realised. Hope that answers your question.

@Mrs Beanbag
Quote
Currently (as I understand it at least) the device has to be turned off, loaded with a design, and then turned on again.


I quoted this particular text, as I think it shows I've confused you by using the term FPGA. The FPGAs that are possible with memristors do not need to follow the same restrictions that traditional FPGA designs have.

For the conversation to move forward, I think it is absolutely vital that you understand the benefits that memristors could bring.

Firstly, they can make FPGAs reprogrammable on the fly.

Secondly, it can make FPGAs more efficient.

Thirdly, memristors could potentially replace RAM, as well as long-term storage and processing. Essentially you can get all three in one chip.

Now, I hinted before that there are some issues with memristors replacing RAM at the moment, but considering how new memristors are, I anticipate memristors will be used in RAM in the future, once these challenges are overcome.

If you'd like a shorter introduction to the memristor video I posted before, please watch this video, it's only 6 minutes long, and should help you understand how memristors can change the structure of FPGAs:
http://www.youtube.com/watch?v=rvA5r4LtVnc

@all
I welcome any more questions, and also any further feedback, positive or negative. Thanks.
Title: Re: a golden age of Amiga
Post by: Piru on February 09, 2012, 09:59:23 PM
Quote from: Mrs Beanbag;679953
The blitter has three sources which it can combine using any combinatorial logic supplied to it in the Minterms field of its control registers.  This Minterms field then is exactly the same as the LUT in an FPGA's logic cell.  It applies the same LUT to every bit in a 16-bit word at once and then sequentially on word after word, which it pulls in through DMA and writes out again.  Now imagine if it could pull the Minterms in through DMA as well, it would open up all kinds of possibilities.  Then if you could connect lots of blitters together so that they could pipe their outputs to each other, many things become possible.

I've never seen anyone mention this, but you can configure the blitter to do arithmetic addition.  Presumably this is why the fill operation works in descending mode only - it is to propagate the carry bit.
This has been explored in numerous different ways. For example, it's possible to create a full-screen Game of Life with only the blitter (at 320x256 it reaches 12 fps on an A500). I have code lying around doing just that.

Further, it's possible to set the copper in "danger mode" (the CDANG bit in COPCON), in which it can write to more custom registers than normal. In this mode the copper can program the blitter, and the blitter can be used to blit a new copper list, which is again activated by the copper. This way it's entirely possible to have arbitrary logic running entirely independently of the main CPU (except for the initialisation, and within the limitations set by the amount of chip memory). It'd be interesting to hear if anyone has ever attempted to construct such a setup.
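The word-parallel Boolean trick behind a blitter-only Game of Life can be sketched in Python. This is an illustration of the technique, not Amiga code: each integer stands in for one row of a bitplane, and neighbour counts are computed for every column at once using only XOR/AND/OR/shift operations of the kind a blitter can perform.

```python
def add2(a, b):          # word-wide half adder: (sum, carry)
    return a ^ b, a & b

def add3(a, b, c):       # word-wide full adder: (sum, carry)
    s, c1 = add2(a, b)
    s, c2 = add2(s, c)
    return s, c1 | c2

def life_step(rows, width):
    """One Game of Life generation; each row is an int bitmask (toroidal)."""
    mask = (1 << width) - 1

    def shl(r):  # left neighbour, with horizontal wraparound
        return ((r << 1) | (r >> (width - 1))) & mask

    def shr(r):  # right neighbour, with horizontal wraparound
        return ((r >> 1) | ((r & 1) << (width - 1))) & mask

    n, out = len(rows), []
    for y in range(n):
        up, cur, dn = rows[(y - 1) % n], rows[y], rows[(y + 1) % n]
        # 2-bit horizontal sums per row (centre cell excluded from its own row)
        lo_u, hi_u = add3(shl(up), up, shr(up))
        lo_d, hi_d = add3(shl(dn), dn, shr(dn))
        lo_c, hi_c = add2(shl(cur), shr(cur))
        # Add the three 2-bit counts into a per-column neighbour count.
        b0, c_lo = add3(lo_u, lo_d, lo_c)
        s1, c1 = add3(hi_u, hi_d, hi_c)
        b1, c2 = add2(s1, c_lo)
        high = c1 | c2                      # neighbour count >= 4
        # Life rule for every column at once:
        # alive next = (count == 3) or (alive and count == 2)
        out.append(b1 & (b0 | cur) & ~high & mask)
    return out

blinker = [0b00000, 0b00000, 0b01110, 0b00000, 0b00000]
print([bin(r) for r in life_step(blinker, 5)])   # → vertical blinker
```

The same adder structure maps onto chained blitter passes, which is presumably how the 12 fps A500 version works.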
Title: Re: a golden age of Amiga
Post by: actung_bab on February 09, 2012, 10:56:53 PM
Quote from: HenryCase;679892
Thank you for your ideas Mrs Beanbag.

With regards to FPGAs, you are correct that currently they are often used for rapid prototyping. However, they are occasionally used in commercial products, and again I must stress that memristors bring a number of advantages to typical FPGAs that will allow the performance to get much closer to fixed-function devices. Allow me to explain...

The simple explanation for current FPGAs is that they are organised into code blocks called logic elements. You can see a diagram of one here:
http://www.eecg.toronto.edu/~pc/research/fpga/des/Logic_Element/logic_element.html

Through configuring these logic elements you change the function of the FPGA to fit your needs. However, the architecture is sub-optimal compared to ASICs, as the extra bulk required to manage the re-programmability limits their logic density (in other words, ASICs allow circuitry to be simplified).

However, using memristors in FPGAs has the potential to make FPGAs a lot more efficient. There are a few ways to build FPGAs using memristors, but let's first focus on the first way described in the video I shared. The simple way to think about this is that you have two layers in the chip; on one layer you have transistors, on the other layer you have memristors. Memristors act as the wiring between the transistors. They are ideally suited to this. Memristors are electrically controlled variable resistors that remember their resistance. Increase the resistance high enough and you have a 'blocked' connection. Reduce the resistance and you have an 'open' connection. The circuitry to manage the memristor wiring would be minimal, and the simpler connections would allow more efficient designs to be implemented on FPGAs.

As suggested, this is just one way to implement FPGAs utilising memristors. The other way hinted at in the video was to use implication logic. What has been discovered is that memristors can do logic by themselves, you don't need transistors at all, but you need to use a different form of logic. My knowledge on implication logic is sketchy at the moment, but from what was suggested in the video, the preliminary work done at HP on this indicates that when compiling C code, you end up with smaller binaries using implication logic. It's an area I need to research more, but I should learn to walk before I can run! ;-)

Use of RAM for reconfigurable computers is an interesting idea, but I'm not quite sure how it would work. Could you explain more?



Yes, I intended the CPU to be programmable in the way you suggest. When you say we could go further than an FPGA design here, what do you have in mind?

I understand what you're saying on a very basic level. What is the difference between an FPGA and the simple PIC controller chips I've seen my friend program? I know those have very limited storage, but it seems you can do a lot with less, which is what you're saying.
Apart from the elegance of working with this, what are the practical benefits? Lower power use?
And could you use advanced C with this? How is this different from an FPGA? I kind of don't understand what an FPGA is; I know it's a programmed array, but how does it differ from a CPU?
It's been an interesting read, keep it up.

I remember a friend who was into computers before the Amiga came out; he had an Amstrad 128. He talked about CPUs and how many bits make up a byte, and it has stayed with me all this time.
Some people just make this stuff great to learn about. Kevin Cameron, the guy who writes on engine design, is in the same vein; he makes complicated things seem simple enough that you want to learn more and more.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 09, 2012, 11:14:43 PM
Quote from: HenryCase;680011
@Mrs Beanbag

I quoted this particular text, as I think it shows I've confused you by using the term FPGA. The FPGAs that are possible with memristors do not need to follow the same restrictions that traditional FPGA designs have.

For the conversation to move forward, I think it is absolutely vital that you understand the benefits that memristors could bring.

Ok memristors are very exciting, I'll give you that.  I did watch the long video after I composed my previous reply, and it's got me thinking even more.  I need to get my head round this "imaginary current" that you see in passive circuit analysis, because as much as it works, it makes very little sense.  I suspect there is something even more complex going on behind that but when I try to research it it's kind of a brick wall.  I suspect that SU(2) might come into play at some point, and we can start inventing some really weird passives.  But I'm struggling to devise a Lorentz Invariant formulation of Ohm's Law now...

But, what I propose with completely reconfigurable FPGAs is already possible with existing technologies.  In fact now I think about it I reckon it's possible to do it in an FPGA.  META.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 09, 2012, 11:19:09 PM
Quote from: actung_bab;680020
I understand what you're saying on a very basic level. What is the difference between an FPGA and the simple PIC controller chips I've seen my friend program? I know those have very limited storage, but it seems you can do a lot with less, which is what you're saying.
Apart from the elegance of working with this, what are the practical benefits? Lower power use?

A PIC is a normal, simple CPU with some flash ROM on the chip.  You just put a program there and it goes.  It is programmed in C or assembly language and runs sequentially, and typically very slowly.  Although in theory you could design one with an ARM core or an x86 core or whatever, the ones you can buy are really simple and tiny.

An FPGA can be rewired electronically to be any kind of chip at all.  It doesn't run a program (unless you design a CPU core on it); it simply reroutes data and performs any logical operation you want, in any combination or sequence.  So you can make it do specialist tasks, and do umpteen things at once; the only limitation is the number of logic cells.  OK, that's not the only limitation, but it will do for explanation's sake.
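The "rewiring" distinction can be made concrete with a toy model of one FPGA logic element (this is a simplified illustration, not any vendor's architecture): any Boolean function of four inputs is just 16 stored bits, and configuring the chip means choosing those bits rather than running a program.

```python
# Toy model of one FPGA logic element: a 4-input look-up table (LUT4).
# Any Boolean function of 4 inputs is 16 stored bits; "rewiring" the
# chip means choosing those bits, not executing instructions.

class Lut4:
    def __init__(self, truth_table):   # 16-bit int: bit i = f(index i)
        self.tt = truth_table

    def __call__(self, d, c, b, a):    # d is the most significant input
        idx = (d << 3) | (c << 2) | (b << 1) | a
        return (self.tt >> idx) & 1

# The same cell configured two different ways:
xor4 = Lut4(0b0110100110010110)   # bit i = parity of index i -> 4-input XOR
and4 = Lut4(0b1000000000000000)   # only index 15 is 1        -> 4-input AND

print(xor4(1, 0, 1, 1))   # → 1  (three ones, odd parity)
print(and4(1, 1, 1, 1))   # → 1
```

A real logic element adds a flip-flop and routing, but the LUT is the programmable heart of it.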
Title: Re: a golden age of Amiga
Post by: HenryCase on February 10, 2012, 06:58:49 AM
Quote from: Mrs Beanbag;680022
I need to get my head round this "imaginary current" that you see in passive circuit analysis


I'm sorry, what imaginary current are you referring to?

Quote from: Mrs Beanbag;680022

But, what I propose with completely reconfigurable FPGAs is already possible with existing technologies.  In fact now I think about it I reckon it's possible to do it in an FPGA.  META.


The function of present-day FPGA devices is fixed at boot time; how do you intend to get around this? You may be able to get around it using multiple fast-booting FPGAs (see the article here: http://electronicdesign.com/article/digital/fpgas-boot-in-a-flash15649.aspx ), is that what you intended?
Title: Re: a golden age of Amiga
Post by: HenryCase on February 10, 2012, 07:31:45 AM
Quote from: actung_bab;680020
I understand what you're saying on a very basic level. What is the difference between an FPGA and the simple PIC controller chips I've seen my friend program? I know those have very limited storage, but it seems you can do a lot with less, which is what you're saying.
Apart from the elegance of working with this, what are the practical benefits? Lower power use?
And could you use advanced C with this? How is this different from an FPGA? I kind of don't understand what an FPGA is; I know it's a programmed array, but how does it differ from a CPU?
It's been an interesting read, keep it up.

I remember a friend who was into computers before the Amiga came out; he had an Amstrad 128. He talked about CPUs and how many bits make up a byte, and it has stayed with me all this time.
Some people just make this stuff great to learn about. Kevin Cameron, the guy who writes on engine design, is in the same vein; he makes complicated things seem simple enough that you want to learn more and more.


Thank you for your interest and support actung_bab. :)

With regards to the practical benefits, there are many areas of the system we haven't discussed yet, but the main point I've tried to get across so far is the benefits that come from blurring the lines between hardware and software.

If you'd like to know about the benefits of low power use in particular: as stated before, power usage is a concern across the whole computing industry. With regards to the data on lower power draw, we'll have to wait until the FPGAs are built (which is why I said 'potentially' lower power draw); however, to give you a ballpark figure, the estimates I've seen say we're looking at 1.5 to 2 times lower power usage compared to equivalent transistor-based circuitry.

If you'd like to learn more about memristors, here are two articles worth looking at. One is less technical, the other is more technical. The less technical one:
http://highscalability.com/blog/2010/5/5/how-will-memristors-change-everything.html

The more technical one:
http://cadlab.cs.ucla.edu/~cong/papers/nano11.pdf

Hope this has been useful in broadening your understanding. Feel free to ask any further questions you have about this.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 10, 2012, 11:11:29 AM
Quote from: HenryCase;680050
I'm sorry, what imaginary current are you referring to?

Well, we all know V=IR, right?  Ohm's law.  Well, if you let R be a complex number (we write it as Z instead and call it "impedance" rather than resistance) you can model inductance and capacitance as well.  The voltage and current end up complex as well, of course, which makes no sense, but nevertheless it works.  I don't know what the "impedance" of a memristor would be; I can only surmise that this simple "electronic theory hack" isn't quite up to the task of representing it.  The problem is that it's quite ad hoc; as far as I can tell it's not properly derived from fundamental laws, it's just made up and used because it works.
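The complex-impedance bookkeeping described above can be sketched numerically. This is a minimal illustration with arbitrary component values: resistor, inductor and capacitor each contribute a term to Z, and V = IZ is solved with complex arithmetic.

```python
# Ohm's law with generalised (complex) impedance:
#   Z_R = R,  Z_L = jwL,  Z_C = 1/(jwC)
# Component values below are arbitrary, for illustration only.
import cmath
import math

R, L, C = 100.0, 10e-3, 1e-6     # ohms, henries, farads
f = 1000.0                       # drive frequency in hertz
w = 2 * math.pi * f

Z = R + 1j * w * L + 1 / (1j * w * C)   # series RLC impedance
V = 5.0                                  # 5 V source at phase 0
I = V / Z                                # the complex "current"

# The physically meaningful quantities are magnitude and phase:
print(abs(I), math.degrees(cmath.phase(I)))
```

The imaginary parts never appear on a meter; only |I| and the phase angle between V and I are measurable, which is exactly the "sweep it under the rug" step being complained about.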

Quote
The function of present day FPGA devices is fixed at boot time, how do you intend to get around this? You may be able to get around it using multiple fast booting FPGAs (see article here: http://electronicdesign.com/article/digital/fpgas-boot-in-a-flash15649.aspx ), is that what you intended?

You can configure the LUTs to act as register files.  The same LUTs that would normally be used to hold your fixed designs can still be changed internally.  They are Read/Write.  What is needed is a design scheme that would let you do this in a useful way.

Whether our FPGA works on memristors or not, we need a design.  You can't just throw memristors at it and magically it becomes reconfigurable on the fly.  It might be reconfigurable more quickly, but it will still have the same limitations.  It's not the SRAM that's the problem.  There are strategies for partial reconfiguration, but they all assume the design is fed in from some outside source.

You can design a DMA controller in an FPGA.  If your design can access external memory, it can pump data in and populate its own LUTs.  I think a good analogy might be something like Conway's Game of Life.  (Surprisingly, it is Turing complete!  In fact I'm amused by the idea that, given a board large enough, one could simulate John Conway.  But I digress.)  The external bootstrap circuit feeds in a small "agent" that has a DMA channel and a ruleset; the rest of the unconfigured cells are basically its playground, where it can wander about, pull in design blocks through its DMA and write them to the LUTs surrounding it.  We'd need it to be able to grow and branch and create paths that packets can be sent along.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 10, 2012, 12:30:32 PM
Quote from: Mrs Beanbag;680056
Well, we all know V=IR, right?  Ohm's law.  Well, if you let R be a complex number (we write it as Z instead and call it "impedance" rather than resistance) you can model inductance and capacitance as well.  The voltage and current end up complex as well, of course, which makes no sense, but nevertheless it works.  I don't know what the "impedance" of a memristor would be; I can only surmise that this simple "electronic theory hack" isn't quite up to the task of representing it.  The problem is that it's quite ad hoc; as far as I can tell it's not properly derived from fundamental laws, it's just made up and used because it works.


Let me put it to you like this, do you understand how memristors work at the molecular level? Please watch that 6 minute video I posted before:
http://www.youtube.com/watch?v=rvA5r4LtVnc

There's nothing mysterious about this; all that is happening is that electrons can be made to move between two different materials based on the direction of the current applied across them. This device happens to exhibit the properties described for a memristor, so unsurprisingly they called it a memristor. In simple terms, a memristor is a device in which the flux and charge affect each other.
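The usual textbook formulation of this behaviour is HP's linear-drift model, in which the resistance depends on how much charge has flowed through the device. A numerical sketch (parameter values are illustrative, not taken from any datasheet):

```python
# Numerical sketch of the linear-drift memristor model: the device is a
# doped/undoped sandwich, and the doped fraction w drifts with the
# current, so resistance "remembers" the charge that has passed.
# All parameter values are illustrative only.

R_ON, R_OFF = 100.0, 16e3     # fully doped / fully undoped resistance (ohms)
D = 10e-9                     # device thickness (m)
MU = 1e-14                    # assumed dopant mobility (m^2 s^-1 V^-1)

def simulate(currents, dt=1e-3, w=0.5):
    """w = doped fraction (0..1); returns the resistance at each step."""
    rs = []
    for i in currents:
        r = R_ON * w + R_OFF * (1 - w)    # two resistors in series
        rs.append(r)
        # State drifts in proportion to the current through the device,
        # clamped to the physical range [0, 1].
        w = min(1.0, max(0.0, w + (MU * R_ON / D**2) * i * dt))
    return rs

# Positive current lowers the resistance; negative current raises it:
forward = simulate([1e-6] * 5)
backward = simulate([-1e-6] * 5)
print(forward[0] > forward[-1], backward[0] < backward[-1])   # → True True
```

Reversing the current direction reverses the resistance change, which is the "electrons moving between two materials" picture in the video expressed as a state equation.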

Quote from: Mrs Beanbag;680056

Whether our FPGA works on memristors or not, we need a design.  You can't just throw memristors at it and magically it becomes reconfigurable on the fly.  


Oh dear. I'm not throwing memristors at FPGAs and 'magically' expecting it to be reconfigurable, I know for a fact that memristors will make FPGA devices reconfigurable, because THE PEOPLE THAT DISCOVERED THE MEMRISTOR ARE SAYING THE SAME THING. Please see here:
http://pubs.acs.org/doi/abs/10.1021/nl901874j

Please stop trying to shoehorn your existing knowledge into this new model, and please try to see that use of memristors can alter the architectural possibilities for FPGAs. Thanks.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 10, 2012, 01:14:30 PM
Quote from: HenryCase;680060
Let me put it to you like this, do you understand how memristors work at the molecular level? Please watch that 6 minute video I posted before:
http://www.youtube.com/watch?v=rvA5r4LtVnc

No I don't, and I watched the video and I still don't.  Solid state physics was never my best subject.

Quote
In simple terms, a memristor is a device where the flux and charge affect each other.

Ok well, whatever.  It's still a device that doesn't fit into the generalised impedance model, which I'm saying is inadequate and makes no sense because it was made up ad-hoc to describe only what we knew already.  I'm saying that a better theory might predict even more different types of passive components.

Quote
Oh dear. I'm not throwing memristors at FPGAs and 'magically' expecting it to be reconfigurable, I know for a fact that memristors will make FPGA devices reconfigurable, because THE PEOPLE THAT DISCOVERED THE MEMRISTOR ARE SAYING THE SAME THING. Please see here:
http://pubs.acs.org/doi/abs/10.1021/nl901874j

Please stop trying to shoehorn your existing knowledge into this new model, and please try to see that use of memristors can alter the architectural possibilities for FPGAs. Thanks.

Memristors won't make FPGAs reconfigurable, or anything else.  Well, maybe they will make them smaller.  It's a switch.  Like the guy is saying, you can replace ten transistors with one memristor (I'll take his word for it).  Maybe that opens up all sorts of new possibilities, but a possibility isn't a device.  You still have to DESIGN a dynamically reconfigurable FPGA, whether you use memristors or not.  No matter what components you use, even if it's alien technology a million years in advance of our own, how to design an "FPGA-like device" that you can reconfigure while it's running is still an architectural challenge.  And I'm saying it's possible already, even without memristors.  (With memristors, it would be even better, I guess.)

I can't read the full text of that article you posted, but the abstract doesn't mention anything about dynamic reconfiguration.  "Reconfigurable logic" could describe present FPGAs.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 10, 2012, 01:58:32 PM
Quote from: Mrs Beanbag;680063
No I don't, and I watched the video and I still don't.  Solid state physics was never my best subject.

Ok well, whatever.  It's still a device that doesn't fit into the generalised impedance model, which I'm saying is inadequate and makes no sense because it was made up ad-hoc to describe only what we knew already.  I'm saying that a better theory might predict even more different types of passive components.


I'm all for entertaining alternative theories, but I'd suggest you'll have a better time forming successful theories about what's happening with these devices if you understand how the electrons are moving in them. Let's start with the basics: what makes a material positively or negatively electrically charged?

Quote from: Mrs Beanbag;680063

Memristors won't make FPGAs reconfigurable, or anything else.  Well maybe it will make them smaller.  It's a switch.  Like the guy is saying, you can replace ten transistors with one memristor (I'll take his word for it).  Maybe that opens up all sorts of new possibilities, but a possibility isn't a device.  You still have to DESIGN a dynamically reconfigurable FPGA, whether you use memristors or not.  No matter what components you use, even if it's alien technology that's a million years in advance of our own, how to design an "FPGA-like device" that you can reconfigure while it's running is still an architectural challenge.  And I'm saying it's possible already, even without memristors.  (With memristors, it would be even better, I guess.)

I can't read the full text of that article you posted, but the abstract doesn't mention anything about dynamic reconfiguration.  "Reconfigurable logic" could describe present FPGAs.


Again, you persist by insisting memristors won't make FPGAs reconfigurable. I don't really know how bluntly I should tell you that you're wrong before you'll listen. Take a look at this:
http://www.pnas.org/content/106/6/1699.full

I'll even quote you the relevant text to save you reading the whole thing:
Quote
A completely different type of demonstration is the conditional programming of a memristor by the integrated circuit in which it resides, which illustrates a key enabler for a reconfigurable architecture (21, 22, 25), memristor based logic (24) or an adaptive (or “synaptic”) circuit that is able to learn (26, 27). Based on a portion of the hybrid circuit described above, we showed that the output voltage from an operation could be used to reprogram a memristor inside the nanocrossbar array, which could have been used as memory, an electronic analog of a synapse or simply interconnect, to have a new function.


Do you believe me now? The reason I'm ignoring your hack to try to implement reconfigurable FPGAs using what we have now is that it's sub-optimal compared to memristor-based devices. I hope you agree.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 10, 2012, 02:52:09 PM
Quote from: HenryCase;680071
I'm all up for entertaining alternative theories, but would suggest you'll have a better time of forming successful theories about what's happening with these devices if you understand how the electrons are moving in this device. Let's start with the basics: what makes a material positively or negatively electrically charged?

I don't know whether you're leading me or patronising me here.  Why, it's electrons, of course.  I'm not trying to form a theory of "these devices" in particular.  I know there are deeper theories in solid state physics to account for many different things, they are a little over my head to be honest with you.  I don't want to have to worry about individual electrons.  All I want to get my head round for now is what the imaginary components of current and voltage physically represent.  Usually the textbooks just shrug it off and tell you the imaginary components don't really mean anything physical at all, it's just some maths to sweep under the rug at the end of your calculations.  Which makes no sense.  It's all a bit fudged.  If I could just work out how to derive these things from Maxwell's Equations...

Quote
Again, you persist by insisting memristors won't make FPGAs reconfigurable. I don't really know how bluntly I should tell you that you're wrong before you'll listen.

Do be as blunt as you feel is necessary.  But FPGAs already are reconfigurable; that's what "Field Programmable" means.  The problem is reconfiguring one while it's still running.  The quote that caught my attention was this:

Quote
Simulations of these architectures have shown that by removing the transistor-based configuration memory and associated routing circuits from the plane of the CMOS transistors and replacing them with a crossbar network in a layer of metal interconnect above the plane of the silicon, the total area of an FPGA can be decreased by a factor of 10 or more while simultaneously increasing the clock frequency and decreasing the power consumption of the chip (21 (http://www.pnas.org/content/106/6/1699.full#ref-21), 22 (http://www.pnas.org/content/106/6/1699.full#ref-22)).

In other words, the same thing, but better.  Memristors have other more specific advantages, from what I gather, if you want to build something like a hardware neural network.  Which I don't, personally.  So I think we might be talking at crossed purposes here.

Quote
Do you believe me now? The reason I'm ignoring your hack to try to implement reconfigurable FPGAs using what we have now is that it's sub-optimal compared to memristor-based devices. I hope you agree.

Well of course I believe you, I never doubted that it's possible to make a self-reconfigurable device.  But you would still have to design one.  They have shown that it's possible, but I still don't see that it isn't possible using transistors.  Memristors perhaps simplify the design, but I've still yet to see a detailed description of how this would actually work, functionally.  There is plenty about the strange properties of the memristor as a component.

Everything is sub-optimal.  Current FPGAs are maybe suboptimal by not being tomorrow's technology, but memristor devices are suboptimal for not actually existing yet.  I could go and buy myself a Virtex 6 tomorrow and implement my design on it.  I don't know where you are going to get your memristor-based technology from.  And I don't see how it is a "hack" to implement something interesting or useful on existing hardware.  I'm not proposing the use of undocumented features here.  There is nothing "hacky" about working with what you've got.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 10, 2012, 04:36:19 PM
Quote from: Mrs Beanbag;680081
I don't know whether you're leading me or patronising me here.  Why, it's electrons, of course.  I'm not trying to form a theory of "these devices" in particular.  I know there are deeper theories in solid state physics to account for many different things, they are a little over my head to be honest with you.  I don't want to have to worry about individual electrons.  All I want to get my head round for now is what the imaginary components of current and voltage physically represent.  Usually the textbooks just shrug it off and tell you the imaginary components don't really mean anything physical at all, it's just some maths to sweep under the rug at the end of your calculations.  Which makes no sense.  It's all a bit fudged.  If I could just work out how to derive these things from Maxwell's Equations...


I agree with you, I think it's important to understand how voltage, current and other electric phenomena work at a fundamental level, and I don't like it when learning sources dismiss an understanding at this level as unnecessary.

The best analogy for voltage and current I can think of off the top of my head is rain: if you think of rain as a circuit, what happens in the circuit depends on how much water vapour is held at higher elevation (clouds) and on the rate at which raindrops fall. Voltage can be thought of as a difference in potential energy, and current as the rate at which that potential energy is used. So for rainclouds, the voltage is the height and amount of water vapour in the clouds, and the current is the amount of water falling to the ground at any point in time.

In electric circuits, electrons do all the work. Current and voltage are just two ways of describing the state of the electrons in the circuit.
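To put toy numbers on the relationship the analogy describes, here is Ohm's law (I = V/R) in a purely illustrative snippet; the values are made up for the example:

```python
# Illustrative only: Ohm's law, I = V / R.
# Voltage plays the role of the "pressure" held in the clouds,
# current the rate at which the water actually falls.

def current(voltage_v, resistance_ohm):
    """Return the current in amps for a given voltage and resistance."""
    return voltage_v / resistance_ohm

print(current(9.0, 3.0))  # a 9 V source across 3 ohms drives 3 A
```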

This is the best introduction to electronics I've found so far. If you're already familiar with Maxwell's equations it may be covering ground you are already familiar with, but I'll share it just in case:
http://lcamtuf.coredump.cx/electronics/

Quote from: Mrs Beanbag;680081

In other words, the same thing, but better.  Memristors have other more specific advantages, from what I gather, if you want to build something like a hardware neural network.  Which I don't, personally.  So I think we might be talking at crossed purposes here.


A chip that can implement a neural network needs to be able to change its own structure, otherwise it wouldn't be able to 'learn'. It's a specific application of a run-time configurable device.
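The "learning is reconfiguration" point can be illustrated in software with a toy perceptron (a sketch, not anyone's proposed hardware; in a memristor crossbar the weights would be physical conductances rather than numbers in memory):

```python
# Toy perceptron: 'learning' just means rewriting the weights in place,
# which is the kind of self-modification a memristor array does physically.

def step(x):
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs; returns learned weights/bias."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = t - y
            # the weight update *is* the reconfiguration
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# learn logical AND
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_samples)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```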

Quote from: Mrs Beanbag;680081

Well of course I believe you, I never doubted that it's possible to make a self-reconfigurable device.  But you would still have to design one.


I don't intend to design one. I intend to buy one after they're manufactured. I don't even need to design my own PCB, a reference platform should suffice for a proof of concept. I've got plenty of research to do before I have a chance of implementing the real device, so this delay in availability is not a problem IMO.
Title: Re: a golden age of Amiga
Post by: Fats on February 10, 2012, 06:46:24 PM
Quote from: HenryCase;680011
@Tripitaka

Now, I hinted before that there are some issues with memristors replacing RAM at the moment, but considering how new memristors are, I anticipate memristors will be used in RAM in the future, once these challenges are overcome.


Don't believe the hype. People in the microelectronics world have been searching for the holy grail, i.e. a universal memory, for a long time already. The previous candidate was MRAM (magnetic RAM), but it did not live up to the hype.
Maybe memristors are the next holy grail, but I think the chance is small. The problem is that memristors are a kind of resistive memory: they depend on changing the solid state of a material to get a change in resistance. I think this will always be more involved than putting a few electrons on a small capacitor, which is the basis of DRAM.
Less hype-driven info is here (http://www.eejournal.com/archives/articles/20120116-memristor/)

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: Fats on February 10, 2012, 06:47:52 PM
Quote from: HenryCase;680050

The function of present day FPGA devices is fixed at boot time, how do you intend to get around this? You may be able to get around it using multiple fast booting FPGAs (see article here: http://electronicdesign.com/article/digital/fpgas-boot-in-a-flash15649.aspx ), is that what you intended?


look here (http://www.xilinx.com/tools/partial-reconfiguration.htm)
Staf.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 11, 2012, 10:53:33 AM
Quote from: Fats;680102
Don't believe the hype. People in the microelectronics world have been searching for the holy grail, i.e. a universal memory, for a long time already. The previous candidate was MRAM (magnetic RAM), but it did not live up to the hype.
Maybe memristors are the next holy grail, but I think the chance is small. The problem is that memristors are a kind of resistive memory: they depend on changing the solid state of a material to get a change in resistance. I think this will always be more involved than putting a few electrons on a small capacitor, which is the basis of DRAM.
Less hype-driven info is here (http://www.eejournal.com/archives/articles/20120116-memristor/)

greets,
Staf.

So, to paraphrase your main argument, you're saying "Because this earlier technology didn't live up to its promise, I doubt this newer technology will live up to its promise either". Forgive me if I take such a notion with a grain of salt; I prefer to assess each individual technology on its own merits.

Besides, I never used the term 'holy grail'; you chose to use this label, and it's not my fault if you choose to use such inaccurate labels. The point I've been trying to get across is that memristors can be very beneficial for improving FPGAs, which will be beneficial for the computer system I'm proposing. I've hinted a couple of times that a memristor-based RAM wouldn't be competitive with DRAM yet, but only mentioned this as an aside, as it's the performance improvements being brought to FPGAs that are relevant to this discussion.

However, as you brought it up, let's take a look at the challenges that memristors face to be a viable replacement for DRAM. The two main issues are:

1. Memristor-based RAM would currently be slower than DRAM.
2. The read-write lifecycles that memristors can achieve need to increase before they can replace DRAM.

Let's put some approximate numbers in place for the points above so we know the level of challenge we're looking at. When memristors were first discovered, there was talk that they were approximately 10x slower than DRAM. With regards to read-write lifecycles, current memristors have been shown to achieve approximately 1 million read-write cycles.

It's worth bearing in mind that memristors are a new technology, whereas DRAM is a mature technology. However, since the discovery of memristors there have been a lot of companies investing in R&D on this technology. Case in point: the speed. Back in 2008 we were looking at 10x slower performance. In 2012, we're now looking at equivalent write performance. See here:
http://www.bbc.co.uk/news/technology-16725529
Quote
Recently, the Japanese memory manufacturer Elpida announced it had produced a prototype ReRAM memory with speeds comparable to DRAM.

"Its most attractive feature is that it can read/write data at high speeds using little voltage," Elpida said in a press release.

"It has a write speed of 10 nanoseconds, about the same as DRAM.


So in some ways, the speed gap has been addressed. The remaining issue, then, is the read-write cycles. I anticipate the companies working on memristor tech for non-volatile storage will invest resources in improving the endurance of memristor devices, and these benefits should eventually reach a tipping point where memristors become 'good enough' to replace DRAM. For example, if the read-write lifecycle improves to the point where memristor-based RAM would last 5 years in continuous use, that should be good enough to enable widespread memristor-based RAM usage. Also, the improved capacity of memristor devices may help this change happen sooner. If a 32GB memristor device cost less than a 2GB DRAM, you could sell the memristor device as a 2GB DRAM replacement and use the massive redundancy to your advantage (effectively obtaining 16 million read-write cycles at current memristor endurance using wear levelling).
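The redundancy arithmetic in that last example can be spelled out; this sketch uses the post's own figures and idealises wear levelling as spreading writes perfectly evenly:

```python
# Idealised wear-levelling sums, using the figures quoted in the post.
PER_CELL_CYCLES = 1_000_000   # assumed endurance of one memristor cell
DEVICE_GB = 32                # raw capacity of the memristor part
ADVERTISED_GB = 2             # capacity it is sold as (DRAM replacement)

redundancy = DEVICE_GB // ADVERTISED_GB          # 16 spare copies of every block
effective_cycles = PER_CELL_CYCLES * redundancy  # writes rotated across copies

print(redundancy, effective_cycles)  # 16 16000000
```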

With all that said, memristor-based RAM is off topic for what is being proposed, it's the improvements to FPGAs that matter here. I hope we can get back on topic now.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 11, 2012, 10:58:50 AM
Quote from: Fats;680103
look here (http://www.xilinx.com/tools/partial-reconfiguration.htm)
Staf.


Interesting, thanks for the link. Could you help further by advising on the lowest cost FPGA that offers partial reconfiguration?
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 11, 2012, 03:24:22 PM
Quote from: HenryCase;680093
This is the best introduction to electronics I've found so far. If you're already familiar with Maxwell's equations it may be covering ground you are already familiar with, but I'll share it just in case:
http://lcamtuf.coredump.cx/electronics/

I'll read that later, it may be "concise" but it's still quite a lot of reading!

Quote
A chip that can implement a neural network needs to be able to change its own structure, otherwise it wouldn't be able to 'learn'. It's a specific application of a run-time configurable device.

Indeed, but a neural network isn't programmable in the traditional sense.  Rather, you have to train it, which is a slow process.  Just like you can't retrain a plumber to be a heart surgeon in an afternoon, you can't retrain a GPU neural net to be a sound chip in a nanosecond.  I don't want a chip that learns, I want a chip that does what I tell it; Butlerian Jihad and all that.

Neural Networks are useful and interesting for all kinds of reasons, but it's not the problem I'm trying to solve.

Quote
I don't intend to design one. I intend to buy one after they're manufactured. I don't even need to design my own PCB, a reference platform should suffice for a proof of concept. I've got plenty of research to do before I have a chance of implementing the real device, so this delay in availability is not a problem IMO.

Well here's the rub.  But I do intend to design one... well, I intend to idly speculate about one... but if I could put my design on a standard FPGA I could put it on a memristor-based FPGA as well.

The problems that I'm trying to solve are architectural, rather than electronic.  We know it's possible for a cell to reconfigure itself, the problems are:
1) how does a cell know when to reconfigure itself?
2) how does it know what to reconfigure itself as?
3) how does the relevant data get there?

I'm thinking of a scheme based on systolic arrays.
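Since systolic arrays come up here: a systolic array is a grid of simple cells that each communicate only with their neighbours, with data pulsing through in lock-step. As a rough illustration only (a toy software simulation of one classic design, not the scheme being proposed), here is a weight-stationary systolic FIR filter: partial sums advance one cell per tick, samples advance at half that speed via two delay registers per cell, so the sum for output y(t) meets sample x(t-k) exactly at cell k:

```python
def systolic_fir(weights, xs):
    """Simulate a 1-D weight-stationary systolic FIR/convolution array."""
    n = len(weights)
    x1 = [0] * n          # first sample-delay register in each cell
    x2 = [0] * n          # second sample-delay register in each cell
    acc = [0] * n         # partial-sum register in each cell
    out = []
    for t in range(len(xs) + 2 * n):
        x_in = xs[t] if t < len(xs) else 0   # feed zeros to drain the pipe
        acc_in = 0                           # a fresh partial sum enters each tick
        new_x1, new_x2, new_acc = [], [], []
        for k in range(n):
            new_acc.append(acc_in + weights[k] * x_in)  # multiply-accumulate
            new_x1.append(x_in)
            new_x2.append(x1[k])
            # the next cell only ever sees this cell's registered outputs
            # from the previous tick: neighbour-to-neighbour, no broadcast
            x_in, acc_in = x2[k], acc[k]
        x1, x2, acc = new_x1, new_x2, new_acc
        out.append(acc[n - 1])               # value leaving the last cell
    return out[n - 1 : n - 1 + len(xs) + n - 1]   # drop pipeline-fill ticks

print(systolic_fir([1, 2], [3, 4, 5]))  # [3, 10, 13, 10]
```

The appeal for reconfigurable hardware is that every cell is identical and only talks to its neighbours, so the routing problem (point 3 above) stays local.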
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 11, 2012, 03:28:23 PM
Quote from: Fats;680103
look here (http://www.xilinx.com/tools/partial-reconfiguration.htm)
Staf.

Partial reconfiguration is possible, but it's still externally driven (by software running on a CPU).  What I'm trying to devise is some mechanism by which the FPGA itself (or rather, the configuration thereon) would drive its own reconfiguration.  Although if this is already possible in hardware, maybe two FPGAs could help each other out.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 11, 2012, 07:05:14 PM
Quote from: Mrs Beanbag;680174
I'll read that later, it may be "concise" but it's still quite a lot of reading!


Yes, it is quite a bit to read. Hope it's useful to you. :)

Quote from: Mrs Beanbag;680174

Neural Networks are useful and interesting for all kinds of reasons, but it's not the problem I'm trying to solve.


Neural networks aren't the problem I'm trying to solve either. However, a chip that can model neural networks without 'software' in the traditional sense is one that is reprogrammable, and it's this reprogrammability that I was trying to highlight.  

Anyway, I'll stop going on about memristors now, all I hope is that I've done some good in raising awareness of what's incoming in FPGA tech.

Quote from: Mrs Beanbag;680174

Well here's the rub.  But I do intend to design one... well, I intend to idly speculate about one... but if I could put my design on a standard FPGA I could put it on a memristor-based FPGA as well.


Fair play to you! Of course I'm pleased to hear of your intentions, as like you say it'll allow you to get the ball rolling quicker.

Quote from: Mrs Beanbag;680174

The problems that I'm trying to solve are architectural, rather than electronic.  We know it's possible for a cell to reconfigure itself, the problems are:
1) how does a cell know when to reconfigure itself?
2) how does it know what to reconfigure itself as?
3) how does the relevant data get there?

I'm thinking of a scheme based on systolic arrays.


To me, the answers to those three problems are found in the OS design, which is one subject we haven't talked much about yet. To date, I've not worked on the low level issues you're discussing, but would be interested in exploring the design possibilities with you. Could you tell me what systolic arrays are?

Thought you might be interested in Tabula FPGAs, Mrs Beanbag. Is this hardware in line with what you're looking for?
http://www.popsci.com/technology/article/2011-04/reprogrammable-chips-could-allow-you-update-your-hardware-just-software

At the moment I'm working through the design of the file system. It's still early days, I'm currently working through the implications of taking Plan 9's 'everything is a file' notion (to people who know about this approach from Unix/Linux, Plan 9 takes this approach further), and morphing it into 'everything is an object'. The plan for this is to make every component in the OS as reusable as possible. So at one end of the system we'll be blurring the lines between hardware and software, and at the other end of the system we'll be blurring the lines between the OS and the applications. If you're interested to learn more about the plans for the OS I'm talking about, please feel free to ask me.
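As a purely hypothetical sketch of the 'everything is an object' idea (all names here are invented for illustration and are not taken from the OS design being described): every resource in the system could expose one small uniform interface, so the same tools work on files, devices and services alike, Plan 9 style:

```python
import time

class Resource:
    """Uniform interface: every resource in the system can be read/written."""
    def read(self):
        raise NotImplementedError
    def write(self, data):
        raise NotImplementedError

class MemoryFile(Resource):
    """An ordinary file, held in memory."""
    def __init__(self):
        self.data = b""
    def read(self):
        return self.data
    def write(self, data):
        self.data += data

class Clock(Resource):
    """A 'device' served through the same interface as a file."""
    def read(self):
        return str(time.time()).encode()
    def write(self, data):
        raise PermissionError("clock is read-only")

def cat(resource):
    """Tools see only the uniform interface, never the concrete type."""
    return resource.read()

f = MemoryFile()
f.write(b"hello")
print(cat(f))          # b'hello'
```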
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 11, 2012, 08:15:59 PM
Quote from: HenryCase;680190
Yes, it is quite a bit to read. Hope it's useful to you. :)
Sadly it's the same story, "this is Ohm's law.  It just is."  It's empirical, there isn't really a derivation.  But never mind.  I've made it Lorentz Covariant and squashed it down to two dimensions (length+time), compared it to V=IZ and... I've got it in terms of Quaternions.

We can write it as V=IZ where Z = sigma_1.R - i sigma_3.X

Well that's kind of neat, but what in Bob's name do sigma_0 and sigma_2 represent?

Still working... I've got an equation for Z as a rank 2 tensor in terms of the current and charge distribution in the device, but it might take a little while to solve.  For resistance it is easy because the current is constant in a straight line.  For inductance we can represent it as current in a spiral (a circle plus a length displacement) and that makes sense.  A capacitor must be current with a dip in the middle (there would be equal current at both ends but with a polarisation of charge in the middle.)  Now are there any other interesting shapes we can bend a wire into?
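For reference, the sigmas here are presumably the Pauli matrices together with the identity; the standard definitions are below, while the attachment of R and X to them is of course the conjecture being explored, not established theory:

```latex
\sigma_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\qquad Z = \sigma_1 R - i\,\sigma_3 X
```

The quaternion connection mentioned is standard: the quaternion units can be represented as -i\sigma_1, -i\sigma_2, -i\sigma_3, each squaring to -1.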

Quote
To me, the answers to those three problems are found in the OS design, which is one subject we haven't talked much about yet. To date, I've not worked on the low level issues you're discussing, but would be interested in exploring the design possibilities with you. Could you tell me what systolic arrays are?

Thought you might be interested in Tabula FPGAs, Mrs Beanbag. Is this hardware in line with what you're looking for?
http://www.popsci.com/technology/article/2011-04/reprogrammable-chips-could-allow-you-update-your-hardware-just-software

Now that's interesting. I guess it's kind of similar in principle (switching cores in and out), but my idea is to be able to fetch them from off-chip, whereas this has a certain number in reserve that it can switch in and out.  I could use that technique as well, I guess.

Quote
... So at one end of the system we'll be blurring the lines between hardware and software, and at the other end of the system we'll be blurring the lines between the OS and the applications. If you're interested to learn more about the plans for the OS I'm talking about, please feel free to ask me.

You know this sounds very much like my own idea for an OS from several years back.
Title: Re: a golden age of Amiga
Post by: Fats on February 11, 2012, 08:56:15 PM
Quote from: HenryCase;680159
So, to paraphrase your main argument, you're saying "Because this earlier technology didn't live up to it's promise, I doubt this newer technology will live up to its promise either". Forgive me if I take such a notion with a grain of salt, I prefer to assess each individual technology on its own merits.


Maybe I am biased; I have been working for more than 15 years at the microelectronics research and development institute imec (http://www.imec.be). Over those years I have seen several memory technologies pass by that claimed to be the next universal memory, i.e. one that could be used both as non-volatile memory and as main RAM.
At imec there has been a project on ReRAM for a few years now; I was even involved in a tape-out in this project. Only recently has HP started to hype their memristors, but as an insider I know there are still a lot of roadblocks ahead.
The term 'holy grail' I think I also got from a description used in an older eejournal article on universal memory (it is a really good site, and I advise anybody interested in the topic to follow it). I found it appropriate as you seemed to have fallen for the HP marketing/hype, but I did not in any way mean it pejoratively towards you.

Quote

However, as you brought it up, let's take a look at the challenges that memristors face to be a viable replacement for DRAM. The two main issues are:

1. Memristor-based RAM would currently be slower than DRAM.
2. Need to increase the read-write lifecycles that can be achieved with memristors before it can replace DRAM.


You forget the most important one:
3. Density and cost (the two are related, as the major cost of memory is how much silicon area it takes). Yield also drives the cost: if I put down billions of ReRAM cells, how many of them won't work?
And another one:
4. Power: how much energy is needed for a write operation.

It's these latter two that will decide whether ReRAM/memristors can replace DRAM or not. Solving 1 and 2 is just the precondition for getting enough investment money to start tackling 3 and 4.

My personal opinion is that ReRAM is a good candidate for the next non-volatile memory, but only if the predicted scaling stop for NAND flash finally becomes reality. I don't think it will replace DRAM.

Quote

With all that said, memristor-based RAM is off topic for what is being proposed, it's the improvements to FPGAs that matter here. I hope we can get back on topic now.


Do you know the term 'analog computer' (http://en.wikipedia.org/wiki/Analog_computer)? I think that is the direction you're heading in with your combination of memristors + FPGA. I am no expert in those, as the research topic had already mostly died out before I entered university in 1990. I think the main reason they failed is that they are too hard to program. Debugging a sequential program in a low-level or high-level language is already hard enough; doing it for a chip with hundreds or thousands of analog signals is, I think, something the human brain can hardly grasp or tackle.

Another thing I want to mention is that there are already NVM-based FPGAs, at the moment based not on ReRAM but on flash technology.

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: Fats on February 11, 2012, 09:07:01 PM
Quote from: Mrs Beanbag;680175
Partial reconfiguration is possible, but it's still externally driven (by software running on a CPU).


There is nothing stopping an FPGA from partially reconfiguring itself. One of the roadblocks is that the FPGA manufacturers want to keep their bitstream formats proprietary, so you have to use their software to generate them. But I think this problem can be solved with some clever reverse engineering (https://docs.google.com/viewer?a=v&q=cache:DLolRifLjNkJ:citeseerx.ist.psu.edu/viewdoc/download?doi%3D10.1.1.117.6043%26rep%3Drep1%26type%3Dpdf+from+bitstream+to+netlist&hl=nl&gl=be&pid=bl&srcid=ADGEESh3iS2wUFFAWiVD32u9VVeRpWVGOZyd281XpXLDLEWfgtugrhMTcIhiy3CSZIQZ7ZRZQmaIA9p8ZkarwpgLEeVPgaRuQSWXph1f4PU71VgpMXCqbRGHyoCnVWe61Zugy090uttY&sig=AHIEtbQkqcBYA9BXaEmqU3HB02U_h4_DoQ).

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: Fats on February 11, 2012, 09:11:10 PM
Quote from: HenryCase;680160
Interesting, thanks for the link. Could you help further by advising on the lowest cost FPGA that offers partial reconfiguration?


Unfortunately I am no expert on PCB board design etc.; I learned about partial reconfiguration from a presentation by a Xilinx guy at my work. I'm afraid you'll have to go through the Xilinx spec sheets.

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 12, 2012, 10:10:05 AM
Hang on a minute... I've got a Virtex 5 manual sitting on my shelf in here, I should probably have a look at it.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 12, 2012, 03:35:55 PM
Quote from: Mrs Beanbag;680200
Sadly it's the same story, "this is Ohm's law.  It just is."  It's empirical, there isn't really a derivation.


Sorry it wasn't as useful to you as I'd hoped.

Quote from: Mrs Beanbag;680200

Now that's interesting, I guess it's kind of similar in principle (to switch cores in and out) but my idea is to be able to fetch them from off chip, this has a certain number in reserve that it can switch in and out.  I could use that technique as well, I guess.


Yes, the Tilera approach is definitely interesting. It also seems to be reasonably affordable, judging from this video:
http://www.youtube.com/watch?v=lFghLrpPy6M
"Tad over $100 in 2000 lot quantities". Of course, the price will be higher for single units. Sadly the devboard is too pricey for me:
http://dangerousprototypes.com/2011/04/27/what-does-a-7500-dev-kit-look-like/

Quote from: Mrs Beanbag;680200

You know this sounds very much like my own idea for an OS from several years back.


Cool. What ideas did you have for your OS?
Title: Re: a golden age of Amiga
Post by: HenryCase on February 12, 2012, 03:36:41 PM
Quote from: Fats;680207
Unfortunately I am no expert on PCB board design etc.; I learned about partial reconfiguration from a presentation by a Xilinx guy at my work. I'm afraid you'll have to go through the Xilinx spec sheets.


Okay, thanks.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 12, 2012, 03:40:24 PM
Quote from: Fats;680204
Maybe I am biased; I have been working for more than 15 years at the microelectronics research and development institute imec (http://www.imec.be).


I'm not going to derail this thread further by countering your full post (as I have more to discuss than just memristors), but would like to point out that someone at your company thinks memristor-based RAM is worth researching:
http://www2.imec.be/be_en/research/sub-22nm-cmos.html
Quote
Advanced memory: DRAM, floating gate, resistive RAM


I take it you work in a different department?
Title: Re: a golden age of Amiga
Post by: Fats on February 13, 2012, 06:58:06 PM
Quote from: HenryCase;680274
I'm not going to derail this thread further by countering your full post (as I have more to discuss than just memristors), but would like to point out that someone at your company thinks memristor-based RAM is worth researching:
http://www2.imec.be/be_en/research/sub-22nm-cmos.html


I take it you work in a different department?


I can only quote from my previous post:
"My personal opinion is that ReRAM is a good candidate for the next non-volatile memory, but only if the predicted scaling stop for NAND flash finally becomes reality. I don't think it will replace DRAM."
I want to add that certain characteristics of ReRAM will probably find a niche market even if it doesn't replace flash NVM.

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: HenryCase on February 13, 2012, 08:17:53 PM
Quote from: Fats;680413
I can only quote from my previous post:
"My personal opinion is that ReRAM is a good candidate for the next non-volatile memory, but only if the predicted scaling stop for NAND flash finally becomes reality. I don't think it will replace DRAM."
I want to add that certain characteristics of ReRAM will probably find a niche market even if it doesn't replace flash NVM.


At the end of the day, neither of our opinions is going to change how the tech takes off, so let's let it stand on its own merits and bring this conversation back to what we can actually influence, i.e. the next evolution of personal computing. Do you have any opinions on what has been discussed so far (other than the memristor stuff)?
Title: Re: a golden age of Amiga
Post by: Fats on February 14, 2012, 08:32:02 PM
Quote from: HenryCase;680419
Do you have any opinions on what has been discussed so far (other than the memristor stuff)?


Yes, I do think the Amiga community is more enjoyable now than it was some time ago, but talking about a golden age of Amiga is a bridge too far IMHO.
:)

greets,
Staf.
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 14, 2012, 10:42:54 PM
We can dream, can't we?  And I like my dreams to be as detailed as possible.
Title: Re: a golden age of Amiga
Post by: Thorham on February 15, 2012, 12:04:06 AM
Quote from: Fats;680516
Yes, I do think the Amiga community is more enjoyable now than it was some time ago, but talking about a golden age of Amiga is a bridge too far IMHO.

Very true, especially if you want to make it happen using things that have nothing to do with Amiga in the first place. A golden age of Amiga without Amigas? Yeah, right :lol:
Title: Re: a golden age of Amiga
Post by: HenryCase on February 15, 2012, 03:09:05 PM
@Thorham
What's being discussed is not necessarily a golden age of computers called Amigas, but a new golden age of personal computing, the next evolutionary step from what we had before if you will.

If you weren't attracted to the revolutionary platforms of the past, then you won't be interested in any new revolutions that come along, that much is true; people who don't have that ambition can settle for nostalgia instead.
Title: Re: a golden age of Amiga
Post by: Thorham on February 15, 2012, 03:36:19 PM
@HenryCase:

Fair enough :)

Yes, I've lost interest in that completely. I don't care about the fastest computers anymore at all (my peecee is a 667 MHz P3), because I've become a software man :) This is incidentally why I still like Amigas: there's a lot of room for improvement software-wise (just look at the OS, for example), and it's challenging.

All this new hardware is very nice, but to me it's pointless to pursue if it's only going to run bloatware ports (yes, I think we should start from scratch, call me crazy if you want :)).
Title: Re: a golden age of Amiga
Post by: Mrs Beanbag on February 15, 2012, 08:39:01 PM
@Thorham

you know what, me too.  This is something of a paradox for me, because I love to think about solutions to problems such as how to make the fastest computer ever, but I personally have no use whatsoever for the solution...

I used to work in software with a guy who used to tell me there's no point optimising my code because "everyone has 3GHz CPUs and 4GB of RAM now", grr :(
Title: Re: a golden age of Amiga
Post by: HenryCase on February 17, 2012, 11:11:49 AM
@Mrs Beanbag
The thing about computing is that as power increases and algorithms evolve, tasks you'd previously class as too impractical to consider become much more accessible. It's hard to guess what level of performance improvement will be achieved, so it's also hard to predict which tasks will enter the realm of practicality. Therefore, you often find technology evolves with a 'build first, think of the applications later' approach.

Going back to your exploration of Ohm's Law, found this thread, thought you might find it useful:
http://www.physicsforums.com/showthread.php?t=179056