It is. It simply depends on one's ability. Maybe I can't do it, maybe you can't do it, but that doesn't make it impossible.
Not really. It is a matter of available mental capacity. The more you invest in the low-level details, the less is left for the high-level structure. Take it as a fact that you have only limited memory, limited overview, limited time. In other words, when writing assembly, you will be lost in detail and lose the overview of the really important infrastructure decisions.
And how exactly is that a problem? What I see as the main problem is when you're changing things all the time. That tells me that you didn't think it through thoroughly enough, and we've all been there.
What I hear from other developers, and what I have experienced myself, is that you're continuously refactoring code. It sounds wasteful, but it is simply the experience that, when designing the code, you made estimates on complexity, memory footprint, usage etc. that are typically not correct; tasks are ill-defined, the architecture is typically not fully designed... It is just a fact that nobody has a complete overview of the project when it starts, and the only way to get an overview is to actually implement it. It is then better not to waste time implementing too much, but instead to invest time in prototyping and testing, and later refine and improve the implementation as soon as you get a feeling for where the complexity is.
You can also try to get it mostly right from the start, so that you can focus on writing actual functional code. Anyone who's lost in the details all the time is simply doing it wrong.
You haven't written any complex program, but let me tell you that you *never* get it right from the start. If you do it in assembler, at some point you reach the stage of "oh, to heck with it", and then you just complete the program in the design you originally intended, knowing that the choices were not ideal. That's different if you work at a higher level - it is easier to change the code, to throw code away, or to rewrite and adapt it.
Yes, you will be lost in detail in assembler - you're moving data from register Dn to register An. That's detail, and it is a level of detail that does not help you structure the problem.
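To make that concrete, here is a minimal sketch (a made-up fragment, not taken from any real program) of what "just sum up n longwords" already looks like at that level:

        moveq   #0,d1           ; clear the accumulator
sumloop add.l   (a0)+,d1        ; add the next element, advance the pointer
        subq.l  #1,d0           ; one element less to go
        bne.s   sumloop         ; loop until the counter hits zero
        rts                     ; the sum is returned in d1

This assumes a0 already points to the data and d0 holds a non-zero count - two more conventions you have to keep in your head. In C the whole thing is a two-line loop, and none of the register bookkeeping shows up in the source; that is exactly the kind of detail that eats the mental capacity you'd rather spend on the structure of the program.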
That's only one part of it, and not the most important one. True software engineering means that you don't write a single line of code until you have a proper design - something that works, and won't break when you want to add or change things. That is the hard part, and it can make the actual implementation almost seem trivial in comparison. That's why, when you're changing a lot of things, you're not doing it right. You shouldn't have to, except when designing the software.
That's probably what management wants it to be (the "waterfall design"). Believe me, that's nonsense. It doesn't work that way, and it never did, except for trivial programs. Ask other experienced developers. Software design means continuous refactoring: as the code grows, as the problem becomes better defined, as you learn more about the bottlenecks, as the customer learns more about what he actually wants... Good programs have been written multiple times until they became good. But not too often, or you run into the danger of the "second-system effect" (too complex, too much detail). Finding the right balance is the art.
Yes, but why JPEG 2000? I'd rather do something that's more interesting. Same for the MPEG decoder you suggested in the other thread. JPEG 2000 isn't widely used, and MPEG on 68020/30s isn't very useful. I suggested a new GUI system, but that's not complex enough. So the big question is what would be really worthwhile to do on 68020/30 that's also reasonably complex? An HTML/CSS engine maybe?
I picked it because that's where I first came to appreciate the benefits of high-level languages. You could also try to write an HTML engine, but a JavaScript interpreter (or compiler) would probably be interesting. It doesn't really matter too much. As soon as the problem becomes big enough, and you need to coordinate with other partners or with other software, you'll run into trouble.
I don't pick algorithms just because they're easier to implement in assembly language. I look for algorithms that get the job done properly and efficiently, and if implementing them sucks, then it sucks and I do it anyway (or I don't because I'm lazy).
Complex software is more than an algorithm. It means sticking lots of algorithms together to form a complete architecture, and you usually do not make the right choices for each of the algorithms when you start. Essentially, you can't; it's a chicken-and-egg problem. You don't have the full problem at hand when you start (you never have, regardless of what management says), you don't have the full data, so you depend on assumptions. The best guess is that these assumptions are usually wrong. Thus, you find yourself in the situation that you have to replace algorithms, put them together in a different way, or rethink your design. It is not unusual that several parts of such a project are dumped and rewritten because you learned more as the project proceeded. That's normal, and nothing to be afraid of.
I don't doubt it, but we're also not talking about ten-million-line programs here; we're talking about software that will run well on a 68020/30. Such constraints automatically limit the complexity of a project, because programs that huge and heavy won't run well on 68k anyway.
Why? Program size is only limited by your ability to use the right tools, not by the CPU. Most complex programs do not even depend on "heavy work" from the CPU; it is just complex work. Running time does not correlate with program size.
To give you some ideas: the largest Amiga program I have written in assembler is ViNCed, which is about 63,000 lines of assembler. It is *barely* manageable at that size. It's hard to change, and it's hard to maintain.
The largest C program I wrote on the Amiga is probably VideoEasel, at about 45,000 lines of C and assembly. Even though it does a lot more than just a console and is more complex, it is easier to handle. Both work on the same hardware.
A large C++ program I wrote on *ix is the JPEG 2000 codec, in total about 250,000 lines. Yes, it would compile on the Amiga, and it is still in a state where it can be maintained. Yet it is the fastest JPEG 2000 codec you'll find on the market - not "despite using C++", but "because it uses C++". It would have been simply impossible to do that in assembler. Yes, there are assembler parts in it, but only where it matters. (The reason for the size is that it does a lot more than encoding and decoding of images. JPEG 2000 has multiple parts, and this specific implementation covers parts 1, 2, 3, 9 and 12, and was the basis for many of the reference images in part 4.)
And once again, even this implementation is small compared to a full-scale project such as a web browser, where a harmless image codec is just one of the many tiny details you need to fit together to make the whole thing work. Then you have products that are built on top of web browsers and JavaScript, and projects that are built on top of JavaScript again... It's a full hierarchy of complexity. Of course, even if you have Java code that is compiled into JavaScript, that is delivered over TCP/IP, that is compiled by a web browser's JIT compiler, that is run in the browser sandbox, it is all assembler in the end. But only there, in the end. Nobody in his right mind would develop such an application in assembler.
Or to put it differently: if you as a human had to think about and coordinate every heartbeat, every breath, and every muscle with your higher brain functions, you would go nuts. There are automatisms built into the lower areas of the brain that take all that complexity away and handle it "automatically" for you, but that limits your control over these functions. For the better of your life.