90's style. That is funny. Using SDL is not 80's or 90's style.
unsigned char far *out = (unsigned char far *)0xA0000000L; /* A000:0000, mode 13h framebuffer */
Go. Don't forget your interrupt handler.
I do miss those days, and I miss the Amiga terribly. What I don't miss are the days of thunking, marshalling, bank switching, and segmented memory.
> What I don't miss are the days of thunking, marshalling, bank switching, and segmented memory.
Been thinking about this for a while. Why don't instruction sets define arrays at the hardware level? That seems to be where practically all the pain of memory management comes from: dynamically sized arrays (and 2D arrays, i.e. matrices) that grow or shrink over the program's lifetime. Why aren't `malloc` and `free` architecture-level instructions? Let the hardware worry about finding space in memory; it'll almost certainly be faster than any software algorithm. And if you can do that, can't you put dynamically sized arrays into the architecture as well? This would solve so many software-related problems.

x86 is CISC, so it's not as if instruction count is a concern; is there something I'm missing? Has this been tried before? I know SIMD is vaguely similar, but I don't think anything exists that tries to replace malloc/free.