Ah yes. Though that’s a C-series issue more than the OS. Low dos, Lodos, faux dos, alt dos... it’s ultimately a divide-by-zero issue. It’s a problem in most dos systems, but it didn’t really become an issue until now.
Ignoring the theoretical math aspect of divide by zero ≈ divide by 1.
Overly simplified! It’s a race fault. Execution time is set in fractions of seconds, and later in fractions of fractions. At or after a selected amount of time, an answer is returned from a command. But on a significantly fast processor the computation can complete before the clock cycles have passed, and the answer that comes back is ignored or missed. When the allotted time is then spent and no NEW answer is seen, the software sets the answer to null (in high-quality code) or 0.
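Roughly the shape of the race I mean, as a toy sketch (the function name, slot timings, and the polling design are all made up by me for illustration; Python is just for readability, not what any of these systems run):

```python
import threading
import time

def timed_compute(fn, slot_open=0.01, deadline=0.02):
    """Toy model of the race fault described above (hypothetical design):
    the caller only starts looking for an answer once the expected time
    slot opens, so a result from a fast processor that lands early is
    silently missed, and the deadline path substitutes a default of 0."""
    box = []
    threading.Thread(target=lambda: box.append(fn())).start()
    time.sleep(slot_open)   # wait for the answer slot to open
    box.clear()             # the bug: an answer that arrived early is discarded
    t0 = time.monotonic()
    while time.monotonic() - t0 < deadline - slot_open:
        if box:             # only a NEW answer, seen inside the slot, counts
            return box[0]
        time.sleep(0.001)
    return 0                # no new answer by the deadline -> answer set to 0
```

A computation that finishes before `slot_open` gets thrown away and the caller sees 0; one that lands inside the slot is returned normally. That 0-by-default path is exactly where the div0 lands downstream.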
In standard consumer products, if a (div0) is received, the system will accept the fault and crash. Paged memory crashes the program (normally), and non-paged memory will crash the system. If ee -n (in non-OO languages) is set for making crash logs, though, the process will continue in an error state. And continue to continue until the system is reset (hard shutdown), or the system overheats and bursts into flame. Crafty engineering has led to some strange alternates in reality.
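The two behaviours side by side, as a toy sketch (the `continue_on_error` flag here is my stand-in for the ee -n idea, not a real switch anywhere; Python only for illustration):

```python
def divide(a, b, continue_on_error=False):
    """Toy model of the two paths described above: by default the
    program accepts the div0 fault and crashes; with the hypothetical
    ee -n style flag set, it logs the fault and limps on in an error
    state instead of dying."""
    try:
        return a / b
    except ZeroDivisionError:
        if continue_on_error:
            # crash-log path: note the fault, keep running in an error state
            print("div0 fault logged; continuing in error state")
            return None
        raise  # default path: accept the fault and crash
```

The second path is the dangerous one: nothing stops the error state from compounding until something resets the machine.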
Like running Perl in Next via Fshl, or a CL in most post-BSD systems calling bash inside Fshl. Here the upper system crashes out, but the lower system still has access. Same thing with ZSH in Microsoft DOS. Now you have direct access to a dead system.
It’s rare. Very few pieces of software ship with ee -n. But this is why OS/2 refuses to load on modern equipment.
_________________ 42 6F 61 72 64 73 6F 72 74 2E 63 6F 6D