Date: Tue, 27 Sep 94 22:39:20 +0100
From: buers AT dg1 DOT chemie DOT uni-konstanz DOT de (Dieter Buerssner)
To: djgpp AT sun DOT soe DOT clarkson DOT edu
Subject: int86 vs. _go32_dpmi_simulate_int, bug?

Hello,

I have included a small piece of source that shows a very peculiar difference between int86 and _go32_dpmi_simulate_int. In the source, int 21, ah=18 is called via one of the two functions in the subject line. This interrupt actually does nothing (it returns al=0) and is supported by go32/expthdlr.c. After compiling with "gcc test.c" you can run the program with "go32 a.out" (int86) or with "go32 a.out d" (_go32_dpmi_simulate_int).

I believe (from reading the documentation) that the unused registers don't have to be set to special values. When the register struct is on the stack, the unused registers will contain garbage values. I found that the values they are set to really don't matter, with one strange difference: setting the 0x0100 bit of flags to one will give an exception in go32 after _go32_dpmi... is called, but only when _not_ in DPMI mode (only XMS driver loaded, no VCPI). int86 doesn't seem to care. When in DPMI mode, the setting of flags doesn't matter at all.

It gets even more interesting. When running the following program in gdb (or in debug32) in non-DPMI mode, the _go32_dpmi... route won't give an exception, but the returned al will be 178 (instead of zero). I tried this with other functions as well (MS-DOS mkdir, get-version, get-date, get-time); they all show the same behaviour. With _go32_dpmi... the program will not work when the 0x0100 bit of flags is set and go32 is running in non-DPMI mode. int86 will always work. Of course it is easy enough to mask the offending flag, but I suspect that there may be more problems behind it.

From reading this mailing list, I got the feeling that calling _go32_dpmi_simulate_int is the more modern way to call interrupts. Is there some special reason for this?
I know that int86 does need special support by go32. But using int86, maybe together with a list of supported interrupts, makes the source code compile under other MS-DOS C compilers, which, I think, is an advantage. Also, on my machine int86 is faster than _go32_dpmi... . The call in my example program needs about 0.65 msec with int86 and 1.50 msec with _go32_dpmi... (This is on a 386SX-16; the 0.65 msec probably is just a little bit more than the time for switching to real mode and back, correct?) And, of course, int86 is faster to type.

Will int86 still be supported by djgpp2? What are the reasons to prefer _go32_dpmi_simulate_int?

Dieter

#include <stdio.h>
#include <dos.h>
#include <dpmi.h>

#if 1
#define setflags(r) r.x.flags = 0x0100  /* fails with _go32_dpmi... */
#else
#define setflags(r) r.x.flags = ~0x0100 /* works */
#endif

/* call int 21,18. Should just return zero in al */
static int dummy_dpmi(void)
{
  _go32_dpmi_registers r;

  r.h.ah = 0x18;
  r.x.ss = r.x.sp = 0;
  setflags(r);
  _go32_dpmi_simulate_int(0x21, &r);
  return r.h.al;
}

static int dummy_int86(void)
{
  union REGS r;

  r.h.ah = 0x18;
  setflags(r);
  int86(0x21, &r, &r);
  return r.h.al;
}

int main(int argc, char *argv[])
{
  if (argc > 1 && argv[1][0] == 'd')
    printf("int 21,18 (_go32_dpmi...) returns %d (should be 0)\n",
           dummy_dpmi());
  else
    printf("int 21,18 (int86) returns %d (should be 0)\n",
           dummy_int86());
  return 0;
}