Xref: news2.mv.net comp.os.msdos.djgpp:971
From: Jurgen Wenzel
Newsgroups: comp.os.msdos.djgpp
Subject: GRXlib question
Date: Sat, 10 Feb 1996 19:35:53 +0100
Organization: Solace Computer Society, Sundsvall, Sweden
Lines: 86
Message-ID: <311CE589.41C67EA6@solace.mh.se>
NNTP-Posting-Host: rocwiz AT krynn DOT solace DOT mh DOT se
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

Greetings...

I've got a problem and a couple of questions that concern GRXlib. If you're
not interested, I'd advise you to skip this rather long note (I'm sorry; it
could probably have been a lot shorter, but I wanted to be as clear as
possible) and read something more interesting.

Background:

I'm writing a highly graphical program where I basically do everything but
the actual interface to the graphics card myself. I've chosen GRXlib (1.03
or something like that, not the 2.0 anyway) to handle that part -
initialization and screen flushing - for me, and have now run into some
problems.

Specification:

Before firing up the graphics mode I allocate a screen buffer whose pointer
I pass to the setup function. I then let my own functions work against this
screen buffer only (the intention is to have an easy-to-port program). When
I flush the buffer to the screen, I use the blitting function that comes
with the screen context (which is connected to my buffer by the way the
graphics mode is initialized).

In 640 x 480 x 256 (palette) mode every pixel is represented by an unsigned
char holding the palette entry. Thus all I need to do is set a palette
(using the GRXlib palette functions) and then set the appropriate entries in
my screen buffer. This works fine and fast enough for my needs. Everything
would have been great if it weren't for the fact that sometimes 256 colours
aren't enough...

I therefore switched to 640 x 480 x 32K (RGB) mode. I initialize the
graphics mode as above, connecting my own buffer to the screen context. Now,
however, the buffer looks a bit different. According to the GRXlib
documentation, each pixel in 32K RGB mode is represented as xrrrrrgggggbbbbb
(it never mentions the actual type), and no palette is used. So instead of
handling simple palette entries I now must SHIFT and OR actual RGB values. I
do this (through some lookups to keep speed up) and produce an unsigned int
containing the proper colour value. This value -matches- the value returned
by the function GrAllocColour(r, g, b), and when I do a GrPlot(x, y, value)
it plots a pixel of the -correct- colour. From this I've drawn the
conclusion that there is nothing wrong with the way I produce the RGB colour
value.

Then there is the issue of putting the unsigned int colour value into my
unsigned char screen buffer. (When passing a buffer to the graphics mode
initialization, it is supposed to be a char pointer.) This is easily solved
by taking the high byte and the low byte and setting them both sequentially
in the screen buffer. Thus the colour values end up as xrrrrrgggggbbbbb in
the screen buffer. When I plot each character pair in the screen buffer
using GrPlot(x, y, (*buf << 8) | *(buf + 1)), it plots the colours correctly
through the entire screen buffer. I therefore draw the conclusion that the
colour values have been properly set in the screen buffer.

Finally, I try to blit the screen context the same way I did in 256 colour
mode, only to see that it -fails-, producing colours that are not the ones I
wanted.
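In case it makes the description clearer, here is the whole thing boiled
down to a few lines of plain C. The names are made up for this note, the
lookup tables are left out, and the GRX calls are only mentioned in the
comments; it is just a sketch of what I described above, not my real code:

    #include <stdio.h>

    #define SCREEN_W 640
    #define SCREEN_H 480

    /* two bytes per pixel in 640 x 480 x 32K mode */
    static unsigned char screen_buf[SCREEN_W * SCREEN_H * 2];

    /* Pack 0..255 r, g, b components into xrrrrrgggggbbbbb by keeping
       the top five bits of each component. */
    static unsigned int pack_rgb15(unsigned int r, unsigned int g,
                                   unsigned int b)
    {
        return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3);
    }

    /* Store the 16-bit value as two chars: high byte first, then low
       byte (this is the order I currently use). */
    static void put_pixel(int x, int y, unsigned int value)
    {
        unsigned char *p = screen_buf + (y * SCREEN_W + x) * 2;

        p[0] = (unsigned char)(value >> 8);     /* 0rrrrrgg */
        p[1] = (unsigned char)(value & 0xffu);  /* gggbbbbb */
    }

    int main(void)
    {
        unsigned int value;
        unsigned char *p;

        value = pack_rgb15(255, 0, 0);          /* pure red */
        put_pixel(10, 10, value);

        /* Reading the pair back the same way gives the packed value
           again, and GrPlot(10, 10, (p[0] << 8) | p[1]) plots the
           correct colour - yet blitting the whole buffer through the
           screen context does not. */
        p = screen_buf + (10 * SCREEN_W + 10) * 2;
        printf("packed = %04x  re-read = %04x\n",
               value, (unsigned int)((p[0] << 8) | p[1]));
        return 0;
    }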
Questions:

The main question is of course "Why doesn't this work?" To answer it, one
can come up with some sub-questions, such as: "Do you find any mistakes in
my reasoning and my conclusions above?", "Have I misunderstood the
documentation, or is there perhaps something wrong with it?", "Can this be
done at all?" and "If not, does anyone know of another way to do it
instead?"

Any help at -all- will be -greatly- appreciated. I really -need- to solve
this and am getting kinda desperate.

Thank you for reading this far, and hope to hear from you soon... :O)

J Wenzel
rocwiz AT solace DOT mh DOT se