Any Questions/Comments/Criticisms/Praises/Threats, please feel free to E-Mail the author of
this section, Frederico Jerónimo.
And why not visit his site, Dark Realms, where you can find other tutorials as well.
This tutorial has been split in two due to its length. Check Part 2 as well.
Study! The sound of this word alone is enough to strike fear into the heart of even the most seasoned and devoted student. This unavoidable and time-consuming demon seems to creep out of nowhere and usually hits us at the most inopportune of times. And if you're forced into it, or the study contents are truly unpleasant, it hits you even harder. So, either you flee and abandon yourself to the bottomless pit of ignorance or allow yourself to be dragged into study hell, where your task is to get out as quickly and efficiently as possible. How to accomplish this? Use a study technique. Of course, everyone has their own personal favourite study technique. Mine, for instance, is leaving everything to the last day and then studying myself to death while cursing my foolishness. Ok, not much of a technique I agree... and unfortunately not very original either. But back in my old high school days I had a friend who had a very peculiar and unique way of studying.
He thought that since we had so many different disciplines and each demanded some amount of study, why not study them all at once. When he had to study a given subject, for example maths, he would open his maths book and read it for a couple of minutes. So far nothing strange. But he would then mark his position with a pencil and jump to another book, for example history, which he would read for a few minutes. Then, he returned to the marked position in the maths book and read a little more. He moved on to yet another book, let's say biology, before returning to the newly marked position in the maths book. This ritual continued until he decided study was over. He claimed that this method allowed him to distance himself from the subject at hand (in this case maths) so he could better absorb what he had just read, and at the same time learn a little bit of the other disciplines in the process, gaining a better understanding of what he was sure he would have to study in the near future (for an exam in a different subject).
Without trying to figure out if this is an efficient technique (although somehow it seemed to work...), let's just say that it's a good thing that computers work in a similar way. You can have a running program that is interrupted somewhere along its execution so it can perform, for example, a system command. The processor keeps track of where the program stopped, carries out the system command by jumping to the right address and, when this is over, jumps back to the previous location and resumes program execution. If, at a later stage, it needs to be interrupted again, the process is repeated. If interrupts didn't exist and instructions were always carried out in a linear, sequential way, this kind of operation would be impossible to perform. And the whole idea of a computer system as we conceive it would collapse.
Of course, a program may be interrupted for a multitude of reasons other than the execution of a system command. For example, interrupts can occur when there is an I/O device request, when a memory protection violation is detected or when there is a hardware malfunction or a power failure. All these events trigger the suspension of program execution and the transfer of control to another piece of code, often called a handler or ISR (Interrupt Service Routine). The handler takes care of everything required by the interrupt and then returns to the original program.
Thus, interrupts and handlers are a vital part of any computer system. Unfortunately, the terminology to describe exceptional situations where the normal execution order of instructions is changed varies among machines and authors. Terms like interrupt, fault, trap and exception are used, though not in a consistent fashion. So, in the first part of this tutorial I'll start by providing a description of an interrupt and then I'll cover the various types of interrupts. I'll then move on to handlers and the care you must take when designing one. Next, I'll explore the hardware responsible for or directly affected by interrupts. Afterwards, I'll take you on a guided tour through the pitfalls of protected mode interrupt design in the second part of this tutorial, giving full coverage to this delicate subject and offering plenty of useful examples. This section is without question the most empirical of all. Finally, I'll discuss some miscellaneous related issues of interest. This HTML document was split in two due to space constraints.
The third and final part of this tutorial is as important, if not more so. And it comes in the form of a zipped file. This compressed file contains essential coded elements and examples. And not only that, but it works as a full tutorial of its own. The code is exhaustively commented and accompanies you step by step as you unravel the deepest secrets of interrupt and handler control. So please, download this file so we can get down to serious business...
I'll be using Djgpp (the DOS® version of the GNU gcc compiler) 2.03 and NASM 0.98 in the protected-mode section of this document and in the zipped tutorial files. However, most of the notions can be applied to other DPMI compilers. You can get Djgpp here and NASM here.
This tutorial would not have been possible without the invaluable help of a few people. Please check the acknowledgements section for further details.
One final word of warning: this tutorial is quite long. So, grab some hot chocolate and get ready to jump into the hellfire of study...
This section is a bore. Unfortunately, nowadays, it is also a necessity...
This text is provided "as is", without warranty of any kind or fitness for a particular purpose, either expressed or implied, all of which are hereby explicitly disclaimed. In no way can the author of this text be made liable for any damages that are caused by it. You are using this document at your own risk!
Now for the good news. Nothing that comes in this document ever made my computer crash and I personally think that all information within is absolutely harmless. I wrote this to help out fellow programmers and I sincerely hope it is not a pointless article. Please send some feedback.
An interrupt is a request to the processor to suspend its current program and transfer control to a new program called the Interrupt Service Routine (ISR). Special hardware mechanisms that are designed for maximum speed force the transfer. The ISR determines the cause of the interrupt, takes the appropriate action, and then returns control to the original process that was suspended.
Why do you need interrupts? The processor of any computer is designed so it can carry out instructions endlessly. As soon as an instruction has been executed, the next one is loaded and executed. Even when the computer appears inactive, waiting at the DOS prompt or in Windows for your next action, it has not stopped working, only to start again when instructed to. No, not at all. In fact, many routines are always running in the background independently of your instructions, such as checking the keyboard to determine whether a character has been typed in. Thus, a program loop is carried out. To interrupt the processor in this never-ending execution of instructions, a so-called interrupt is issued. That is why it is possible for you to reactivate the CPU whenever you press a key (fortunately...). Another example, this time an internal one, is the timer interrupt, a periodic interrupt that is used to activate the resident program PRINT regularly for a short time.
For the 80x86, a total of 256 different interrupts (numbered 0-255) are available. Intel has reserved the first 32 interrupts for exclusive use by the processor, but this unfortunately hasn't prevented IBM from placing all hardware interrupts and the interrupts of the PC BIOS in exactly this region, which can give rise to some strange situations.
Speaking of hardware interrupts, you can distinguish three types of interrupts:
- Software Interrupts
- Hardware Interrupts
- Exceptions
I will give a brief description of the previous categories, but a detailed analysis is beyond the scope of this document. Please consult a reference manual, like the excellent "The Indispensable PC Hardware Book" by Hans-Peter Messmer, published by Addison-Wesley (see the reference section), or send me an email if you wish to know further details.
Software interrupts are initiated with an INT instruction and, as the name implies, are triggered via software. For example, the instruction INT 33h issues the interrupt with the hex number 33h.
In the real mode address space of the i386, 1024 (1k) bytes are reserved for the interrupt vector table (IVT). This table contains an interrupt vector for each of the 256 possible interrupts. Every interrupt vector in real mode consists of four bytes and gives the jump address of the ISR (also known as interrupt handler) for the particular interrupt in segment:offset format.
When an interrupt is issued, the processor automatically transfers the current flags, the code segment CS and the instruction pointer EIP (or IP in 16-bit mode) onto the stack. The interrupt number is internally multiplied by four and then provides the offset in the segment 00h where the interrupt vector for handling the interrupt is located. The processor then loads EIP and CS with the values in the table. That way, CS:EIP of the interrupt vector gives the entry point of the interrupt handler. The return to the original program that launched the interrupt occurs with an IRET instruction.
Software interrupts are always synchronized with program execution; this means that every time the program gets to a point where there is an INT instruction, an interrupt is issued. This is very different from hardware interrupts and exceptions as you'll soon find out.
As the name suggests, these interrupts are set by hardware components (like for instance the timer component) or by peripheral devices such as a hard disk. There are two basic types of hardware interrupts: Non Maskable Interrupts (NMI) and (maskable) Interrupt Requests (IRQ).
An NMI in the PC is, generally, not good news, as it is often the result of a serious hardware problem, such as a memory parity error or an erroneous bus arbitration. An NMI cannot be suppressed (or masked, as the name suggests). This is quite easy to understand, since it normally indicates a serious failure and a computer with incorrectly functioning hardware must be prevented from destroying data.
Interrupt requests, on the other hand, can be masked with a CLI instruction, which makes the processor ignore all interrupt requests. The opposite instruction, STI, reactivates these interrupts. Interrupt requests are generally issued by a peripheral device.
Hardware interrupts (NMI or IRQ) are, contrary to software interrupts, asynchronous to the program execution. This is understandable because, for example, a parity error does not always occur at the same program execution point. This makes the detection of program errors very difficult if they only occur in connection with hardware interrupts.
This particular type of interrupt originates in the processor itself. The issuing of an exception corresponds to that of a software interrupt: an interrupt whose number is set by the processor itself is issued. When do exceptions occur? Generally, when the processor can't handle on its own an internal error caused by system software.
There are three main classes of exceptions which I will discuss briefly.
- Fault : A fault issues an exception prior to completing the instruction. The saved EIP value then points to the same instruction that created the exception. Thus, it is possible to reload the EIP (with IRET for instance) and the processor will be able to re-execute the instruction, hopefully without another exception.
- Trap : A trap issues an exception after completing the instruction execution. The saved EIP points to the instruction immediately following the one that gave rise to the exception. The instruction is therefore not re-executed. Why would you need this? Traps are useful when, despite the fact that the instruction was processed without errors, program execution should be stopped, as in the case of debugger breakpoints.
- Abort : This is not a good omen. Aborts usually signal very serious failures, such as hardware failures or invalid system tables. Because of this, it may happen that the address of the error cannot be found. Therefore, resuming program execution after an abort is not always possible.
A signal is a notification to a process that an event has occurred. Signals are sometimes called software interrupts. And this causes a few problems... Are signals different from the software interrupts we treated above? Or are they just a different name for the same thing? Before answering those questions, you should know that the concept of an interrupt (in particular a software interrupt) has expanded in scope over the years. The problem is that this expansion has not been an organized one, but rather an 'I'll do as I please' rampage. The 80x86 family has only added to the confusion surrounding interrupts by introducing the INT (software interrupt) instruction discussed above. The result of all this mess? There is no clear consensus on what terms to use in a given situation, and different authors have adapted different terms to their own use. So, words like software interrupt, signal, exception, trap, etc. came bouncing around in completely different contexts.
In order to avoid further confusion, this document will attempt to use the most common meaning for these terms. Also, in order to differentiate between signals and software interrupts, we'll consider that :
- Software interrupts - Are explicitly triggered with an INT instruction and are therefore synchronous, as discussed previously.
- Signals - Don't make use of the INT instruction and usually occur asynchronously, that is, the process doesn't know ahead of time exactly when a signal will make its appearance.
Now that we've cleared the pathway, let's dive into the pool. The concept of signal handling was born (or at least gained strength) with the Unix platform, a protected-mode, multitasking system. Therefore, I will start by providing a general overview of signal handling, and only afterwards will I explain what changes in a real-mode system like MS-DOS using a protected-mode compiler like Djgpp. So, get ready for a thrill...
A signal is said to be generated for (or sent to) a process when the event associated with that signal first occurs. Signals can be sent by one process to another process (or to itself) or by the OS to a process. And what kind of events can raise a signal? Here are a few examples :
As you can easily see, the events that generate signals fall into three major categories : Errors, external events and explicit requests.
An error means that the program performed something invalid. But not all kinds of errors generate signals--in fact, most do not. For example, trying to open a nonexistent file is an error but it does not raise a signal. This error is associated with a specific library call. The errors which raise signals are those that can happen anywhere in the program, not just in library calls. These include division by zero and invalid memory addresses.
An external event generally has to do with I/O or other processes. These include the arrival of input, the expiration of a timer or the termination of a child process.
An explicit request means the use of a library function such as kill, whose purpose is specifically to generate a signal.
Signals can be generated synchronously or asynchronously (the latter being more common). If you try to reference an unmapped, protected or bad memory address, a SIGSEGV or SIGBUS can be issued; a floating point exception can generate a SIGFPE; and the execution of an illegal instruction can generate SIGILL. All the previous events, called errors if you recall, generate synchronous signals.
Events such as keyboard interrupts generate signals (SIGINT) which are sent to the target process. Such events generate asynchronous signals.
We now know how signals are generated, but how about delivery? Well, when a signal is generated, it becomes pending. Normally, it remains pending for just a short period of time and is then delivered to the process that was signaled. However, if that kind of signal is currently blocked, it may remain pending indefinitely--until signals of that kind are unblocked. Once unblocked, it will be delivered immediately. Once a signal has been delivered, the target program has a choice : ignore the signal, specify a handler function or accept the default action for that kind of signal. If the first option is selected, any signal of that kind that is generated is discarded immediately, even if blocked at the time. Building handler functions will be examined in closer detail later on. Finally, if a signal arrives which the program has neither handled nor ignored, its default action takes place. Each kind of signal has its own default action : it can be to terminate the process (the most common) or, for certain "harmless" events, to do nothing.
There are a few other things concerning signals that might interest you, but for our purposes we're done. If you wish to know more, check any Unix reference manual (see the reference section) or give me a ring. Also, I decided not to give a complete listing of standard signals here as it would fill too much space, but I'll cover quite a few that are accepted by Djgpp, the DOS version of the GNU gcc compiler, in the next section of this document.
As with so many other things, signal handling in Djgpp under MS-DOS brings a few additional complications. As described in the info docs, due to the subtleties of protected-mode behaviour in MS-DOS programs, signal handlers cannot be safely called from within hardware interrupt handlers. In reality, what happens is that the signal handler is only called when the program hits protected mode and starts touching its data. So, if the signal is raised while the processor is in real mode, like when calling DOS services, the signal handler won't be called until the call returns. For example, if you press 'CTRL-C' in the middle of a gets() call, you will need to press 'ENTER' before the signal handler for SIGINT (CTRL-C) is called. Another consequence of this implementation is that when the program isn't touching any of its data (like in very tight loops which only use values in the registers), it can't be interrupted.
But how do you incorporate signal handling in your programs? This is when the signal() function steps in. Here's its rather complicated prototype :
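In ANSI C terms, as declared in `<signal.h>`, it reads:

```c
#include <signal.h>

/* signal() takes a signal number and a pointer to a handler function
   (which itself takes an int and returns void), and returns a pointer
   of that same handler type: the previous handler, or SIG_ERR. */
void (*signal(int sig, void (*func)(int)))(int);
```

Reading it inside-out: `signal(...)` is a function of `(int, void (*)(int))`, and the whole expression `(*signal(...))(int)` says its return value is itself a pointer to a function taking `int` and returning `void`. A `typedef` for the handler type makes this far less painful in real code.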
The first argument is the signal number that you want to address. Every signal has a mnemonic that you should use for portable programs, but we'll get back to that in a second. The second argument is the name of the function that will be registered as the signal handler for the signal given by the first argument. After you call signal() and register the function as a signal handler, it will be called when that signal occurs. The execution of the program will then be suspended until the handler returns or calls 'longjmp'.
Instead of passing a function name as the second argument, you have other options at your disposal. You may pass SIG_DFL as the value of 'func' to reset the signal handling for signal number 'sig' to the default, SIG_ERR to force an error when the signal is raised, or SIG_IGN to ignore that signal.
If signal() can't honor the request, that is, if its first argument is outside the valid limits for signal numbers, it returns SIG_ERR instead.
I promised in the previous section that I would give you a list of supported signals in Djgpp. I also told you that every signal number has a mnemonic associated with it. So, here are the items you should use as signal numbers and their corresponding description :
A handler, also known as a callback, is in fact the routine that is called by an interrupt; in other words, it's the ISR itself. So, why a different name? Well, the word handler is normally used for ISRs created by you, the programmer, as opposed to those that are pre-built into either the OS or the BIOS.
The next question is why create a handler if ISRs are already present? The answer is simple : to have more control and flexibility. Without handlers, your programs would have to abide by strict and rigid rules, which would limit their usefulness. Handlers are indispensable in several situations, as you will soon find out. Keep on reading.
The creation of interrupt handlers has traditionally been considered one of the most arcane of programming tasks, suitable only for the elite cadre of system hackers. However, writing an interrupt handler in itself is quite straightforward. Let's hope that the following guidelines will help clear the myth...
A program preparing to handle interrupts must do the following :
The interrupt handler must observe the following sequence of steps :
When writing an interrupt handler, take it easy and try to cover all the bases. The main reason interrupt handlers have acquired such a mystical reputation is that they are so difficult to debug when they contain obscure errors. Because interrupts can occur asynchronously - that is, because they can be caused by external events without regard to the state of the currently executing process - bugs can be a real problem to detect and correct. This means that an error can manifest its presence in the program much later than it actually occurs, thus leading to a true quest for the Holy Grail.
This section is only to inform you of some restrictions and rules that apply to a handler for hardware interrupts under MS-DOS :
For all of you hardware freaks out there, we'll start by examining in close detail the chip that allows the existence of interrupts : The 8259A PIC.
Afterwards, we'll explore each of the 8259A PIC input lines, that is, the interrupts that can trigger a reaction from this device. A brief explanation of each input will be given, but since this is a tutorial about interrupts (and quite a big one might I add) everything not directly related with interrupts will only be approached lightly. However, stay tuned for a hardware section on this site that will not avoid such issues.
As explained in the Interrupt Driven I/O vs. Polling section, I/O devices can be serviced in two different ways : the CPU polling method and the interrupt-based technique. The 8259A Programmable Interrupt Controller (PIC) allows for the latter. It is designed to permit prioritizing and handling of hardware interrupt requests from peripheral devices, mainly in a PC environment.
As the processor usually only has a single interrupt input but requires data exchange with several interrupt-driven devices, the 8259A PIC is implemented to manage them. The 8259A PIC acts like a bridge between the processor and the interrupt-requesting components, that is, the interrupt requests are first transferred to the 8259A PIC, which in turn drives the interrupt line to the processor. Thus, the microprocessor is saved the overhead of determining the source and priority of the interrupting device.
How does it work? The PIC receives an interrupt request from an I/O device and informs the microprocessor. The CPU completes whatever instruction it is currently executing and then fetches a new routine (ISR) that will service the requesting device. Once this peripheral service is completed, the CPU resumes doing exactly what it was doing when the interrupt request occurred (as explained throughout this entire document). The PIC functions as an overall manager of hardware interrupt requests in an interrupt driven system environment.
In case you're wondering how the interrupt acknowledge sequence is accomplished, here's a quick overview :
The EOI command has two forms, specific and non-specific. The controller responds to a non-specific EOI command by resetting the highest in-service bit of those set. In a mode that uses a fully-nested interrupt structure, the highest in-service bit set is the level that was just acknowledged and serviced. This is the default mode for PCs. In a mode that can use other than the fully-nested interrupt structure, a Specific EOI command is required to define which in-service bit to reset.
Is this all there is to it? Usually, yes. But things can get a little trickier depending on the environment. For example, as the name indicates the 8259A programmable interrupt controller can be programmed under several different modes and for a defined operation it needs to be initialized first. For instance, it can be programmed to mask certain interrupt request lines. In order to do that the interrupt mask register is implemented. A set bit in this register masks all the interrupt requests of the corresponding peripheral, that is, all requests on the line allocated the set bit are ignored; all others are not affected by the masking.
And what happens if an interrupt comes when another is being processed, and the EOI for it wasn't issued yet? This really depends on interrupt priorities. If a certain interrupt request is in-service (that is, the corresponding bit in the ISR is set), all interrupts of a lower priority are disabled because the in-service request is serviced first. Only an interrupt of a higher priority pushes its way to the front immediately after the INTA sequence of the serviced interrupt. In this case the current INTA sequence is completed and the new interrupt request is already serviced before the old request has been completed by an EOI. Thus, interrupt requests of a lower priority are serviced once the processor has informed the PIC by an EOI that the request has been serviced. Please note that, under certain circumstances, it is favourable also to enable requests of a lower priority using the PIC programming abilities to set the special mask mode (if you're curious check the reference section for further reading). The next table shows the priority among simultaneous interrupts and exceptions :
|Class of interrupts and exceptions||Priority|
|Faults except debug faults||Highest|
|Trap instructions INTO, INT n, INT 3|
|Debug traps for this instruction|
|Debug faults for next instruction|
|NMI interrupt|
|INTR interrupt||Lowest|
Another characteristic of the 8259A PIC is its cascading capability, that is, the possibility of interconnecting one master and up to eight slave PICs in an application. But these subjects could fill a tutorial of their own, so I'll forward you to any serious hardware book if you need more details (alternatively, you can always mail me).
For our purposes we only need to know that a typical PC uses two PICs to provide 15 interrupt inputs (7 on the master PIC and 8 on the slave one). The sections following this one will describe the devices connected to each of those inputs. In the meantime, the following table lists the interrupt sources on the PC (sorted in descending order of priority) :
|IRQ 0||Highest||08h||Timer Chip|
|IRQ 1||09h||Keyboard|
|IRQ 2||0Ah||Cascade for controller 2 (IRQ 8-15)|
|IRQ 8/0||70h||Real-time clock|
|IRQ 9/1||71h||CGA vertical retrace (and other IRQ 2 devices)|
|IRQ 10/2||72h||Reserved|
|IRQ 11/3||73h||Reserved|
|IRQ 12/4||74h||Reserved in AT, auxiliary device on PS/2 systems|
|IRQ 13/5||75h||FPU interrupt|
|IRQ 14/6||76h||Hard disk controller|
|IRQ 15/7||77h||Reserved|
|IRQ 3||0Bh||Serial Port 2|
|IRQ 4||0Ch||Serial Port 1|
|IRQ 5||0Dh||Parallel port 2 in AT, reserved in PS/2 systems|
|IRQ 6||0Eh||Diskette drive|
|IRQ 7||Lowest||0Fh||Parallel Port 1|
I'll assume you've all played old games, back in the days when gameplay and plot were far more important than fancy graphics and nice box-sets... (although these concepts are all important and shouldn't be mutually exclusive, one can't help but notice that priorities have shifted to uncanny and greedy grounds). Game programmers of the period, often one-man teams with lots of imagination, were sometimes confronted with the problem of implementing certain delays in the game (damn, that enemy plane is closing in too fast... evasive maneuvers... I'll ne...arghhhhhhh). Often, dummy loops of the following form were employed :
This seemed to work. However, this kind of work-around has a significant disadvantage : it relies on processor speed. What a surprise to find out that the powerful hero Kill Them All of our favourite platform game, so deft and gracious on our 20MHz i386, now, on a 600MHz Pentium III driving a GeForce beast, helplessly dashes against every kind of inoffensive and pitiful obstacle before you can even press a single key!
To summarize, we need a way to generate exactly defined time intervals. And what better way than by hardware? Thus, the PC's designers have implemented one (PC/XT and most ATs) or sometimes two (some newer ATs or EISA machines) Programmable Interval Timers (PITs).
The PIT 8253/8254 generates programmable time intervals from an external clock signal of a crystal oscillator that are defined independently from the CPU. It's very flexible and has six modes of operation in all (these modes will not be explained in this tutorial, maybe in a future less generic one).
The 8253/8254 chip comprises three independently and separately programmable counters 0-2, each of which is 16 bits wide. Each counter, or channel, is supplied with its own clock signal (CLK0-CLK2) which serves as the time base for each counter. Each channel is responsible for a different task on the PC :
The timer interrupt vector (channel 0) is probably the most commonly patched interrupt in the system. However, it turns out there are two of these vectors in the system. The first one, int 08h, is the hardware vector associated with the timer interrupt. Unless you're willing to taunt fate, it's not a good idea to patch this interrupt. If you want to build a timer handler, go for the second interrupt, interrupt 1ch. The BIOS' timer ISR (int 08h) always executes an int 1ch instruction before it returns. Catching it, assuming control and chaining back to the old ISR is the best way to design your timer handler. Unless you're willing to duplicate the BIOS and DOS timer code, you should never completely replace the existing timer ISR with one of your own. Twiddling with int 1ch can be very dangerous and misuse can cause your system to crash or otherwise malfunction.
Finally, without entering into too much detail, I'll leave you with the port addresses of the various 8253/8254 PIT registers (the control register loads the counters and controls the various operation modes) :
The keyboard is the most common and most important input device for PCs (excluding the mouse). Despite the birth and rise of many new "hi-tech" input devices such as scanners and voice input systems, the keyboard still plays the major role if commands are to be issued or data input to a computer.
Contrary to popular belief, every keyboard has a keyboard chip, even the old "dumb" PC/XT keyboard with its 8048. This chip supervises the detection of key presses and releases. When you press a key, the keyboard generates a so-called make code; if, on the other hand, you release a pressed key, the keyboard generates a so-called break code. Either way, an interrupt occurs on IRQ 1 of the master 8259A PIC. The BIOS responds to these interrupts by reading the key's scan code (a 1-byte code that identifies each keyboard key), converting it to an ASCII character, and storing the scan and ASCII codes away in the system type-ahead buffer.
The keyboard really deserves a tutorial of its own and I won't let it down... So, please be patient or send me an email.
The serial interface is essential in a PC because of its flexibility. Various devices such as a plotter, modem, mouse and, of course, a printer can be connected to a serial interface. This document will not cover the structure, functioning and programming of the serial interface, but will take a quick look instead at its interrupt driven serial communication capabilities.
The PC uses two interrupts, IRQ 3 and IRQ 4, to support interrupt driven serial communications, as seen in the following table :
|Port||Base Address||IRQ|
|COM 1||3F8h||IRQ 4|
|COM 2||2F8h||IRQ 3|
|COM 3||3E8h||IRQ 4|
|COM 4||2E8h||IRQ 3|
Just like the LPT ports, the base addresses for the COM ports can be read from the BIOS Data Area.
|0000:0400||COM1's Base Address|
|0000:0402||COM2's Base Address|
|0000:0404||COM3's Base Address|
|0000:0406||COM4's Base Address|
The Universal Asynchronous Receiver/Transmitter (UART) 8250 (or compatible) generates an interrupt in one of four situations : a character arrives over the serial line, the UART finishes transmitting a character and requests another, a reception error occurs, or the modem status lines change. The UART activates the same interrupt line (IRQ 3 or IRQ 4) for all four interrupt sources. This means that the ISR needs to determine the exact nature of the interrupt by interrogating the UART.
Every PC is equipped with at least one parallel and one serial interface. Unlike the serial interface, for which a lot of applications exist, the parallel interface ekes out its existence as a wallflower, as it's only used to serve a parallel printer. In a similar way to what was done in the serial port section, we'll only concern ourselves with the basics of interrupt driven parallel communications.
BIOS and DOS can usually serve up to four parallel interfaces in a PC, denoted LPT1, LPT2, LPT3 and LPT4 (for line printer). The abbreviation PRN (for printer) is a synonym (an alias) for LPT1. When BIOS assigns addresses to your printer devices, it stores the address at specific locations in memory, so we can find them as listed in the following table :
|0000:0408||LPT1's Base Address|
|0000:040A||LPT2's Base Address|
|0000:040C||LPT3's Base Address|
|0000:040E||LPT4's Base Address|
Now that we've come to the parallel port interrupts we face a little enigma. Why did IBM design the original system to allow two parallel port interrupts and then promptly design a printer interface card that didn't support the use of interrupts? As a result, almost no DOS-based software today uses the parallel port interrupts (IRQ 5 and IRQ 7). Actually, DOS-based software is almost harder to find than diamonds nowadays, but we'll not go that way...
"Great, now that we have some useless interrupts hanging around..." Wait! These interrupts were not dumped onto the scrap heap! In fact, many devices make use of them. Examples include SCSI and sound cards. Because of this, many devices today include interrupt jumpers that let you select IRQ 5 or IRQ 7 on installation.
I will not explain what floppy and hard disk drives are, how they work, or what their structure is. Although these are all interesting topics to cover in this text, this article is already long enough. I will, however, explore their relationship with interrupts. But before I do that I need to ask you a little question : do you think that your hard disk is the most important and valuable part of your PC? You do? Why? What? You haven't made a single data backup and the past three years' work is on the hard disk? I'll just leave you with a serious piece of advice : always have a backup handy! You never know...
The floppy and hard disk drives generate interrupts at the completion of a disk operation. This is a very useful feature for multitasking systems like OS/2, Linux, or Windows. While the disk is reading or writing data, the CPU can go execute instructions for another process. When the disk finishes the read or write operation, it interrupts the CPU so it can resume the original task.
Before IBM made the Real Time Clock (RTC) chip standard equipment on its PC AT in 1984, users were prompted to enter the date manually every time they turned on their computers. Why? Because at every boot the PC initialized its clock to 01.01.1980, 0:00. The user had to input the current date and time via the DOS commands DATE and TIME. DOS managed all times relative to this date and time.
Not very practical, was it? So the RTC was born. The RTC chip is powered by an accumulator or a built-in battery to ensure it can keep time even when the PC is turned off. The RTC is independent of the CPU and all other chips (including the 8253/8254 that drives the internal system clock) and keeps on updating the time, day, month, and 2-digit year. It typically contains seven registers that store time and date values. Six of the registers are updated automatically. Each one of them stores a different value: seconds, minutes, hours, days, months, and years. The year register stores the last two digits : "99" in 1999 or "00" in 2000. A seventh one, the century register, stores the first two digits of the 4-digit year. The century register reads either "19" in 1999 or "20" in 2000 and is not updated automatically. It will change only if updated by either the BIOS or the operating system.
The real-time clock interrupt (int 70h) is called 1024 times per second for periodic and alarm functions. By default, it is disabled. You should only enable this interrupt if you have an int 70h ISR installed.
One last thing. If you notice that the system clock is not accurate, losing a number of minutes each day, or that it does not keep the time while the system is turned off, then the problem might be the RTC battery. The power consumption of the CMOS RAM and the RTC is so low that it usually plays no role in the lifetime of the batteries or accumulators. Instead, the life expectancy is determined by the self-discharge time of the accumulator or battery, and is about 3 years (or 10 years for lithium batteries). Also, the quality of components these days is sometimes rather questionable. So, if you have a PC, old or not, that keeps losing track of time, take it to the local computer store.
The Floating-Point Unit (FPU), also known as a maths co-processor, provides high-performance floating-point processing capabilities. Floating-point operations, such as calculations with decimals and logarithms, can take many instruction steps on the main processor. Such calculations can be handled more efficiently if passed on to a co-processor. The FPU executes instructions from the processor's normal instruction stream and greatly improves its efficiency in handling the types of high-precision floating-point operations commonly found in scientific, engineering, and business applications.
Before the advent of the 80486, the FPU was an optional chip with a reserved slot on the motherboard, close to the CPU. Nowadays, all PCs come with a built-in FPU.
The 80x87 FPU generates an interrupt whenever a floating-point exception occurs. On CPUs with built-in FPUs (80486DX and better) there is a bit in one of the control registers you can set to simulate a vectored interrupt. The BIOS generally initializes such bits for compatibility with existing systems.
Non-maskable interrupts (NMI) are critical interrupts, such as the one generated by a memory parity error, that cannot be blocked by the CPU. This is in contrast to most common device interrupts, such as disk and network adapter interrupts, which are maskable (you can enable or disable them with the sti and cli instructions).
This interrupt (int 02h) is always enabled by default since it cannot be masked.
As mentioned in the section on the 8259A PIC, there are several interrupts reserved by IBM. Many systems use the reserved interrupts for the mouse or for other purposes. Since such interrupts are inherently system dependent, we will not describe them here.
|Copyright © 2000||Updated Dec 2000|