CA2091993C - Fault tolerant computer system - Google Patents

Fault tolerant computer system

Info

Publication number
CA2091993C
CA2091993C CA002091993A CA2091993A
Authority
CA
Canada
Prior art keywords
engine
processing means
computer system
state
providing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA002091993A
Other languages
French (fr)
Other versions
CA2091993A1 (en)
Inventor
Drew Major
Kyle Powell
Dale Neibaur
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micro Focus Software Inc
Original Assignee
Novell Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novell Inc filed Critical Novell Inc
Publication of CA2091993A1 publication Critical patent/CA2091993A1/en
Application granted granted Critical
Publication of CA2091993C publication Critical patent/CA2091993C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1675Temporal synchronisation or re-synchronisation of redundant processing components
    • G06F11/1687Temporal synchronisation or re-synchronisation of redundant processing components at event level, e.g. by interrupt or result of polling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1479Generic software techniques for error detection or fault masking
    • G06F11/1482Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2071Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F11/2076Synchronous techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1658Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2038Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2048Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share neither address space nor persistent storage

Abstract

A method and apparatus for providing a fault-tolerant backup system such that if there is a failure of a primary processing system, a replicated system can take over without interruption. The invention provides a software solution for providing a backup system. Two servers are provided, a primary and secondary server. The two servers are connected via a communications channel. The servers have associated with them an operating system. The present invention divides this operating system into two "engines." An I/O engine is responsible for handling and receiving all data and asynchronous events on the system. The I/O engine controls and interfaces with physical devices and device drivers. The operating system (OS) engine is used to operate on data received from the I/O engine. All events or data which can change the state of the operating system are channeled through the I/O engine and converted to a message format. The I/O engine on the two servers coordinate with each other and provide the same sequence of messages to the OS engines. The messages are provided to a message queue accessed by the OS engine. Therefore, regardless of the timing of the events, (i.e., asynchronous events), the OS engine receives all events sequentially through a continuous sequential stream of input data.
As a result, the OS engine is a finite state automata with a one-dimensional input "view" of the rest of the system and the state of the OS engines on both primary and secondary servers will converge.

Description

WO 92/05487 PCT/US91/05679

FAULT TOLERANT COMPUTER SYSTEM

BACKGROUND OF THE INVENTION

1. FIELD OF THE INVENTION

This invention relates to the field of operating system software-based fault-tolerant computer systems utilizing multiple processors.

2. BACKGROUND ART

In computer system applications, it is often desired to provide for continuous operation of the computer system, even in the event of a component failure. For example, personal computers (PC's) or workstations often use a computer network to allow the sharing of data, applications, files, processing power, communications and other resources, such as printers, modems, mass storage and the like. Generally, the sharing of resources is accomplished by the use of a network server. The server is a processing unit dedicated to managing the centralized resources, managing data, and sharing these resources with client PC's and workstations. The server, network and PC's or workstations combined together constitute the computer system. If there is a failure in the network server, the PC's and workstations on the network can no longer access the desired centralized resources and the system fails.
To maintain operation of a computer system during a component failure, a redundant or backup system is required. One prior art backup system involves complete hardware redundancy. Two identical processors are provided with the same inputs at the hardware signal level at the same time during operation of the computer system. Typically, one processor is considered the primary processor and the other is a secondary processor. If the primary processor fails, the system switches to the secondary processor.
An example of such a hardware redundancy system is described in Lovell, U.S. Patent No. 3,444,528. In Lovell, two identical computer systems receive the same inputs and execute the same operations. However, only one of the computers provides output unless there is a failure, in which case the second computer takes control of the output. In operation, the output circuits of the backup computer are disabled until a malfunction occurs in the master computer. At that time, the outputs of the backup computer are enabled.

The use of identical processors or hardware has a number of potential disadvantages. One disadvantage is the complexity and cost of synchronizing the processors at a signal level.

Another prior art method of providing a backup system is referred to as a "checkpoint" system. A checkpoint system takes advantage of a principle known as "finite state automata." This principle holds that if two devices are at the same state, identical inputs to those devices will result in identical outputs for each device, and each device will advance to the same identical state.
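The finite state automata principle can be illustrated with a minimal sketch (a hypothetical illustration, not anything from the patent): two deterministic machines started in the same state and fed the same inputs stay in lockstep forever.

```python
class Automaton:
    """A deterministic machine: the next state depends only on (state, input)."""
    def __init__(self, state=0):
        self.state = state

    def step(self, event):
        # Any deterministic transition function will do; this one is arbitrary.
        self.state = (self.state * 31 + hash(event)) % 10_000
        return self.state

primary = Automaton()
backup = Automaton()              # starts in the identical state
for event in ["disk_write", "lan_packet", "timer_tick", "lan_packet"]:
    assert primary.step(event) == backup.step(event)   # identical outputs
assert primary.state == backup.state                   # identical final states
```

Because the transition function is deterministic, no comparison of internal state is ever needed at runtime; equality of inputs implies equality of states.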

In a checkpoint system, the entire state of a device, such as the processor state and associated memory, is transferred to another backup processor after each operation of the primary processor. In the event of a failure, the backup processor is ideally at the most recent valid state of the primary processor. The most recent operation is provided to the backup processor and operation continues from that point using the backup processor. Alternatively, the state information is provided to mass storage after each operation of the primary processor. In the event of a failure, the stored state information is provided to a backup processor which may or may not have been used for other operations prior to that event.

One prior art checkpoint system is described in Glaser, U.S. Patent No. 4,590,554. In Glaser, a primary processor is provided to perform certain tasks. A secondary processor is provided to perform other tasks. Periodically, the state of the primary processor is transferred to the secondary processor. Upon failure of the primary processor, any operations executed by the primary processor since the last synchronization of the primary and backup processors are executed by the backup processor to bring it current with the primary processor. The system of Glaser, as well as other checkpoint systems, suffers a number of disadvantages. One disadvantage is the amount of time and memory required to transfer the state of the primary system to the secondary system. Another disadvantage of checkpoint systems is the interruption of service upon failure of the primary system. The secondary system must be "brought up to speed" by execution of messages in a message string.
One prior art attempt to solve this problem is to update only those portions of the state of the primary system that have been changed since the previous update. However, this requires complex memory and data management schemes.
It is an object of the invention to provide a backup system that does not require specialized hardware for the synchronization of the backup system.

It is another object of the invention to provide a backup system which is transparent to asynchronous events.

It is still another object of the present invention to provide an improved backup system for network server operation.

It is another object of the present invention to provide continuous service through a single hardware component failure.

SUMMARY OF THE INVENTION

The invention is a method and apparatus for providing a fault-tolerant backup system such that if there is a failure of a primary processing system, a replicated system can take over without interruption. The primary and backup processing systems are separate computers connected by a high speed communications channel. The invention provides a software solution for synchronizing the backup system. The present invention is implemented as a network server, but the principles behind the invention could be used in other processing environments as well. Each server may utilize one or more processors. The servers use a specially architected operating system. The present invention divides this operating system into two "engines." An input/output (I/O) engine is responsible for handling and receiving all data and asynchronous events on the system. The I/O engine controls and interfaces with physical devices and device drivers. The operating system (OS) engine is used to operate on data received from the I/O engine. In the primary server, these engines are referred to as the primary I/O engine and the primary OS engine.

All events or data which can change the state of the operating system are channeled through the I/O engine and converted to a message format.
The messages are provided to a message queue accessed by the OS engine.
Therefore, regardless of the timing of the events (i.e., asynchronous events), the OS engine receives all events sequentially through a continuous sequential stream of input data. As a result, the OS engine is a finite state automata with a one-dimensional input "view" of the rest of the system.
Thus, even though the OS engine is operating on asynchronous events, the procession of those events is controlled through a single-ordered input sequence.
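The single-ordered input sequence described above can be sketched in a few lines (a hypothetical illustration; names and data are mine, not the patent's): asynchronous sources all push into one FIFO queue, and the OS engine consumes only from that queue, so it sees a single total order no matter how the sources interleave.

```python
import queue
import threading

event_queue = queue.Queue()       # the OS engine's one-dimensional "view"

def io_source(name, events):
    # Asynchronous sources (LAN, timer, keyboard) all funnel into one queue.
    for e in events:
        event_queue.put((name, e))

threads = [
    threading.Thread(target=io_source, args=("lan", ["pkt1", "pkt2"])),
    threading.Thread(target=io_source, args=("timer", ["tick"])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The OS engine drains the queue sequentially: whatever interleaving the
# hardware produced, it sees one totally ordered stream of messages.
ordered = []
while not event_queue.empty():
    ordered.append(event_queue.get())
assert sorted(ordered) == [("lan", "pkt1"), ("lan", "pkt2"), ("timer", "tick")]
```

The relative order of "lan" and "timer" messages may differ from run to run, but within any one run both OS engines would read the identical sequence, which is all the convergence argument requires.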

On startup, or when a secondary processor is first provided, the primary processor is "starved," that is, all instructions or other state-changing events are halted until the OS engine reaches a stable state. At that point, the state is transferred to the OS engine of the backup system. From that point on, identical messages (events) are provided to each OS engine.
Because both systems begin at an identical state and receive identical inputs, the OS engine parts of the systems produce identical outputs and advance to identical states.

The backup system also divides the operating system into a secondary OS engine and a secondary I/O engine. The secondary I/O engine is in communication with the primary I/O engine. Upon failure of the primary system, the remainder of the computer system is switched to the secondary system with virtually no interruption. This is possible because each event is executed substantially simultaneously by the backup system and the primary system. Thus, there is no loss of system operation during a component failure. In addition, no transfer of state is required once initial synchronization has been achieved. This reduces system complexity, reduces memory managing requirements and provides for uninterrupted service.
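The starve-then-transfer step can be sketched as follows (a minimal hypothetical model, not the patented implementation; `apply_event` stands in for whatever deterministic work the OS engine does): events are held back, the stable state is copied once, and from then on every event is delivered to both engines.

```python
def apply_event(state, event):
    # Deterministic state update (a stand-in for OS-engine execution).
    state[event] = state.get(event, 0) + 1

def bring_up_secondary(primary, incoming_events):
    # 1. "Starve" the primary: hold back all state-changing events.
    held = list(incoming_events)
    # 2. The primary is now stable; copy its state to the secondary
    #    (the one-time memory-image transfer of the patent).
    secondary = dict(primary)
    # 3. From now on, deliver every held and new event to BOTH engines.
    for event in held:
        apply_event(primary, event)
        apply_event(secondary, event)
    return secondary

primary_state = {"files_open": 3}
secondary_state = bring_up_secondary(primary_state, ["write", "write", "read"])
assert primary_state == secondary_state    # the engines have converged
```

After step 2 no further state copying is ever needed, which is the claimed advantage over checkpoint systems.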

Accordingly, in one of its aspects, the present invention resides in a method for providing a fault tolerant computer system comprising the steps of: providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine; providing a second processing means, said second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine; determining a state of said first processing means and providing said state to said second processing means; defining an operation that can change said state of said first OS engine as an event; providing a plurality of events to said first I/O engine and converting each of said events into a message; providing said message to a first message queue in said first OS engine and to a second message queue in said second OS engine; executing said message in said first OS engine and said second OS engine; and switching said computer system operation to said second processing means upon failure of said first processing means, such that no loss of operation of said computer system occurs during said switch-over.

In a further aspect, the present invention provides a fault tolerant computer system comprising: first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine; second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine; said first I/O engine coupled to said second I/O engine on a first bus; said first I/O engine including a converting means for converting operations that can change said state of said first OS engine into a message; said first I/O engine for providing said message to a first message queue in said first OS engine and to a second message queue in said second OS engine; said first OS engine and said second OS engine including means for executing said message; and means for switching said computer system operation to said second OS engine upon failure of said first processing means such that no loss of operation of said computer system occurs during said switch-over.

In a still further aspect, the present invention resides in a method for providing a fault tolerant computer system comprising the steps of: providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine; providing a second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine; determining a state of said first processing means and providing said state to said second processing means; defining an operation that can change said state of said first OS engine as an event; providing a plurality of events to said first I/O engine and serializing said events into an event sequence; providing successive events in said event sequence to said first OS engine and to said second OS engine; executing said successive events in said first OS engine and said second OS engine; and switching said computer system operation to said second processing means upon failure of said first processing means, such that no loss of operation to said computer system occurs during said switch-over.

A further aspect of the invention resides in a method of disk mirroring in a computer system, comprising the steps of: providing a first processing means for operation of said computer system; providing a second processing means for operation of said computer system; providing said first processing means with primary mass storage; providing said second processing means with secondary mass storage; providing a first manager for control of said primary mass storage; providing a second manager for control of said second mass storage; synchronizing said primary mass storage and said secondary mass storage using said first manager and said second manager; marking said primary mass storage and said secondary mass storage with a current synchronization level counter value to indicate that said primary mass storage and said secondary mass storage are fully synchronized; and changing said current synchronization level counter value when there is a change to synchronization state.
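The synchronization level counter in this mirroring aspect can be sketched as a toy model (hypothetical names and structure, purely illustrative): both copies are stamped with the same counter value when fully synchronized, and the counter is bumped on any change to the synchronization state, so a stale mirror is detectable by comparing counters.

```python
class MirroredDisk:
    """Toy model of the synchronization-level counter idea."""
    def __init__(self):
        self.primary_level = 0
        self.secondary_level = 0
        self.synchronized = False

    def mark_synchronized(self):
        # Both copies are stamped with the same current counter value.
        self.primary_level += 1
        self.secondary_level = self.primary_level
        self.synchronized = True

    def on_sync_state_change(self):
        # e.g. the secondary goes off-line: bump the primary's counter so the
        # stale mirror is detectably out of date when it reconnects.
        self.primary_level += 1
        self.synchronized = False

disk = MirroredDisk()
disk.mark_synchronized()
assert disk.primary_level == disk.secondary_level       # fully synchronized
disk.on_sync_state_change()
assert disk.primary_level != disk.secondary_level       # stale mirror detectable
```

On reconnection, a manager comparing the two counters knows immediately whether a full resynchronization is required.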

A still further aspect of the invention resides in a method for executing an operation in a fault tolerant computer system comprising the steps of: providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine; generating a request by said first OS engine to said first I/O engine and said first OS engine waiting for a reply from said first I/O engine; and executing in said first I/O engine the requested operation as specified by said request and matching an initial I/O event by matching it with said request.
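The request/reply matching recited in this aspect can be sketched as a request table keyed by id (a hypothetical illustration, not the patented mechanism; all names are mine): the OS engine records each outstanding request, and the I/O engine's completion event is matched back to it by that id.

```python
import itertools

request_ids = itertools.count(1)
outstanding = {}                  # request id -> pending operation

def os_engine_request(operation):
    # The OS engine issues a request and then waits for the matching reply.
    rid = next(request_ids)
    outstanding[rid] = operation
    return rid

def io_engine_complete(rid, result):
    # The I/O engine matches the completion event with the original request.
    operation = outstanding.pop(rid)
    return (operation, result)

rid = os_engine_request("read_block_7")
assert io_engine_complete(rid, b"data") == ("read_block_7", b"data")
assert rid not in outstanding     # the request has been retired
```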

In a further aspect, the present invention resides in a method for defining the states of a fault tolerant computer system comprising the steps of: providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine; providing a second processing means, said second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine; providing a first state to define the status of the fault tolerant computer to identify when said first I/O engine is operational but said first OS engine is not operational, called No Server Active State; providing a second state to define the status of the fault tolerant computer to identify when said first I/O engine is operational but said second I/O engine is not, called Primary System With No Secondary State; providing a third state to define the status of the fault tolerant computer to identify when said first I/O engine is running in a mirrored primary system; providing a fourth state to define the status of the fault tolerant computer to identify when said first I/O engine is running in a mirrored secondary system; allowing a transition from said first state to said second state when said first OS engine is activated; allowing a transition from said second state to said third state when said first processing means is synchronized with said second processing means; allowing a transition from said first state to said fourth state when said second OS engine is synchronized with said first processing means; allowing a transition from said fourth state to said second state when said first processing means fails; and allowing a transition from said third state to said second state when said second processing means fails.

In a still further aspect, the present invention provides a fault tolerant computer system comprising: a first processing means for operation of said computer system, a second processing means for operation of said computer system, wherein said second processing means is a backup processing means for said first processing means, and a first bus connecting said first processing means and said second processing means, characterized in that said first processing means comprises a first operating system (OS) engine and a first input/output (I/O) engine, said first OS engine comprising a first message queue, said first message queue coupled to said first I/O engine for receiving messages, and that said second processing means comprises a second OS engine and a second I/O engine, said second OS engine comprising a second message queue, said second message queue coupled to said second I/O engine for receiving messages; that said first bus connects said first I/O engine and said second I/O engine for transferring messages; and wherein said first I/O engine is configured to convert operations that can change the state of said first OS engine into messages, said messages provided to said first message queue and to said second message queue for subsequent execution by said first OS engine and said second OS engine, respectively.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of the preferred embodiment of the present invention.

Figure 2 is a detailed view of the I/O engine of Figure 1.

Figure 3 is a detailed view of the OS engine of Figure 1.

Figure 4A is a flow diagram illustrating OS engine operation during execution of requests and events.

Figure 4B is a flow diagram illustrating operation of primary and secondary I/O engines during execution of events.
Figure 4C is a flow diagram illustrating operation of primary and secondary I/O engines during execution of requests.

Figure 5 is a diagram illustrating state transitions of this invention.
Figure 6 is a flow diagram illustrating primary and secondary system synchronization.

Figure 7 is a block diagram of an alternate embodiment of this invention.

DETAILED DESCRIPTION OF THE INVENTION

A fault-tolerant system used as a network server is described. In the following description, numerous specific details are set forth in order to provide a more thorough description of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known features have not been described in detail so as not to obscure the invention.

1. BLOCK DIAGRAM OF THIS INVENTION

A block diagram of the preferred embodiment of this invention is illustrated in Figure 1. The invention provides a primary processor and operating system generally designated by those elements within dashed lines 21 and a backup or secondary processor and operating system generally designated by those elements falling within dashed lines 22. The primary operating system 21 comprises an operating system (OS) engine 10 coupled to an input/output (I/O) engine 12. The I/O engine and OS engine communicate via "event" and "request" queues. The I/O engine writes events onto the event queue and the OS engine reads the events. The OS engine writes requests onto the request queue and the I/O engine reads the requests.

The backup 22 includes its own OS engine 16 that communicates through event queue 17 and request queue 42 to I/O engine 18. I/O engine 12 communicates with I/O engine 18 through a high speed communications bus 15A and B. 15A and B are one hardware channel that is used to communicate two types of messages, A and B. The high speed communications bus is used to transfer events from the primary server to the secondary server (15A). It is also used for other communication between the I/O engines (15B). I/O engine 12 also may access mass storage 14 through line 13. I/O engine 12 is also coupled to other devices, such as timers, keyboards, displays, etc., shown symbolically as block 44A coupled to I/O engine 12 through bus 64. I/O engine 18 is coupled through line 19 to mass storage 20. The I/O engine 12 and I/O engine 18 are each connected to network 23. I/O engine 18 is coupled to block 44B (timers, keyboards, display, etc.) through bus 65.

The I/O engine 12 receives data and asynchronous events from the computer system of which it is a part. For example, if the invention is used as a network server, the I/O engine 12 receives LAN packets from other devices coupled to the network. The I/O engine also controls and interfaces with physical devices and device drivers, such as mass storage device 14, a keyboard or a timer.

The OS engines operate on data received from the I/O engines via the event queues 11 and 17. After a desired operation has been performed, the data is returned to the I/O engines via the request queues 41 and 42 for output to other system devices.

The primary server 21 receives data or events from the network 23 on input line 24. The I/O engine 12 converts these events or data into a "message" format. Each message represents data or an event which can change the state of the operating system. The I/O engine 12 provides these messages first to bus 15A, and when I/O engine 18 signals that it has received the message, the message is then given by I/O engines 12 and 18 to both the OS engines through the event message queue buses 11 and 17.
These messages are executed sequentially by OS engines 10 and 16. By queueing the messages, time dependency is removed from the system so that all asynchronous events are converted into a synchronous string of event messages. By separating the OS engine from the I/O engine, the OS engine is made to operate as if it were a finite state automata having a one-dimensional view of the system (i.e., the event message queue).
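The acknowledge-then-deliver rule described above can be sketched as follows (a hypothetical illustration with invented names; the real system works at the I/O-engine level): the primary forwards each message over the inter-server channel first, and only after the secondary acknowledges it is the message placed on both event queues.

```python
def deliver(event, primary_queue, secondary_queue, send_to_secondary):
    """Forward to the secondary, wait for the ack, then enqueue on both sides."""
    msg = ("msg", event)
    acked = send_to_secondary(msg)   # over the inter-server bus; blocks for ack
    if not acked:
        raise RuntimeError("secondary did not acknowledge the message")
    # Only after the ack does either OS engine see the message, so the
    # secondary can never be missing an event the primary has executed.
    primary_queue.append(msg)
    secondary_queue.append(msg)

pq, sq = [], []
deliver("lan_packet", pq, sq, send_to_secondary=lambda m: True)
assert pq == sq == [("msg", "lan_packet")]
```

Ordering delivery this way is what guarantees the two OS engines read identical message sequences even across a failover.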

The buses 15A and 15B linking the primary I/O engine 12 to the secondary I/O engine 18 utilize a bi-directional communications channel. Ideally, the buses 15A and B provide high speed communications, have low latency and low CPU overhead. Any suitable communications channel can be utilized with this invention, including bus extenders and local area network (LAN) cards.

The OS engine and I/O engine can be implemented with a single processor if desired. Alternatively, separate processors, one for the OS engine and one for the I/O engine, can be utilized. Additional OS engines, using additional processors, can also be utilized in this invention. The states of all OS engines are then mirrored.

Regardless of whether one or two processors are utilized for the OS engine and I/O engine, system RAM memory is divided between the two engines. The I/O engine can access OS engine memory but the OS engine cannot access I/O engine memory. This is because memory buffer addresses may be different for the primary and secondary I/O engines, leading to the state of the primary and secondary OS engines becoming different if they were allowed to access addresses in I/O engine memory.

It is not necessary for the primary and backup servers to have identical processors. The performance of the processors should be similar (CPU type, CPU speed) and the processors must execute instructions in the same manner, not necessarily at the pin and bus cycle level but at the level of values written to memory and instruction sequencing. For example, an 80386 microprocessor manufactured by Intel Corporation of Santa Clara, California, could be used in the primary server with an Intel 80486 in the secondary server. The secondary engine is required to have at least as much RAM as is being used by the primary OS engine. In addition, both the primary and secondary servers should have the same amount and configuration of disk storage.
Hardware and/or software upgrades and changes can be made to the system without loss of service. For example, a user may wish to add more RAM to the primary and secondary servers. To accomplish this, the primary or secondary server is taken out of the system. If the primary server is taken off line, the secondary server will treat that occurrence as a failure and will begin to operate as the primary server, such that there is no disruption or interruption of the operation of the system. The off-line server can then be upgraded and placed back on-line. The servers are then resynchronized and the other server is taken off line and upgraded. After upgrade of the second server, it is placed back on-line and the servers are resynchronized and both start using the newly added RAM. Thus, hardware and software upgrades can be made without loss of service. Although the invention is described in relation to network servers, it has equal application to general purpose computer systems.
To initialize the secondary operating system, all new events are withheld from the primary OS engine 10 until it has reached a stable state. At that point, the state of the OS engine 10 (embodied in the memory image of the OS engine 10) is transferred through message bus 15B to the OS engine 16 of the backup operating system. The OS engine 16 then has a state identical to OS engine 10. At this time, all messages generated by I/O engine 12 that are provided to OS engine 10 are also provided on bus 15A to I/O engine 18 for transfer to OS engine 16. Since both OS engines 10 and 16 begin in an identical state and receive identical inputs, each OS engine will advance to an identical state after each event or message.

In the present invention, identical messages produce identical states in the primary and backup operating system engines, such that prior art check-pointing operations are not required. Time dependent considerations are minimized, and synchronization of the respective OS engines for simultaneous operation is unnecessary because synchronous and asynchronous events are provided to a message queue, the message queue serving as a method to convert asynchronous events to synchronous events.
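The queue-serialization idea described above can be sketched as follows. This is an illustrative model, not the patented implementation: names such as `OSEngine`, `post` and `run` are invented for the sketch. Asynchronous events (a timer tick, an arriving packet) are only appended to a FIFO queue at interrupt time; the engine consumes them strictly in order, so two engines fed identical queue contents reach identical states.

```python
from collections import deque

class OSEngine:
    """Deterministic state machine: state changes only when a queued
    message is dequeued and processed, never on the raw interrupt."""
    def __init__(self):
        self.queue = deque()      # message queue: async events land here
        self.state = []           # processing history stands in for "state"

    def post(self, event):
        # Called at interrupt time: the event is only enqueued,
        # converting the asynchronous event into a synchronous one.
        self.queue.append(event)

    def run(self):
        # Drain the queue in strict FIFO order.
        while self.queue:
            self.state.append(self.queue.popleft())

# Two engines given identical inputs advance to identical states.
primary, backup = OSEngine(), OSEngine()
for event in ["timer", "packet:A", "disk-done", "packet:B"]:
    primary.post(event)
    backup.post(event)
primary.run()
backup.run()
assert primary.state == backup.state
```

The determinism comes entirely from the queue ordering; no clock comparison or check-pointing between the two engines is needed.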

If there is a failure of a primary system, the I/O engine 18 of the secondary operating system is coupled to the network 23. The secondary I/O engine 18 is then used to generate messages which are provided to the secondary OS engine 16. Because the backup operating system is at the same state as the primary operating system, no loss of operation to the clients using the server occurs during a server switchover.

I/O ENGINE/OS ENGINE SEPARATION
In the present invention, the I/O engine and OS engine are substantially logically independent. To prevent unwanted state changes that cannot be mirrored on the backup OS engine, data shared by the I/O and OS engines is controlled, as further described below. Each engine has its own stand-alone process scheduler, command interpreter, memory management system, and code associated with that portion of the OS essential to its function.

The division between the OS engine and I/O engine is made above the hardware driver level, at the driver support layer. The driver support layer software is duplicated in both the I/O engine and the OS engine and maintains the same top-level interface. The support layer software is modified for the I/O engine and the OS engine. The driver support layer of the I/O engine maintains driver level interfaces and communicates with physical hardware drivers. It converts hardware driver level events into messages which are provided to the event queue of the OS engine.

The OS engine has no hardware driver interface support routines, such as for registering interrupts or allocating I/O port addresses. When the OS engine requests an operation involving a hardware component (e.g., writing to or reading from disk), the driver support layer software in the OS engine converts the action into a request and provides it to the I/O engine request queue for execution. The results of that request are then returned to the OS engine as an event message generated by the I/O engine driver support layer.

I/O ENGINE

Referring now to Figure 2, the I/O engine consists of three levels: a driver level, a management software level and a message level. Device drivers 26A-26E drive hardware elements such as printers, storage devices (e.g., disk drives), displays, LAN adaptors, keyboards, etc. The management software level includes controllers for device drivers. For example, the disk block 27 controls the disk device driver (e.g., disk device driver 26A). Disk block 27 controls the initiation of disk reads and writes. In addition, disk block 27 tracks the status of a disk operation. The disk block 27 of the primary I/O engine (i.e., I/O engine 12) communicates the status of disk operations to the backup I/O engine. The primary mass storage 14 and the secondary mass storage 20 are substantially identical systems. If the primary I/O engine executes a read from disk 14, it communicates to I/O engine 18 that the read has been completed. If the primary I/O engine completes the read first, the data may be sent as a message on bus 15B to the secondary I/O engine 18. Alternatively, I/O engine 18 reads the data from its own disk drive 20.

The LAN block 28 controls external communications such as to a local area network. This invention is not limited to local area networks, however, and any type of communication may be utilized with this invention. The LAN controller receives information packets from the network and determines whether to provide that packet to the OS engine.

The display block 29 controls communications to a display device such as a CRT screen through device driver 26C. The timer block 30 drives the system time clock, and keyboard block 31 provides an interface and communication with a keyboard.

Message block 47 converts system events into messages to provide to the event queue of the OS engine and dequeues requests from the OS engine. A message consists of a header field and a data field. The header field indicates the type of message or operation. The data field contains the data on which the operation is to be executed. The message level communicates event messages between the I/O engines through event bus 15A.
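The header/data message layout described above can be modeled minimally. This is a sketch only: the `Message` class, its field names, and the byte-level framing below are assumptions invented for illustration, not the patent's wire format.

```python
from dataclasses import dataclass

@dataclass
class Message:
    # Header field: indicates the type of message or operation.
    msg_type: str
    # Data field: the data on which the operation is to be executed.
    data: bytes

def encode(msg: Message) -> bytes:
    # Hypothetical framing: header, NUL separator, then payload.
    return msg.msg_type.encode() + b"\x00" + msg.data

def decode(raw: bytes) -> Message:
    # Split at the first NUL: everything before is the header.
    header, _, payload = raw.partition(b"\x00")
    return Message(header.decode(), payload)

m = Message("DISK_READ_DONE", b"\x01\x02")
assert decode(encode(m)) == m
```

Any serialization that keeps the type separate from the operand data would serve the same role; the point is that the receiver can dispatch on the header without inspecting the payload.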

OS ENGINE

Referring to Figure 3, the OS engine includes message level 32 to dequeue event messages received from the I/O engine in sequential order and to enqueue requests to provide OS engine requests to the request block 47 of the I/O engine. The OS engine also includes management software corresponding to the management software of the I/O engine. For example, the OS engine includes disk management software 33, LAN management software 34, message management software 35, timer management software 36 and keyboard software 37. The top level 48 of the OS engine is the operating system of the computer system using this invention.

The disk management software 33 controls the mirrored copies of data on the redundant disks 14 and 20. When a disk operation is to be performed, such as a disk read operation, the disk management software 33 determines whether both I/O engines 12 and 18 will perform a read operation or whether the primary I/O engine 12 will perform a read and transfer the data to the secondary I/O engine 18. The timer management software 36 controls timer events. Generally, an operating system has a timer that is interrupted periodically. Often this timer interruption is used for time dependent operations. In this invention, a timer interrupt is itself an event on the input queue. By turning the timer interrupt into a message, the timer events become relative instead of absolute. Time events are changed from asynchronous to synchronous events. The LAN block 34, display block 35 and keyboard block 37 control network, display and keyboard events, respectively.

OPERATION

When the OS engine receives an event message, several changes can occur to the state of the OS engine, and these changes can take some finite time to occur. In this invention, once a message has been accepted by the OS engine, the OS engine performs all operations that can be performed as a function of the message. After all such operations are performed, the OS engine checks the message queue to determine if another message is available for execution. If there is no other message available, the OS engine becomes inactive until a message is available. This method of operation is required so that the primary OS engine and the secondary OS engine remain synchronized. New messages can be given to the primary and secondary OS engines at different times because the I/O engines are asynchronous. Therefore, the presence or absence of a new event cannot be acted upon or utilized to change the state of the OS engine.
In the preferred embodiment of the present invention, the OS environment is defined to be non-pre-empting. Pre-emption is inherently an asynchronous event. In the prior art, an executing task can be interrupted and replaced by another task by a timer interrupt. Because the present system executes a single message at a time, the timer interrupt or pre-emption request does not affect the OS engine until it reaches that message in the message queue. The task running on the OS engine must relinquish control before the timer event can be received and executed by the OS engine.
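The non-pre-empting, run-to-completion discipline just described can be sketched as a simple loop. The function name `run_engine` and the string-based message log are invented for this illustration; the point is that a timer message queued behind the current task cannot interrupt it and is simply processed in turn.

```python
from collections import deque

def run_engine(messages):
    """Non-preemptive loop: each message runs to completion before the
    next (including any queued timer message) is even examined."""
    queue = deque(messages)
    log = []
    while queue:
        msg = queue.popleft()
        # All work that is a function of this message happens here.
        # A timer interrupt enqueued meanwhile cannot preempt this step;
        # it waits in the queue like any other message.
        log.append(f"done:{msg}")
    return log

log = run_engine(["task-A", "timer", "task-B"])
assert log == ["done:task-A", "done:timer", "done:task-B"]
```

Because completion order is a pure function of queue order, a primary and a secondary engine fed the same queue produce the same log, which is exactly the property the patent relies on.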
INTERENGINE COMMUNICATION

In the present invention, communication between the OS engine and I/O engine is controlled. The invention is designed to preserve a single source of input to the OS engine, thereby preventing time dependent events and changes made by the I/O engines from affecting the state of the OS engine.

Communication between the I/O engine and OS engine is characterized as follows:

1. The OS engine can only access its own OS engine memory. All communication between the OS engine and the I/O engine must occur in the memory of the OS engine. The OS engine cannot access memory designated as I/O engine memory. Memory coherency is preserved. The primary OS engine and secondary OS engine are mirrored in this invention, but the primary I/O engine and secondary I/O engine are not. Therefore, the memory contents of each I/O engine can be different. So long as the OS engines do not access the I/O memory, the state synchronization is maintained.

2. When the OS engine requests that a block of memory be modified by the I/O engine, the OS engine may not access that memory block until the I/O engine sends back an event notifying the OS engine that the modification has been done. The primary and secondary OS engines do not operate in exact synchronization. There may be some skewing and divergence of their operations (although the states always converge). In addition, the primary and secondary I/O engines may modify the OS engine memory at different times. If decisions were then made by the OS engine related to the current value of a memory location in the process of being changed by the I/O engine, and the memory locations contained different data due to the different modification times, the synchronization of the states between the two OS engines would be lost.

In actual operation, if the OS engine requires a copy of data from the I/O engine, it allocates a work buffer to hold the data and provides the address of the work buffer to the I/O engine. The I/O engine copies the requested data into the work buffer and generates an event to the OS engine confirming that the data has been placed. The OS engine copies the data from the work buffer to its ultimate destination and releases the work buffer.
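The four-step work-buffer exchange above can be sketched as follows. All class and method names (`OSEngineSide`, `IOEngineSide`, `fetch`, `fill`) are invented for the sketch, and the synchronous `fill` call stands in for the real asynchronous event round-trip; the structure of the exchange is the point.

```python
class IOEngineSide:
    """Holds data the OS engine cannot touch directly."""
    def __init__(self, data):
        self.data = data

    def fill(self, key, buf):
        # Copies requested data into the caller-supplied work buffer;
        # in the real system this would end with a "data placed" event.
        buf.extend(self.data[key])

class OSEngineSide:
    def __init__(self, io_engine):
        self.io = io_engine
        self.memory = {}          # OS engine memory

    def fetch(self, key, dest):
        buf = []                          # 1. allocate a work buffer
        self.io.fill(key, buf)            # 2. give its address to the I/O engine
        # 3. the completion event has (conceptually) arrived; copy out
        self.memory[dest] = list(buf)     # 4. copy to the final destination,
        buf.clear()                       #    then release the work buffer
        return self.memory[dest]

io = IOEngineSide({"sector7": [1, 2, 3]})
os_side = OSEngineSide(io)
assert os_side.fetch("sector7", "cache") == [1, 2, 3]
```

The OS engine never reads I/O engine memory; the only shared location is the work buffer it allocated itself, and it does not look at that buffer until the copy is confirmed.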
3. The I/O engine cannot change memory designated as OS engine memory unless it has been given explicit control over that memory location by the OS engine. Once the I/O engine has transferred control of the OS engine memory back to the OS engine (via an event), the I/O engine cannot access that memory.
4. The OS engine software cannot "poll" for a change in a memory value without relinquishing control of the processor during the poll loop, because the OS engine cannot be preemptive or interrupt driven in the present implementation. All changes are made via events, and new events are not accepted until the processor is relinquished by the running process.

When the primary server fails, the secondary server becomes the primary server. The address of the OS engine does not change, but messages received from the "network" are rerouted to direct the messages to the secondary server.

DISK MIRRORING

The primary storage 14 and the secondary storage 20 must be mirrored for operation of this invention. When a new secondary engine is brought on line, the disk system maps the drives on the secondary engine to the corresponding drives on the primary engine. The drives on the two engines are marked with a "current synchronization level" counter that can be used to indicate which drive is more current or that two drives are already fully synchronized. If there is any change to the synchronization state (i.e., the other server has failed), the current synchronization level is incremented by the surviving server. The surviving engine also starts tracking disk blocks which are written to disk. When the failed engine comes back on line, after verifying that it has the same media as before, the repaired engine can be resynchronized by transferring over only the disk blocks that were changed while it was out of service. When the system is first brought up and the original primary engine is brought on line, it tracks which disk blocks have been changed for the same reasons.
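The bookkeeping described above, a synchronization-level counter plus a set of blocks dirtied while the peer was down, can be sketched as follows. The class `MirroredDrive` and its method names are invented for this illustration.

```python
class MirroredDrive:
    """Sketch of resynchronization bookkeeping: a "current
    synchronization level" counter plus a dirty-block set."""
    def __init__(self):
        self.sync_level = 0
        self.blocks = {}
        self.dirty = set()        # blocks written since the peer failed
        self.peer_up = True

    def peer_failed(self):
        # Synchronization state changed: bump the counter, start tracking.
        self.peer_up = False
        self.sync_level += 1
        self.dirty.clear()

    def write(self, block_no, data):
        self.blocks[block_no] = data
        if not self.peer_up:
            self.dirty.add(block_no)

    def resync(self, stale):
        # Transfer only the blocks changed while the peer was out of service.
        for b in self.dirty:
            stale.blocks[b] = self.blocks[b]
        stale.sync_level = self.sync_level
        self.dirty.clear()
        self.peer_up = True

live, stale = MirroredDrive(), MirroredDrive()
live.write(1, "aa"); stale.write(1, "aa")   # mirrored writes
live.peer_failed()                          # stale drops out
live.write(2, "bb")                         # only block 2 becomes dirty
live.resync(stale)
assert stale.blocks == {1: "aa", 2: "bb"} and stale.sync_level == 1
```

Comparing the two counters on reconnection tells the system which drive is more current, so a full copy is avoided whenever the dirty set is small.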

PRIMARY AND SECONDARY I/O ENGINE COMMUNICATION
The I/O engine of the primary system determines the sequence of events provided to the primary OS engine and the secondary OS engine. An event, plus any data that was modified in the primary OS engine memory, is communicated to the secondary OS engine before the primary OS engine is given the event in its event queue. This communication is over bus 15A. The secondary system's I/O engine modifies the secondary OS engine memory and provides the event to the secondary OS engine.

In addition to communicating events, the primary and secondary I/O engines communicate other information. Mechanisms are provided so that various driver layer support routines can communicate with their counterparts in the other system. This communication is bi-directional and is over bus 15B. Examples of such communication include completion of disk I/O requests and communication of disk I/O data when the data is only stored on one of the systems due to disk hardware failure.

There are two procedures used for communications between the OS engine and the I/O engine. "AddFSEvent" is used by the I/O engine to give an event to the OS engine, and "MakeIORequest" is called by the OS engine to communicate a request to the I/O engine. AddFSEvent can only be called by the primary I/O engine. Both calls use a request type or event type to identify the request or event being made. In addition, both calls pass a parameter defined in a function-specific manner. For example, it may be a pointer to a data structure in the OS engine memory.
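The two calls named above might look as follows. Only the names AddFSEvent and MakeIORequest come from the text; the signatures, the queue representation, and the tuple layout are guesses made for this sketch.

```python
from collections import deque

os_event_queue = deque()      # events destined for the OS engine
io_request_queue = deque()    # requests destined for the I/O engine

def AddFSEvent(event_type, parameter, data_pointers=()):
    """Called only by the primary I/O engine. The optional pointers name
    OS-engine data structures whose modifications must also be shipped
    to the secondary server along with the event."""
    os_event_queue.append((event_type, parameter, tuple(data_pointers)))

def MakeIORequest(request_type, parameter):
    """Called by the OS engine to hand a request to the I/O engine.
    The parameter is function-specific, e.g. a pointer-like reference
    to a structure in OS engine memory."""
    io_request_queue.append((request_type, parameter))

MakeIORequest("DISK_WRITE", {"block": 9})
AddFSEvent("DISK_WRITE_DONE", {"block": 9})
assert io_request_queue[0][0] == "DISK_WRITE"
assert os_event_queue[0][0] == "DISK_WRITE_DONE"
```

The symmetry matters: every request type eventually has a matching event type, which is what lets the secondary pair responses with the requests it has been tracking.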

When the primary system I/O engine modifies a data structure in the OS engine, the same modification needs to be made in the secondary OS engine as well before the event can be given to the OS engine. AddFSEvent can be given pointers to data structures in the OS engine that will be transferred to the secondary server along with events, to transfer OS engine data modifications to the secondary system.

In the secondary system, there are handler procedures in the I/O engine, one per request type, that are called when events are received from the primary server. The handler procedure is called with the original parameter and pointers to the areas in the OS engine that need to be modified.

The secondary I/O engine event handler procedures have the option of accepting or holding off the events. Hold off would be used if the event is in response to a request from the OS engine and the secondary system has not yet received the request. If the event were not held off, then memory could potentially be prematurely changed in the OS engine. Usually, the event handlers in the secondary I/O engine remove an outstanding request that they have been tracking and signal to accept the event. After the data is copied, the event is given to the secondary OS engine. Note that the secondary system event handlers can make other modifications to OS engine memory if required by the implementation.

It is important for the primary I/O engine to wait until the secondary system receives an event before giving the event to the primary OS engine. Otherwise, the primary OS engine could process the event and provide a response before the original event has been transferred to the secondary system (the event could be delayed in a queue on the primary system waiting to be sent to the secondary system). If the primary system generated a request that was a function of the event not yet transferred to the secondary system, then if the primary system failed, its state, as viewed from an external client, would not be synchronized with the secondary system.

SERVER STATES OF OPERATION AND TRANSITIONS

The I/O engine software runs in four states: no server active state, primary system with no secondary state, primary system with secondary state, and secondary system state. In addition, the I/O engine makes the following state transitions: no server active to primary system no secondary, primary system no secondary to primary system with secondary, and secondary system to primary system. There are some additional states that occur during the synchronization of a secondary system.

The states of the system of this invention are illustrated in Figure 5. As noted, the I/O engine operates in one of four states S1, S2, S3 and S4. State S1, no server engine, occurs when the I/O engine is operational but the OS engine is not. State S2, primary no secondary, occurs when both the I/O engine and OS engine are loaded, but the system is not mirrored. When the system is mirrored, the OS engine will become the primary OS engine and the I/O engine will act as the primary I/O engine.

State S3 is referred to as primary with secondary. In this state, the I/O engine is running in a mirrored primary system. State S4, secondary with primary, occurs when the I/O engine is running in a mirrored secondary system.

There are five possible state transitions that can be experienced by the I/O engine. These are indicated by lines T1-T5. The first transition T1 is from state S1 to state S2. This transition occurs after the OS engine is activated.

The second transition T2 is from state S2 to state S3 and occurs within the primary system when it is synchronized with the secondary system. Transition T3 is from state S1 to state S4 and occurs within the secondary system when the OS engine is synchronized with the primary system.

Transition T4 is from state S4 to state S2 and occurs when the primary system fails. Transition T5 is from state S3 to state S2 and occurs when the secondary system fails.
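The four states and five transitions just enumerated form a small state machine, which can be written out as a transition table. This is only a restatement of the figure in code; the table representation and function name `step` are choices made for the sketch.

```python
# States S1-S4 and transitions T1-T5 as described for Figure 5.
S1 = "no server engine"
S2 = "primary no secondary"
S3 = "primary with secondary"
S4 = "secondary with primary"

TRANSITIONS = {
    ("T1", S1): S2,   # OS engine is activated
    ("T2", S2): S3,   # primary synchronized with a secondary
    ("T3", S1): S4,   # synchronized as the secondary of a primary
    ("T4", S4): S2,   # primary system fails: secondary takes over
    ("T5", S3): S2,   # secondary system fails
}

def step(state, transition):
    # Raises KeyError for a transition not defined from this state.
    return TRANSITIONS[(transition, state)]

# A secondary that loses its primary becomes an unmirrored primary.
state = step(step(S1, "T3"), "T4")
assert state == S2
```

Note that every failure path ends in S2: the surviving server always continues as a primary with no secondary until a new mirror is synchronized.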


SECONDARY SERVER TRACKING AND EXECUTION OF REQUESTS

The secondary system I/O engine receives requests from its own OS engine but usually does not execute them. Instead, it enqueues the request and waits until the primary I/O system responds to the request, then gets a copy of the response (the event generated by the primary I/O system), dequeues its own copy of the request and allows the response "event" to be given to its own OS engine.

The secondary I/O engine has to enqueue the requests from the OS engine for several reasons. First of all, the OS engine usually expects some sort of response "event" from every one of its requests. If the primary system fails, then the secondary system (now the primary system) completes the request and generates the appropriate response event. Another reason is that the secondary system has to wait until it has received the request before it can approve receiving the response event (a case which can occur if the primary system is significantly ahead of the secondary system); otherwise the secondary system may transfer data to its OS engine that the OS engine is not yet prepared to receive. If the secondary system has enqueued the request, it will accept the response event; if not, it signals the primary system to "hold off" and try again.
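The accept/hold-off decision above reduces to a membership test against the set of requests the secondary has seen. The class name `SecondaryIOEngine` and the string return values are invented for the sketch.

```python
class SecondaryIOEngine:
    """Sketch of the secondary's request tracking: a response event is
    accepted only if the matching OS-engine request has been seen."""
    def __init__(self):
        self.outstanding = set()

    def request_from_os(self, req_id):
        # Enqueue, but (usually) do not execute: wait for the primary.
        self.outstanding.add(req_id)

    def event_from_primary(self, req_id):
        if req_id not in self.outstanding:
            return "hold off"       # primary is ahead; ask it to retry
        self.outstanding.discard(req_id)
        return "accept"             # safe to give the event to the OS engine

sec = SecondaryIOEngine()
# Primary is ahead: its response arrives before the local request exists.
assert sec.event_from_primary(42) == "hold off"
sec.request_from_os(42)
assert sec.event_from_primary(42) == "accept"
```

Holding off rather than buffering the early event keeps the protocol simple: the primary retries, and by then the secondary's own OS engine has normally issued the matching request.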

There are requests given by the OS engine that may need to be executed by both servers and then have the actual completion "event" coordinated by the primary system. One example of this is disk writes. The secondary system has to signal the primary system when it is done with the request; the primary system waits until it has completed the write and has received completion confirmation from the secondary system before it generates the completion "event."

A flow diagram illustrating the execution of events and requests is illustrated in Figures 4A-4C. Referring first to Figure 4A, the operation of the OS engines is illustrated.

The operation of the OS engine when it generates a request is shown at steps 51 and 52. The operation of the OS engine when it receives an event is shown at steps 53 and 54. At step 51, the management layer of the OS engine determines that there is a need to perform an I/O operation. At step 52, the OS engine generates a request for the I/O engine and enters a wait mode, waiting for a reply event from the I/O engine.

At step 53, an event is received from the I/O engine in the event queue of the OS engine. The event is given to the appropriate management layer block such as the disk block, LAN block, keyboard block, etc. At step 54, the management layer completes the initial I/O event by matching it with the original request.
A flow chart illustrating the operation of the I/O engine during event processing states is illustrated in Figure 4B. Steps 55-58 illustrate the primary I/O engine and steps 59-63 illustrate the secondary I/O engine. At step 55, the management layer of the primary I/O engine determines there is an event for the OS engine. At step 56, this event is built into a message and communicated to the secondary I/O engine. The primary I/O engine then waits until the secondary I/O engine has acknowledged the event before providing the message to the primary OS engine. At decision block 57, a decision is made as to whether the event has been accepted by the secondary I/O engine. If the event has not yet been accepted, the primary I/O engine waits until acknowledgement has been made. If the secondary I/O engine has accepted the event, satisfying the condition of decision block 57, the I/O engine places the event in the primary OS engine event queue at step 58.

The secondary I/O engine, at step 59, waits for an event from the primary I/O engine. At decision block 60, the secondary I/O engine determines whether it is ready for the received event. If the secondary I/O engine is not ready, it sends a "don't accept" message to the primary I/O engine at step 61 and returns to step 59 to wait for another event. If the secondary I/O engine is ready to take the event, and the conditions at decision block 60 are satisfied, the secondary I/O engine sends an acknowledgement of the event to the primary I/O engine at step 62. The secondary I/O engine then places the event in the secondary OS engine event queue at step 63.
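Steps 55-63 can be sketched in miniature, with both I/O engines modeled in one process. The function `deliver_event` and the retry-loop structure are assumptions for illustration; the invariant shown is the real one: the event reaches the primary OS engine's queue only after the secondary has accepted it.

```python
from collections import deque

def deliver_event(event, secondary_ready, primary_queue, secondary_queue):
    """Steps 55-63 in miniature: the primary I/O engine may enqueue the
    event for its own OS engine only after the secondary acknowledges."""
    attempts = 0
    while True:
        attempts += 1
        if secondary_ready():                  # decision block 60
            secondary_queue.append(event)      # step 63: secondary enqueues
            break                              # ack sent back (step 62)
        # step 61: "don't accept"; primary waits and retries (block 57)
    primary_queue.append(event)                # step 58, only after the ack
    return attempts

pq, sq = deque(), deque()
readiness = iter([False, False, True])         # secondary ready on 3rd try
tries = deliver_event("disk-done", lambda: next(readiness), pq, sq)
assert tries == 3
assert list(pq) == ["disk-done"] == list(sq)
```

Ordering the two appends this way is what guarantees that a primary crash can never leave the primary's OS engine ahead of the secondary's.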

Figure 4C illustrates the processing state of the I/O engine when processing requests generated by the OS engine. Steps 70-74 illustrate the state of the primary I/O engine during these operations and steps 75-81 illustrate the secondary I/O engine during these operations. At step 70, the message level of the I/O engine determines that there is a request available in the request queue. At step 71, the request is executed by the I/O engine. This request may be a disk write operation, sending a packet on the LAN, etc. At decision block 72, it is determined whether execution of the request by the secondary I/O engine is also required. If no other execution is required, the primary I/O engine proceeds to decision block 74A. If a secondary execution is required, the primary I/O engine proceeds to decision block 73. If the secondary step is completed, the primary I/O engine proceeds to decision block 74A. If the secondary step is not completed, the primary I/O engine waits until the secondary step has been completed. At decision block 74A, a determination is made as to whether the request generates a completion event. If the answer is yes, the primary I/O engine proceeds to step 74B and generates the completion event. If a completion event is not required, the primary I/O engine proceeds to step 74C and is done.

At step 75, the secondary I/O engine message level determines that there is a request available from the OS engine. At decision block 76, a determination is made as to whether the secondary processor is required to execute the request. If the secondary I/O engine is to execute the request, the secondary I/O engine proceeds to step 77 and executes the request. After execution of the request, the secondary I/O engine informs the primary I/O engine of completion. If the secondary I/O engine is not to execute the request, the secondary I/O engine proceeds to decision block 79 and determines whether the request generates a completion event. If there is no completion event generated by the request, the secondary I/O engine proceeds to step 80 and is done. If the request does generate an event, the secondary I/O engine awaits the corresponding event from the primary I/O engine at step 81.
SERVER SYNCHRONIZATION SEQUENCE

During the synchronization of the secondary system with the primary system, the entire "state" of the OS engine, as well as the state of the primary I/O engine pertaining to the state of the OS engine, must be communicated to the secondary system. To initiate the synchronization of the primary and secondary systems, the primary OS engine is "starved" of new events. That is, no new events are provided to the event queue of the primary system. After the message queue of the primary system is empty, the primary system OS engine loops, waiting for a new event. When the OS engine is waiting for a new event, it again is in a stable state and remains consistent until a new event is encountered. The entire state of the OS engine is then contained in the memory image of the OS engine; the memory image is then simply transferred to the secondary system. Eventually, both of the OS engines are given the same set of new events and begin mirroring each other.
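The starve-then-transfer sequence above can be sketched as follows. The `Engine` class, its dictionary "memory image", and the `synchronize` helper are invented for the sketch; the real memory image is of course the engine's full RAM, not a dictionary.

```python
from collections import deque

class Engine:
    def __init__(self):
        self.queue = deque()
        self.memory = {}          # the "memory image" embodies the state

    def run_until_starved(self):
        # Process every queued event; an empty queue means the engine
        # is looping, waiting for an event, i.e. in a stable state.
        while self.queue:
            key, val = self.queue.popleft()
            self.memory[key] = val

def synchronize(primary, secondary, withheld):
    # 1. Starve the primary of new events until it reaches a stable state.
    primary.run_until_starved()
    # 2. Transfer the memory image to the secondary.
    secondary.memory = dict(primary.memory)
    # 3. From now on, give both engines the same new events.
    for ev in withheld:
        primary.queue.append(ev)
        secondary.queue.append(ev)
    primary.run_until_starved()
    secondary.run_until_starved()

p, s = Engine(), Engine()
p.queue.extend([("a", 1), ("b", 2)])
synchronize(p, s, withheld=[("c", 3)])
assert p.memory == s.memory == {"a": 1, "b": 2, "c": 3}
```

Starving the queue first is essential: copying the image while an event was mid-execution would capture a transient state the secondary could never reproduce.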

A flow diagram illustrating the synchronization sequence of this invention is illustrated in Figure 6. Steps 85-89 represent the states and transitions of the primary server. Steps 90-93 represent the states and transitions of the secondary server. The primary server is initially at state S2 at step 85 and the secondary server is initially at state S1 (I/O engine only) at step 90.

The I/O engines coordinate the synchronization sequencing. When the servers are given a command to synchronize, the management software of the primary I/O engine prepares for synchronization at step 86. This allows the various driver support layers to communicate with the OS engine and complete any tasks that would prevent synchronization. The primary system starts "starving" the OS engine and stops taking requests from the OS engine as well.

Next, any outstanding requests that are being executed by the I/O engine are completed (and the appropriate completion event is transferred to the OS engine memory image, but is hidden and not given at this time to the OS engine). At steps 87 and 91, the I/O engines exchange state information. The primary I/O engine provides its state information to the secondary I/O engine so that the I/O engines are aware of the state of each other, and the secondary I/O engine becomes aware of any outstanding state from the OS engine. This step is represented by step 91 of the secondary I/O engine sequence.

At step 88, the primary I/O engine transfers the OS engine memory image to the secondary server. This corresponds to step 92 of the secondary server sequence in which the secondary I/O engine receives the OS engine memory image from the primary server.

At step 89, the synchronization is complete and the primary system is in state S3 (primary with secondary). Similarly, at the corresponding step 93 of the secondary server sequence, the synchronization process is complete and the secondary server is in state S4.

There can be server or communications failures during the synchronization sequence. If the primary system fails or the server-to-server communication link fails, the secondary system must quit as well. If the secondary system fails or if the communication link fails, the primary system must recover and return back to the "PrimaryNoSecondary" S2 state. These failures are signaled at different times during the synchronization sequence. After the change happens, the hidden and queued-up events are given back to the OS engine and the I/O engine starts processing requests from the OS engine again. If a failure occurs during synchronization, the I/O engine management software needs to undo whatever changes have been done to synchronize and return back to the non-mirrored state.

TRANSITION DUE TO PRIMARY SERVER FAILURE

When the primary system fails, the secondary system must be able to step in and assert itself as the server, with the only thing that changes being the LAN communications route to reach the server. Packets being sent to the server at the time of failure can be lost. However, all LAN communication protocols must be able to handle lost packets. The secondary I/O management support layers are notified of the failure.

When the failure occurs, the driver support layers need to take any outstanding requests they have from the OS engine and complete executing them. The secondary-turned-primary system's AddFSEvent procedure is activated prior to the failure notification so that new events can be given to the OS engine. Any messages being sent to the former primary system are discarded. Any requests from the OS engine that were awaiting data or completion status from the primary system are completed as is. There is a need to use a special event to notify the OS engine that the servers were changed. For example, a special event is used to tell the OS engine to send a special control packet to all of the clients, indicating that the change occurred. This can accelerate the routing level switchover to the new server.
TRANSITION DUE TO SECONDARY SERVER FAILURE

When there is a secondary system failure, all messages queued to be sent to the secondary system are discarded. If the messages are OS engine events, they are simply provided to the OS engine. The driver support layer of the I/O engine completes any requests that were waiting pending notification from the secondary system.

MULTIPLE OS ENGINES AND EXTRA PROCESSORS

The present invention has been described in terms of primary and secondary servers that each have a single OS engine. An alternate embodiment of the present invention is illustrated in Figure 7, in which the primary and/or secondary server can have one or more OS engines. Referring to Figure 7, the primary server is comprised of three processors. Processor 1 implements the I/O engine of the primary server. A first and second OS engine are implemented on processor 2 and processor 3, respectively.

Similarly, the secondary server has a first processor implementing an I/O engine and second and third processors implementing first and second OS engines. In operation, a separate event queue is maintained for each OS engine so that corresponding OS engines operate on the same events. In this manner, the states of each OS engine can be maintained substantially identical so that upon failure of one server, another can begin operation.

Thus, a fault tolerant computer system has been described.

Claims (49)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A method for providing a fault tolerant computer system comprising the steps of:
providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine;
providing a second processing means, said second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine;
determining a state of said first processing means and providing said state to said second processing means;
defining an operation that can change said state of said first OS engine as an event;
providing a plurality of events to said first I/O engine and converting each of said events into a message;
providing said message to a first message queue in said first OS engine and to a second message queue in said second OS engine;
executing said message in said first OS engine and said second OS engine;
and switching said computer system operation to said second processing means upon failure of said first processing means, such that no loss of operation of said computer system occurs during said switch-over.
2. The method of claim 1 further including the steps of:

providing each event to said second I/O engine when said first processing means does not operate;
converting each of said events to a message in said second I/O engine;
and providing said message to said second message queue in said second OS
engine for execution by said second OS engine.
3. The method of claim 1 wherein said steps of determining the state of said first processing means and providing said state to said second processing means comprises the steps of:
executing in said first OS engine any messages available to said first OS
engine until said first OS engine has achieved a stable state; and transferring a memory image of said first OS engine through said first I/O engine to said second processing means.
4. The method of claim 1 wherein said first processing means comprises at least one processor.
5. The method of claim 1 wherein said second processing means comprises at least one processor.
6. The method of claim 1 further including the steps of:
generating a request in said OS engine, said request for accomplishing an input/output operation;
providing said request to a first request queue in said first I/O engine for execution by said first I/O engine; and generating a reply to said first OS engine to indicate execution of said request.
7. The method of claim 1 wherein said event is asynchronous.
8. A fault tolerant computer system comprising:
first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine;
second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine;
said first I/O engine coupled to said second I/O engine on a first bus;
said first I/O engine including a converting means for converting operations that can change said state of said first OS engine into a message;
said first I/O engine for providing said message to a first message queue in said first OS engine and to a second message queue in said second OS engine;
said first OS engine and said second OS engine including means for executing said message; and means for switching said computer system operation to said second OS
engine upon failure of said first processing means such that no loss of operation of said computer system occurs during said switch-over.
9. The computer system of claim 8 wherein said first processing means comprises at least one processor.
10. The computer system of claim 8 wherein said second processing means comprises at least one processor.
11. The computer system of claim 8 further including a first storage means coupled to said first processing means, said first storage means storing a memory image corresponding to said state of said first OS engine.
12. The computer system of claim 11 further including a second storage means coupled to said second processing means, said second storage means storing a memory image corresponding to said state of said second OS engine.
13. The computer system of claim 8 wherein said first OS engine controls execution of instructions of said computer system.
14. The computer system of claim 13 wherein said second OS engine controls execution of instructions of said computer system when said first OS
engine cannot execute said instructions.
15. The computer system of claim 8 wherein said I/O engine controls communication with input and output devices.
16. The computer system of claim 8 wherein said message comprises synchronous and asynchronous events.
17. A method for providing a fault tolerant computer system comprising the steps of:

providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine;
providing a second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine;
determining a state of said first processing means and providing said state to said second processing means;
defining an operation that can change said state of said first OS engine as an event;
providing a plurality of events to said first I/O engine and serializing said events into an event sequence;
providing successive events in said event sequence to said first OS engine and to said second OS engine;
executing said successive events in said first OS engine and said second OS engine; and switching said computer system operation to said second processing means upon failure of said first processing means, such that no loss of operation to said computer system occurs during said switch-over.
18. The method of claim 17 further including the steps of:
providing each event to said second I/O engine when said first processing means does not operate;
serializing said events into an event sequence in said second I/O engine;
and providing successive events of said event sequence to said second OS
engine for execution by said second OS engine.
19. The method of claim 17 wherein said step of determining the state of said first processing means and providing said state to said second processing means comprises the steps of:
executing in said first OS engine any successive events available to said first OS engine until said first OS engine has achieved a stable state; and transferring a memory image of said first OS engine through said first I/O engine to said second processing means.
20. The method of claim 17 wherein said first processing means comprises at least one processor.
21. The method of claim 17 wherein said second processing means comprises at least one processor.
22. The method of claim 17 further including the steps of:
generating a request in said OS engine, said request for accomplishing an input/output operation;
providing said request to a first request queue in said first I/O engine for execution by said first I/O engine; and generating a reply to said first OS engine to indicate execution of said request.
23. The method of claim 17 wherein said plurality of events are asynchronous.
24. A fault tolerant computer system comprising:

first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine;
second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine;
said first I/O engine coupled to said second I/O engine on a first bus;
said first I/O engine including a converting means for converting operations that can change said state of said first OS engine into an operation sequence;
said first I/O engine for providing said operations in sequence to said first OS engine and to said second OS engine;
said first OS engine and said second OS engine including means for executing said operations; and means for switching said computer system operation to said second OS
engine upon failure of said first processing means such that no loss of operation of said computer system occurs during said switch-over.
25. The computer system of claim 24 wherein said first processing means comprises at least one processor.
26. The computer system of claim 24 wherein said second processing means comprises at least one processor.
27. The computer system of claim 24 further including a first storage means coupled to said first processing means, said first storage means storing a memory image corresponding to said state of said first OS engine.
28. The computer system of claim 27 further including a second storage means coupled to said second processing means, said second storage means storing a memory image corresponding to said state of said second OS engine.
29. The computer system of claim 24 wherein said first OS engine controls execution of instructions of said computer system.
30. The computer system of claim 29 wherein said second OS engine controls execution of instructions of said computer system when said first OS
engine cannot execute said instructions.
31. The computer system of claim 24 wherein said I/O engine controls communication with input and output devices.
32. The computer system of claim 24 wherein said sequence comprises synchronous and asynchronous operations.
33. A method of disk mirroring in a computer system, comprising the steps of:
providing a first processing means for operation of said computer system;
providing a second processing means for operation of said computer system;
providing said first processing means with primary mass storage;
providing said second processing means with secondary mass storage;
providing a first manager for control of said primary mass storage;

providing a second manager for control of said secondary mass storage;
synchronizing said primary mass storage and said secondary mass storage using said first manager and said second manager;
marking said primary mass storage and said secondary mass storage with a current synchronization level counter value to indicate that said primary mass storage and said secondary mass storage are fully synchronized; and changing said current synchronization level counter value when there is a change to the synchronization state.
34. The method of claim 33 further including the steps of:
determining whether both processing means will perform a disk operation using said first manager;
completing said disk operation by said first processing means and waiting until it has received completion confirmation from said second processing means when both I/O engines perform said disk operation;
determining by said first manager which processing means will perform said disk operation when said first manager determines that only one processing means will perform said disk operation; and transferring data by the processing means that performs said disk operation to the other processing means when only one processing means performs said disk operation.
35. The method of claim 33 wherein said first processing means tracks which memory blocks have been changed.
36. The method of claim 33 further including the steps of:

changing said current synchronization level counter value by a surviving processing means upon the failure of the other processing means;
tracking memory blocks written to disk by said surviving processing means;
verifying that the failed processing means has the same data as before said failure upon said failed processing means being brought back on line; and synchronizing a repaired processing means by transferring to said repaired processing means the memory blocks that were changed while it was out of service.
37. A method for executing an operation in a fault tolerant computer system comprising the steps of:
providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine;
generating a request by said first OS engine to said first I/O engine and said first OS engine waiting for a reply from said first I/O engine; and executing in said first I/O engine the requested operation as specified by said request and matching an initial I/O event with said request.
38. The method of claim 37 further including the steps of:
providing a second processing means, said second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine;
determining by said first I/O engine that there is an event for said first OS engine;

building said event into a message by said first I/O engine and communicating said message to said second I/O engine;
waiting by said first I/O engine until said second I/O engine accepts said message before providing said message to said first OS engine;
accepting said message from said first I/O engine by said second I/O
engine if said second I/O engine is ready;
sending an acknowledgement of said message by said second I/O engine to said first I/O engine and placing said event in the event queue of said second OS engine;
placing the event in the event queue of said first OS engine after acceptance of said message by said second I/O engine;
executing said event by said first I/O engine;
determining if said event should be executed by said second processing means;
waiting by the first I/O engine for the completion of said event by said second processing means if secondary execution is necessary;
executing said request by said second processing means if said secondary execution is necessary;
informing said first I/O engine of completion of said request by said second processing means if said secondary execution is necessary;
determining by said first I/O engine if said event generates a completion event;
generating said completion event by said first I/O engine if said completion event is necessary; and waiting by the second I/O engine for said completion event from said first I/O engine if said completion event is necessary.
39. A method for synchronous management of timer interrupts, comprising the steps of:
providing a first processing means for operation of a computer system, said first processing means comprising a first operating system (OS) engine and an input/output (I/O) engine;
defining a timer interrupt as an event;
placing said timer interrupt in an event queue;
relinquishing control of said first OS engine by a task currently running on said first OS engine; and executing said timer interrupt by said first OS engine when said OS
engine reaches a message in said event queue.
40. A method of defining the states of a fault tolerant computer system comprising the steps of:
providing a first processing means for operation of said computer system, said first processing means comprising a first operating system (OS) engine and a first input/output (I/O) engine;
providing a second processing means, said second processing means comprising a second operating system (OS) engine and a second input/output (I/O) engine;
providing a first state to define the status of the fault tolerant computer to identify when said first I/O engine is operational but said first OS engine is not operational, called No Server Active State;
providing a second state to define the status of the fault tolerant computer to identify when said first I/O engine is operational but said second I/O engine is not called Primary System With No Secondary State;

providing a third state to define the status of the fault tolerant computer to identify when said first I/O engine is running in a mirrored primary system;
providing a fourth state to define the status of the fault tolerant computer to identify when said first I/O engine is running in a mirrored secondary system;
allowing a transition from said first state to said second state when said first OS engine is activated;
allowing a transition from said second state to said third state when said first processing means is synchronized with said second processing means;
allowing a transition from said first state to said fourth state when said second OS engine is synchronized with said first processing means;
allowing a transition from said fourth state to said second state when said first processing means fails; and allowing a transition from said third state to said second state when said second processing means fails.
41. A fault tolerant computer system comprising:
a first processing means for operation of said computer system, a second processing means for operation of said computer system, wherein said second processing means is a backup processing means for said first processing means, and a first bus connecting said first processing means and said second processing means, characterized in that said first processing means comprises a first operating system (OS) engine and a first input/output (I/O) engine, said first OS engine comprising a first message queue, said first message queue coupled to said first I/O engine for receiving messages, and that said second processing means comprises a second OS engine and a second I/O engine, said second OS
engine comprising a second message queue, said second message queue coupled to said second I/O engine for receiving messages;
that said first bus connects said first I/O engine and said second I/O
engine for transferring messages; and wherein said first I/O engine is configured to convert operations that can change the state of said first OS engine into messages, said messages provided to said first message queue and to said second message queue for subsequent execution by said first OS engine and said second OS engine, respectively.
42. The computer system of claim 41 wherein said first processing means comprises at least one processor.
43. The computer system of claim 41 wherein said second processing means comprises at least one processor.
44. The computer system of claim 41 further including a first storage means coupled to said first processing means, said first storage means storing a memory image corresponding to said state of said first OS engine.
45. The computer system of claim 44 further including a second storage means coupled to said second processing means, said second storage means storing a memory image corresponding to said state of said second OS engine.
46. The computer system of claim 41 wherein said first OS engine controls execution of instructions of said computer system.
47. The computer system of claim 41 wherein said I/O engine controls communication with input and output devices.
48. The computer system of claim 46 wherein said second OS engine controls execution of instructions of said computer system when said first OS
engine cannot execute said instructions.
49. The computer system of claim 41 wherein said message comprises synchronous and asynchronous events.
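The disk mirroring scheme of claims 33 through 36, in which both stores carry a synchronization level counter that a surviving server changes on peer failure while it tracks the blocks written in the interim, can be sketched as follows. All class and method names are hypothetical; this is an illustration of the claimed bookkeeping, not the patented implementation.

```python
class MirrorManager:
    """Sketch of the synchronization level counter and dirty-block tracking."""
    def __init__(self):
        self.sync_level = 1                  # current synchronization level
        self.primary_level = self.sync_level
        self.secondary_level = self.sync_level
        self.dirty_blocks = set()            # blocks written while peer is down
        self.peer_up = True

    def on_peer_failure(self):
        # The surviving side changes the counter: levels now differ,
        # marking the two stores as no longer fully synchronized.
        self.sync_level += 1
        self.primary_level = self.sync_level
        self.peer_up = False

    def write_block(self, block_no):
        if not self.peer_up:
            # Track blocks changed while the peer is out of service.
            self.dirty_blocks.add(block_no)

    def resynchronize(self):
        # Transfer only the blocks changed while the peer was down,
        # then mark both stores with the same level again.
        changed = sorted(self.dirty_blocks)
        self.dirty_blocks.clear()
        self.secondary_level = self.primary_level
        self.peer_up = True
        return changed
```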
CA002091993A 1990-09-24 1991-08-09 Fault tolerant computer system Expired - Lifetime CA2091993C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US586807 1984-03-06
US07/586,807 US5157663A (en) 1990-09-24 1990-09-24 Fault tolerant computer system

Publications (2)

Publication Number Publication Date
CA2091993A1 CA2091993A1 (en) 1992-03-25
CA2091993C true CA2091993C (en) 1998-12-22

Family

ID=24347179

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002091993A Expired - Lifetime CA2091993C (en) 1990-09-24 1991-08-09 Fault tolerant computer system

Country Status (13)

Country Link
US (2) US5157663A (en)
EP (1) EP0550457B1 (en)
JP (1) JP3156083B2 (en)
KR (1) KR0137406B1 (en)
AT (1) ATE152261T1 (en)
AU (1) AU660939B2 (en)
BR (1) BR9106875A (en)
CA (1) CA2091993C (en)
DE (1) DE69125840T2 (en)
FI (1) FI101432B (en)
NO (1) NO302986B1 (en)
RU (1) RU2108621C1 (en)
WO (1) WO1992005487A1 (en)

Families Citing this family (342)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
JP2864741B2 (en) * 1990-12-19 1999-03-08 株式会社日立製作所 Communication system that guarantees data integrity
EP0496506B1 (en) * 1991-01-25 2000-09-20 Hitachi, Ltd. Fault tolerant computer system incorporating processing units which have at least three processors
JP3189903B2 (en) * 1991-06-03 2001-07-16 富士通株式会社 Device with capability saving / restoring mechanism
DE69227956T2 (en) * 1991-07-18 1999-06-10 Tandem Computers Inc Multiprocessor system with mirrored memory
US5278969A (en) * 1991-08-02 1994-01-11 At&T Bell Laboratories Queue-length monitoring arrangement for detecting consistency between duplicate memories
EP0537903A2 (en) * 1991-10-02 1993-04-21 International Business Machines Corporation Distributed control system
WO1993009494A1 (en) * 1991-10-28 1993-05-13 Digital Equipment Corporation Fault-tolerant computer processing using a shadow virtual processor
US5379417A (en) * 1991-11-25 1995-01-03 Tandem Computers Incorporated System and method for ensuring write data integrity in a redundant array data storage system
JPH05191388A (en) * 1992-01-14 1993-07-30 Fujitsu Ltd Communication processing system
KR930020266A (en) * 1992-03-06 1993-10-19 윌리암 에이취. 뉴콤 How to interface applications and operating system extensions with a computer
JPH05260134A (en) * 1992-03-12 1993-10-08 Fujitsu Ltd Monitor system for transmission equipment
FR2688907B1 (en) * 1992-03-20 1994-05-27 Kiota Int METHOD FOR RECORDING AND PLAYING A TWO-LAYER MAGNETIC TAPE AND SYSTEM FOR IMPLEMENTING THE SAME.
CA2106280C (en) * 1992-09-30 2000-01-18 Yennun Huang Apparatus and methods for fault-tolerant computing employing a daemon monitoring process and fault-tolerant library to provide varying degrees of fault tolerance
US5715386A (en) * 1992-09-30 1998-02-03 Lucent Technologies Inc. Apparatus and methods for software rejuvenation
GB2273180A (en) * 1992-12-02 1994-06-08 Ibm Database backup and recovery.
US5751932A (en) * 1992-12-17 1998-05-12 Tandem Computers Incorporated Fail-fast, fail-functional, fault-tolerant multiprocessor system
US5751955A (en) 1992-12-17 1998-05-12 Tandem Computers Incorporated Method of synchronizing a pair of central processor units for duplex, lock-step operation by copying data into a corresponding locations of another memory
US5469573A (en) * 1993-02-26 1995-11-21 Sytron Corporation Disk operating system backup and recovery system
US5608872A (en) * 1993-03-19 1997-03-04 Ncr Corporation System for allowing all remote computers to perform annotation on an image and replicating the annotated image on the respective displays of other computers
US5664195A (en) * 1993-04-07 1997-09-02 Sequoia Systems, Inc. Method and apparatus for dynamic installation of a driver on a computer system
JP3047275B2 (en) * 1993-06-11 2000-05-29 株式会社日立製作所 Backup switching control method
US5812748A (en) * 1993-06-23 1998-09-22 Vinca Corporation Method for improving recovery performance from hardware and software errors in a fault-tolerant computer system
AU7211194A (en) * 1993-06-23 1995-01-17 Vinca Corporation Method for improving disk mirroring error recovery in a computer system including an alternate communication path
AU7211594A (en) * 1993-07-20 1995-02-20 Vinca Corporation Method for rapid recovery from a network file server failure
US5978565A (en) 1993-07-20 1999-11-02 Vinca Corporation Method for rapid recovery from a network file server failure including method for operating co-standby servers
US6289390B1 (en) 1993-08-18 2001-09-11 Microsoft Corporation System and method for performing remote requests with an on-line service network
US5473771A (en) * 1993-09-01 1995-12-05 At&T Corp. Fault-tolerant processing system architecture
US5566299A (en) * 1993-12-30 1996-10-15 Lockheed Martin Corporation Fault tolerant method and system for high availability document image and coded data processing
KR0128271B1 (en) * 1994-02-22 1998-04-15 윌리암 티. 엘리스 Remote data duplexing
JP2790034B2 (en) * 1994-03-28 1998-08-27 日本電気株式会社 Non-operational memory update method
JP3140906B2 (en) * 1994-04-12 2001-03-05 株式会社エヌ・ティ・ティ・データ How to update and restore system files
WO1995034860A1 (en) * 1994-06-10 1995-12-21 Sequoia Systems, Inc. Main memory system and checkpointing protocol for fault-tolerant computer system
US5659682A (en) * 1994-06-16 1997-08-19 International Business Machines Corporation Scheme to determine completion of directory operations for server recovery
US5566297A (en) * 1994-06-16 1996-10-15 International Business Machines Corporation Non-disruptive recovery from file server failure in a highly available file system for clustered computing environments
JPH0816421A (en) * 1994-07-04 1996-01-19 Hitachi Ltd Electronic device with simplex/duplex switching input/ output port, and fault tolerance system
JPH0816446A (en) * 1994-07-05 1996-01-19 Fujitsu Ltd Client server system
US5537533A (en) * 1994-08-11 1996-07-16 Miralink Corporation System and method for remote mirroring of digital data from a primary network server to a remote network server
US5764903A (en) * 1994-09-26 1998-06-09 Acer America Corporation High availability network disk mirroring system
US5996001A (en) * 1994-09-27 1999-11-30 Quarles; Philip High availability on-line transaction processing system
US5649152A (en) 1994-10-13 1997-07-15 Vinca Corporation Method and system for providing a static snapshot of data stored on a mass storage system
US5835953A (en) 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
KR0133337B1 (en) * 1994-12-21 1998-04-21 양승택 Tarket system control
US5757642A (en) * 1995-01-20 1998-05-26 Dell Usa L.P. Multi-function server input/output subsystem and method
CA2167634A1 (en) * 1995-01-23 1996-07-24 Michael E. Fisher Method and apparatus for maintaining network connections across a voluntary process switchover
US5790791A (en) * 1995-05-12 1998-08-04 The Boeing Company Apparatus for synchronizing flight management computers where only the computer chosen to be the master received pilot inputs and transfers the inputs to the spare
US5675723A (en) * 1995-05-19 1997-10-07 Compaq Computer Corporation Multi-server fault tolerance using in-band signalling
US5696895A (en) * 1995-05-19 1997-12-09 Compaq Computer Corporation Fault tolerant multiple network servers
US5822512A (en) * 1995-05-19 1998-10-13 Compaq Computer Corporation Switching control in a fault tolerant system
TW292365B (en) * 1995-05-31 1996-12-01 Hitachi Ltd Computer management system
US5774668A (en) * 1995-06-07 1998-06-30 Microsoft Corporation System for on-line service in which gateway computer uses service map which includes loading condition of servers broadcasted by application servers for load balancing
US5621885A (en) * 1995-06-07 1997-04-15 Tandem Computers, Incorporated System and method for providing a fault tolerant computer program runtime support environment
US5956489A (en) * 1995-06-07 1999-09-21 Microsoft Corporation Transaction replication system and method for supporting replicated transaction-based services
US6901433B2 (en) * 1995-06-07 2005-05-31 Microsoft Corporation System for providing users with a filtered view of interactive network directory obtains from remote properties cache that provided by an on-line service
JP3086779B2 (en) * 1995-06-19 2000-09-11 株式会社東芝 Memory state restoration device
US5594863A (en) * 1995-06-26 1997-01-14 Novell, Inc. Method and apparatus for network file recovery
US5933599A (en) * 1995-07-17 1999-08-03 Microsoft Corporation Apparatus for presenting the content of an interactive on-line network
US6728959B1 (en) 1995-08-08 2004-04-27 Novell, Inc. Method and apparatus for strong affinity multiprocessor scheduling
US5956509A (en) 1995-08-18 1999-09-21 Microsoft Corporation System and method for performing remote requests with an on-line service network
US5941947A (en) * 1995-08-18 1999-08-24 Microsoft Corporation System and method for controlling access to data entities in a computer network
US6029175A (en) * 1995-10-26 2000-02-22 Teknowledge Corporation Automatic retrieval of changed files by a network software agent
US5864657A (en) * 1995-11-29 1999-01-26 Texas Micro, Inc. Main memory system and checkpointing protocol for fault-tolerant computer system
US5737514A (en) * 1995-11-29 1998-04-07 Texas Micro, Inc. Remote checkpoint memory system and protocol for fault-tolerant computer system
US5751939A (en) * 1995-11-29 1998-05-12 Texas Micro, Inc. Main memory system and checkpointing protocol for fault-tolerant computer system using an exclusive-or memory
US5745672A (en) * 1995-11-29 1998-04-28 Texas Micro, Inc. Main memory system and checkpointing protocol for a fault-tolerant computer system using a read buffer
US5802265A (en) * 1995-12-01 1998-09-01 Stratus Computer, Inc. Transparent fault tolerant computer system
US5838921A (en) * 1995-12-08 1998-11-17 Silicon Graphics, Inc. Distributed connection management system with replication
GB2308040A (en) * 1995-12-09 1997-06-11 Northern Telecom Ltd Telecommunications system
GB9601585D0 (en) * 1996-01-26 1996-03-27 Hewlett Packard Co Fault-tolerant processing method
GB9601584D0 (en) * 1996-01-26 1996-03-27 Hewlett Packard Co Fault-tolerant processing method
US5777874A (en) * 1996-02-12 1998-07-07 Allen-Bradley Company, Inc. Programmable controller backup system
US5761518A (en) * 1996-02-29 1998-06-02 The Foxboro Company System for replacing control processor by operating processor in partially disabled mode for tracking control outputs and in write enabled mode for transferring control loops
US5905860A (en) * 1996-03-15 1999-05-18 Novell, Inc. Fault tolerant electronic licensing system
US5708776A (en) * 1996-05-09 1998-01-13 Elonex I.P. Holdings Automatic recovery for network appliances
US5796934A (en) * 1996-05-31 1998-08-18 Oracle Corporation Fault tolerant client server system
US6032271A (en) * 1996-06-05 2000-02-29 Compaq Computer Corporation Method and apparatus for identifying faulty devices in a computer system
US5852724A (en) * 1996-06-18 1998-12-22 Veritas Software Corp. System and method for "N" primary servers to fail over to "1" secondary server
US5774660A (en) * 1996-08-05 1998-06-30 Resonate, Inc. World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network
US6321270B1 (en) * 1996-09-27 2001-11-20 Nortel Networks Limited Method and apparatus for multicast routing in a network
TW379298B (en) * 1996-09-30 2000-01-11 Toshiba Corp Memory updating history saving device and memory updating history saving method
US6484208B1 (en) 1996-10-15 2002-11-19 Compaq Information Technologies Group, L.P. Local access of a remotely mirrored disk in a computer network
US5917997A (en) * 1996-12-06 1999-06-29 International Business Machines Corporation Host identity takeover using virtual internet protocol (IP) addressing
JP3507307B2 (en) * 1996-12-27 2004-03-15 キヤノン株式会社 Information processing apparatus, network print system, control method therefor, and storage medium storing program
US6151688A (en) 1997-02-21 2000-11-21 Novell, Inc. Resource management in a clustered computer system
JPH10240557A (en) * 1997-02-27 1998-09-11 Mitsubishi Electric Corp Stand-by redundant system
US6654933B1 (en) 1999-09-21 2003-11-25 Kasenna, Inc. System and method for media stream indexing
US6073209A (en) * 1997-03-31 2000-06-06 Ark Research Corporation Data storage controller providing multiple hosts with access to multiple storage subsystems
US5941999A (en) * 1997-03-31 1999-08-24 Sun Microsystems Method and system for achieving high availability in networked computer systems
US7389312B2 (en) * 1997-04-28 2008-06-17 Emc Corporation Mirroring network data to establish virtual storage area network
US6247080B1 (en) 1997-05-13 2001-06-12 Micron Electronics, Inc. Method for the hot add of devices
US6202111B1 (en) 1997-05-13 2001-03-13 Micron Electronics, Inc. Method for the hot add of a network adapter on a system including a statically loaded adapter driver
US6304929B1 (en) 1997-05-13 2001-10-16 Micron Electronics, Inc. Method for hot swapping a programmable adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6170067B1 (en) 1997-05-13 2001-01-02 Micron Technology, Inc. System for automatically reporting a system failure in a server
US6134668A (en) * 1997-05-13 2000-10-17 Micron Electronics, Inc. Method of selective independent powering of portion of computer system through remote interface from remote interface power supply
US6189109B1 (en) 1997-05-13 2001-02-13 Micron Electronics, Inc. Method of remote access and control of environmental conditions
US6243838B1 (en) 1997-05-13 2001-06-05 Micron Electronics, Inc. Method for automatically reporting a system failure in a server
US6195717B1 (en) 1997-05-13 2001-02-27 Micron Electronics, Inc. Method of expanding bus loading capacity
US6138250A (en) * 1997-05-13 2000-10-24 Micron Electronics, Inc. System for reading system log
US6182180B1 (en) 1997-05-13 2001-01-30 Micron Electronics, Inc. Apparatus for interfacing buses
US6249828B1 (en) 1997-05-13 2001-06-19 Micron Electronics, Inc. Method for the hot swap of a mass storage adapter on a system including a statically loaded adapter driver
US6179486B1 (en) 1997-05-13 2001-01-30 Micron Electronics, Inc. Method for hot add of a mass storage adapter on a system including a dynamically loaded adapter driver
US6243773B1 (en) 1997-05-13 2001-06-05 Micron Electronics, Inc. Configuration management system for hot adding and hot replacing devices
US6249885B1 (en) 1997-05-13 2001-06-19 Karl S. Johnson Method for managing environmental conditions of a distributed processor system
US6324608B1 (en) 1997-05-13 2001-11-27 Micron Electronics Method for hot swapping of network components
US6122746A (en) * 1997-05-13 2000-09-19 Micron Electronics, Inc. System for powering up and powering down a server
US6292905B1 (en) * 1997-05-13 2001-09-18 Micron Technology, Inc. Method for providing a fault tolerant network using distributed server processes to remap clustered network resources to other servers during server failure
US6122758A (en) * 1997-05-13 2000-09-19 Micron Electronics, Inc. System for mapping environmental resources to memory for program access
US6163853A (en) * 1997-05-13 2000-12-19 Micron Electronics, Inc. Method for communicating a software-generated pulse waveform between two servers in a network
US6173346B1 (en) 1997-05-13 2001-01-09 Micron Electronics, Inc. Method for hot swapping a programmable storage adapter using a programmable processor for selectively enabling or disabling power to adapter slot in response to respective request signals
US5987554A (en) * 1997-05-13 1999-11-16 Micron Electronics, Inc. Method of controlling the transfer of information across an interface between two buses
US6073255A (en) * 1997-05-13 2000-06-06 Micron Electronics, Inc. Method of reading system log
US6247079B1 (en) * 1997-05-13 2001-06-12 Micron Electronics, Inc Apparatus for computer implemented hot-swap and hot-add
US6249834B1 (en) 1997-05-13 2001-06-19 Micron Technology, Inc. System for expanding PCI bus loading capacity
US6499073B1 (en) 1997-05-13 2002-12-24 Micron Electronics, Inc. System using programmable processor for selectively enabling or disabling power to adapter in response to respective request signals
US6219734B1 (en) 1997-05-13 2001-04-17 Micron Electronics, Inc. Method for the hot add of a mass storage adapter on a system including a statically loaded adapter driver
US6282673B1 (en) 1997-05-13 2001-08-28 Micron Technology, Inc. Method of recording information system events
US6363497B1 (en) 1997-05-13 2002-03-26 Micron Technology, Inc. System for clustering software applications
US6338150B1 (en) 1997-05-13 2002-01-08 Micron Technology, Inc. Diagnostic and managing distributed processor system
US6526333B1 (en) 1997-05-13 2003-02-25 Micron Technology, Inc. Computer fan speed control system method
US6253334B1 (en) 1997-05-13 2001-06-26 Micron Electronics, Inc. Three bus server architecture with a legacy PCI bus and mirrored I/O PCI buses
US6134673A (en) * 1997-05-13 2000-10-17 Micron Electronics, Inc. Method for clustering software applications
US6170028B1 (en) 1997-05-13 2001-01-02 Micron Electronics, Inc. Method for hot swapping a programmable network adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6330690B1 (en) 1997-05-13 2001-12-11 Micron Electronics, Inc. Method of resetting a server
US6163849A (en) * 1997-05-13 2000-12-19 Micron Electronics, Inc. Method of powering up or powering down a server to a maintenance state
US6269417B1 (en) 1997-05-13 2001-07-31 Micron Technology, Inc. Method for determining and displaying the physical slot number of an expansion bus device
US6418492B1 (en) 1997-05-13 2002-07-09 Micron Electronics Method for computer implemented hot-swap and hot-add
US6148355A (en) * 1997-05-13 2000-11-14 Micron Electronics, Inc. Configuration management method for hot adding and hot replacing devices
US6192434B1 (en) 1997-05-13 2001-02-20 Micron Electronics, Inc System for hot swapping a programmable adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6247898B1 (en) 1997-05-13 2001-06-19 Micron Electronics, Inc. Computer fan speed control system
US6145098A (en) 1997-05-13 2000-11-07 Micron Electronics, Inc. System for displaying system status
US6202160B1 (en) 1997-05-13 2001-03-13 Micron Electronics, Inc. System for independent powering of a computer system
US5892928A (en) * 1997-05-13 1999-04-06 Micron Electronics, Inc. Method for the hot add of a network adapter on a system including a dynamically loaded adapter driver
US6490610B1 (en) * 1997-05-30 2002-12-03 Oracle Corporation Automatic failover for clients accessing a resource through a server
US6199110B1 (en) 1997-05-30 2001-03-06 Oracle Corporation Planned session termination for clients accessing a resource through a server
US5983371A (en) * 1997-07-11 1999-11-09 Marathon Technologies Corporation Active failure detection
JP3111935B2 (en) * 1997-08-15 2000-11-27 日本電気株式会社 LAN emulation server redundant system
US6035420A (en) * 1997-10-01 2000-03-07 Micron Electronics, Inc. Method of performing an extensive diagnostic test in conjunction with a bios test routine
US6212585B1 (en) 1997-10-01 2001-04-03 Micron Electronics, Inc. Method of automatically configuring a server after hot add of a device
US6088816A (en) * 1997-10-01 2000-07-11 Micron Electronics, Inc. Method of displaying system status
US6154835A (en) * 1997-10-01 2000-11-28 Micron Electronics, Inc. Method for automatically configuring and formatting a computer system and installing software
US6009541A (en) * 1997-10-01 1999-12-28 Micron Electronics, Inc. Apparatus for performing an extensive diagnostic test in conjunction with a bios test routine
US6263387B1 (en) 1997-10-01 2001-07-17 Micron Electronics, Inc. System for automatically configuring a server after hot add of a device
GB2330034A (en) 1997-10-01 1999-04-07 Northern Telecom Ltd A narrowband to broadband interface for a communications system
US6065053A (en) * 1997-10-01 2000-05-16 Micron Electronics, Inc. System for resetting a server
US6175490B1 (en) 1997-10-01 2001-01-16 Micron Electronics, Inc. Fault tolerant computer system
US6199173B1 (en) 1997-10-01 2001-03-06 Micron Electronics, Inc. Method for mapping environmental resources to memory for program access
US6138179A (en) * 1997-10-01 2000-10-24 Micron Electronics, Inc. System for automatically partitioning and formatting a primary hard disk for installing software in which selection of extended partition size is not related to size of hard disk
US6014667A (en) * 1997-10-01 2000-01-11 Novell, Inc. System and method for caching identification and location information in a computer network
US6173420B1 (en) * 1997-10-31 2001-01-09 Oracle Corporation Method and apparatus for fail safe configuration
US6799224B1 (en) 1998-03-10 2004-09-28 Quad Research High speed fault tolerant mass storage network information server
DE19810814B4 (en) * 1998-03-12 2004-10-28 Telefonaktiebolaget Lm Ericsson (Publ) Computer system and status copying process for scalable software updates
DE19810807A1 (en) * 1998-03-12 1999-09-23 Ericsson Telefon Ab L M Message conversion system for upgrading systems without halting
US6185695B1 (en) * 1998-04-09 2001-02-06 Sun Microsystems, Inc. Method and apparatus for transparent server failover for highly available objects
US6260155B1 (en) 1998-05-01 2001-07-10 Quad Research Network information server
US6061602A (en) 1998-06-23 2000-05-09 Creative Lifestyles, Inc. Method and apparatus for developing application software for home automation system
US6195739B1 (en) 1998-06-29 2001-02-27 Cisco Technology, Inc. Method and apparatus for passing data among processor complex stages of a pipelined processing engine
US6513108B1 (en) 1998-06-29 2003-01-28 Cisco Technology, Inc. Programmable processing engine for efficiently processing transient data
US6101599A (en) * 1998-06-29 2000-08-08 Cisco Technology, Inc. System for context switching between processing elements in a pipeline of processing elements
US6119215A (en) * 1998-06-29 2000-09-12 Cisco Technology, Inc. Synchronization and control system for an arrayed processing engine
US6836838B1 (en) 1998-06-29 2004-12-28 Cisco Technology, Inc. Architecture for a processor complex of an arrayed pipelined processing engine
US6154849A (en) * 1998-06-30 2000-11-28 Sun Microsystems, Inc. Method and apparatus for resource dependency relaxation
US6223234B1 (en) 1998-07-17 2001-04-24 Micron Electronics, Inc. Apparatus for the hot swap and add of input/output platforms and devices
US6205503B1 (en) 1998-07-17 2001-03-20 Mallikarjunan Mahalingam Method for the hot swap and add of input/output platforms and devices
US9361243B2 (en) 1998-07-31 2016-06-07 Kom Networks Inc. Method and system for providing restricted access to a storage medium
US8234477B2 (en) 1998-07-31 2012-07-31 Kom Networks, Inc. Method and system for providing restricted access to a storage medium
DE19836347C2 (en) 1998-08-11 2001-11-15 Ericsson Telefon Ab L M Fault-tolerant computer system
US7305451B2 (en) * 1998-08-24 2007-12-04 Microsoft Corporation System for providing users an integrated directory service containing content nodes located in different groups of application servers in computer network
US6266785B1 (en) 1998-09-01 2001-07-24 Ncr Corporation File system filter driver apparatus and method
US6247141B1 (en) 1998-09-24 2001-06-12 Telefonaktiebolaget Lm Ericsson (Publ) Protocol for providing replicated servers in a client-server system
US6728839B1 (en) 1998-10-28 2004-04-27 Cisco Technology, Inc. Attribute based memory pre-fetching technique
US6460146B1 (en) * 1998-12-04 2002-10-01 Cisco Technology, Inc. System and method for establishing processor redundancy
US6389459B1 (en) * 1998-12-09 2002-05-14 Ncr Corporation Virtualized storage devices for network disk mirroring applications
US6173386B1 (en) 1998-12-14 2001-01-09 Cisco Technology, Inc. Parallel processor with debug capability
US6385747B1 (en) 1998-12-14 2002-05-07 Cisco Technology, Inc. Testing of replicated components of electronic device
US6920562B1 (en) 1998-12-18 2005-07-19 Cisco Technology, Inc. Tightly coupled software protocol decode with hardware data encryption
US6853623B2 (en) 1999-03-05 2005-02-08 Cisco Technology, Inc. Remote monitoring of switch network
US6457138B1 (en) * 1999-04-19 2002-09-24 Cisco Technology, Inc. System and method for crash handling on redundant systems
US6298474B1 (en) 1999-04-30 2001-10-02 Intergral Vision, Inc. Method and system for interactively developing a graphical control-flow structure and associated application software for use in a machine vision system and computer-readable storage medium having a program for executing the method
US6529983B1 (en) 1999-11-03 2003-03-04 Cisco Technology, Inc. Group and virtual locking mechanism for inter processor synchronization
US6681341B1 (en) 1999-11-03 2004-01-20 Cisco Technology, Inc. Processor isolation method for integrated multi-processor systems
TW454120B (en) * 1999-11-11 2001-09-11 Miralink Corp Flexible remote data mirroring
US7203732B2 (en) * 1999-11-11 2007-04-10 Miralink Corporation Flexible remote data mirroring
US6338126B1 (en) * 1999-12-06 2002-01-08 Legato Systems, Inc. Crash recovery without complete remirror
US6769027B1 (en) * 2000-01-31 2004-07-27 Avaya Technology Corp. System and method for using multi-headed queues for bookmarking in backup/recover scenarios
US6738826B1 (en) 2000-02-24 2004-05-18 Cisco Technology, Inc. Router software upgrade employing redundant processors
JP2001256067A (en) * 2000-03-08 2001-09-21 Mitsubishi Electric Corp Power saving control method for processor, storage medium and power saving controller for processor
US6892237B1 (en) 2000-03-28 2005-05-10 Cisco Technology, Inc. Method and apparatus for high-speed parsing of network messages
JP3651353B2 (en) * 2000-04-04 2005-05-25 日本電気株式会社 Digital content reproduction system and digital content distribution system
US6687851B1 (en) 2000-04-13 2004-02-03 Stratus Technologies Bermuda Ltd. Method and system for upgrading fault-tolerant systems
US6820213B1 (en) 2000-04-13 2004-11-16 Stratus Technologies Bermuda, Ltd. Fault-tolerant computer system with voter delay buffer
US6735717B1 (en) * 2000-04-13 2004-05-11 Gnp Computers, Inc. Distributed computing system clustering model providing soft real-time responsiveness and continuous availability
US6901481B2 (en) 2000-04-14 2005-05-31 Stratus Technologies Bermuda Ltd. Method and apparatus for storing transactional information in persistent memory
US6691225B1 (en) 2000-04-14 2004-02-10 Stratus Technologies Bermuda Ltd. Method and apparatus for deterministically booting a computer system having redundant components
US6802022B1 (en) 2000-04-14 2004-10-05 Stratus Technologies Bermuda Ltd. Maintenance of consistent, redundant mass storage images
US6505269B1 (en) 2000-05-16 2003-01-07 Cisco Technology, Inc. Dynamic addressing mapping to eliminate memory resource contention in a symmetric multiprocessor system
US6892221B2 (en) * 2000-05-19 2005-05-10 Centerbeam Data backup
US7225244B2 (en) * 2000-05-20 2007-05-29 Ciena Corporation Common command interface
US6742134B1 (en) 2000-05-20 2004-05-25 Equipe Communications Corporation Maintaining a local backup for data plane processes
US6715097B1 (en) 2000-05-20 2004-03-30 Equipe Communications Corporation Hierarchical fault management in computer systems
US6983362B1 (en) 2000-05-20 2006-01-03 Ciena Corporation Configurable fault recovery policy for a computer system
US6760859B1 (en) 2000-05-23 2004-07-06 International Business Machines Corporation Fault tolerant local area network connectivity
US7263476B1 (en) * 2000-06-12 2007-08-28 Quad Research High speed information processing and mass storage system and method, particularly for information and application servers
US20020004849A1 (en) * 2000-06-22 2002-01-10 Elink Business Fault tolerant internet communications system
US6728897B1 (en) * 2000-07-25 2004-04-27 Network Appliance, Inc. Negotiating takeover in high availability cluster
US7277956B2 (en) 2000-07-28 2007-10-02 Kasenna, Inc. System and method for improved utilization of bandwidth in a computer system serving multiple users
CA2457557A1 (en) * 2000-08-10 2002-02-21 Miralink Corporation Data/presence insurance tools and techniques
US6804819B1 (en) 2000-09-18 2004-10-12 Hewlett-Packard Development Company, L.P. Method, system, and computer program product for a data propagation platform and applications of same
US7386610B1 (en) 2000-09-18 2008-06-10 Hewlett-Packard Development Company, L.P. Internet protocol data mirroring
US6977927B1 (en) 2000-09-18 2005-12-20 Hewlett-Packard Development Company, L.P. Method and system of allocating storage resources in a storage area network
US6871271B2 (en) 2000-12-21 2005-03-22 Emc Corporation Incrementally restoring a mass storage device to a prior state
US6941490B2 (en) * 2000-12-21 2005-09-06 Emc Corporation Dual channel restoration of data between primary and backup servers
US6862692B2 (en) * 2001-01-29 2005-03-01 Adaptec, Inc. Dynamic redistribution of parity groups
US7054927B2 (en) * 2001-01-29 2006-05-30 Adaptec, Inc. File system metadata describing server directory information
US6990667B2 (en) 2001-01-29 2006-01-24 Adaptec, Inc. Server-independent object positioning for load balancing drives and servers
US20020138559A1 (en) * 2001-01-29 2002-09-26 Ulrich Thomas R. Dynamically distributed file system
US6606690B2 (en) 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage
EP1374080A2 (en) 2001-03-02 2004-01-02 Kasenna, Inc. Metadata enabled push-pull model for efficient low-latency video-content distribution over a network
US8244864B1 (en) 2001-03-20 2012-08-14 Microsoft Corporation Transparent migration of TCP based connections within a network load balancing system
US7296194B1 (en) 2002-03-28 2007-11-13 Shoregroup Inc. Method and apparatus for maintaining the status of objects in computer networks using virtual state machines
US7028228B1 (en) 2001-03-28 2006-04-11 The Shoregroup, Inc. Method and apparatus for identifying problems in computer networks
US7197561B1 (en) * 2001-03-28 2007-03-27 Shoregroup, Inc. Method and apparatus for maintaining the status of objects in computer networks using virtual state machines
US7065672B2 (en) 2001-03-28 2006-06-20 Stratus Technologies Bermuda Ltd. Apparatus and methods for fault-tolerant computing using a switching fabric
US6971043B2 (en) * 2001-04-11 2005-11-29 Stratus Technologies Bermuda Ltd Apparatus and method for accessing a mass storage device in a fault-tolerant server
US6928579B2 (en) * 2001-06-27 2005-08-09 Nokia Corporation Crash recovery system
US7072911B1 (en) * 2001-07-27 2006-07-04 Novell, Inc. System and method for incremental replication of changes in a state based distributed database
US6920579B1 (en) 2001-08-20 2005-07-19 Network Appliance, Inc. Operator initiated graceful takeover in a node cluster
US7137026B2 (en) * 2001-10-04 2006-11-14 Nokia Corporation Crash recovery system
US7177267B2 (en) * 2001-11-09 2007-02-13 Adc Dsl Systems, Inc. Hardware monitoring and configuration management
US6954877B2 (en) * 2001-11-29 2005-10-11 Agami Systems, Inc. Fault tolerance using logical checkpointing in computing systems
US7296125B2 (en) * 2001-11-29 2007-11-13 Emc Corporation Preserving a snapshot of selected data of a mass storage system
US7730153B1 (en) 2001-12-04 2010-06-01 Netapp, Inc. Efficient use of NVRAM during takeover in a node cluster
US6802024B2 (en) 2001-12-13 2004-10-05 Intel Corporation Deterministic preemption points in operating system execution
KR100441712B1 (en) * 2001-12-29 2004-07-27 엘지전자 주식회사 Extensible Multi-processing System and Method of Replicating Memory thereof
US7007142B2 (en) * 2002-02-19 2006-02-28 Intel Corporation Network data storage-related operations
US7039828B1 (en) 2002-02-28 2006-05-02 Network Appliance, Inc. System and method for clustered failover without network support
GB0206604D0 (en) * 2002-03-20 2002-05-01 Global Continuity Plc Improvements relating to overcoming data processing failures
US7571221B2 (en) * 2002-04-03 2009-08-04 Hewlett-Packard Development Company, L.P. Installation of network services in an embedded network server
US7058849B2 (en) * 2002-07-02 2006-06-06 Micron Technology, Inc. Use of non-volatile memory to perform rollback function
US7885896B2 (en) 2002-07-09 2011-02-08 Avaya Inc. Method for authorizing a substitute software license server
US8041642B2 (en) 2002-07-10 2011-10-18 Avaya Inc. Predictive software license balancing
AU2003259797A1 (en) * 2002-08-05 2004-02-23 Fish, Robert System and method of parallel pattern matching
US6782424B2 (en) * 2002-08-23 2004-08-24 Finite State Machine Labs, Inc. System, method and computer program product for monitoring and controlling network connections from a supervisory operating system
US7698225B2 (en) 2002-08-30 2010-04-13 Avaya Inc. License modes in call processing
US7228567B2 (en) * 2002-08-30 2007-06-05 Avaya Technology Corp. License file serial number tracking
US7216363B2 (en) * 2002-08-30 2007-05-08 Avaya Technology Corp. Licensing duplicated systems
US7681245B2 (en) 2002-08-30 2010-03-16 Avaya Inc. Remote feature activator feature extraction
US7966520B2 (en) * 2002-08-30 2011-06-21 Avaya Inc. Software licensing for spare processors
US7707116B2 (en) 2002-08-30 2010-04-27 Avaya Inc. Flexible license file feature controls
US7051053B2 (en) * 2002-09-30 2006-05-23 Dinesh Sinha Method of lazily replicating files and monitoring log in backup file system
US20040078339A1 (en) * 2002-10-22 2004-04-22 Goringe Christopher M. Priority based licensing
US7171452B1 (en) 2002-10-31 2007-01-30 Network Appliance, Inc. System and method for monitoring cluster partner boot status over a cluster interconnect
US7890997B2 (en) 2002-12-26 2011-02-15 Avaya Inc. Remote feature activation authentication file system
US7155638B1 (en) * 2003-01-17 2006-12-26 Unisys Corporation Clustered computer system utilizing separate servers for redundancy in which the host computers are unaware of the usage of separate servers
US7149923B1 (en) * 2003-01-17 2006-12-12 Unisys Corporation Software control using the controller as a component to achieve resiliency in a computer system utilizing separate servers for redundancy
US7246255B1 (en) * 2003-01-17 2007-07-17 Unisys Corporation Method for shortening the resynchronization time following failure in a computer system utilizing separate servers for redundancy
US7260557B2 (en) * 2003-02-27 2007-08-21 Avaya Technology Corp. Method and apparatus for license distribution
US7231489B1 (en) 2003-03-03 2007-06-12 Network Appliance, Inc. System and method for coordinating cluster state information
US7373657B2 (en) 2003-03-10 2008-05-13 Avaya Technology Corp. Method and apparatus for controlling data and software access
US20040181696A1 (en) * 2003-03-11 2004-09-16 Walker William T. Temporary password login
US20040184464A1 (en) * 2003-03-18 2004-09-23 Airspan Networks Inc. Data processing apparatus
JP2004295465A (en) * 2003-03-27 2004-10-21 Hitachi Ltd Computer system
US7127442B2 (en) * 2003-04-01 2006-10-24 Avaya Technology Corp. Ironclad notification of license errors
US7739543B1 (en) 2003-04-23 2010-06-15 Netapp, Inc. System and method for transport-level failover for loosely coupled iSCSI target devices
US7260737B1 (en) 2003-04-23 2007-08-21 Network Appliance, Inc. System and method for transport-level failover of FCP devices in a cluster
US7194655B2 (en) * 2003-06-12 2007-03-20 International Business Machines Corporation Method and system for autonomously rebuilding a failed server and a computer system utilizing the same
US20050039074A1 (en) * 2003-07-09 2005-02-17 Tremblay Glenn A. Fault resilient/fault tolerant computing
US7593996B2 (en) * 2003-07-18 2009-09-22 Netapp, Inc. System and method for establishing a peer connection using reliable RDMA primitives
US7716323B2 (en) * 2003-07-18 2010-05-11 Netapp, Inc. System and method for reliable peer communication in a clustered storage system
US7467191B1 (en) 2003-09-26 2008-12-16 Network Appliance, Inc. System and method for failover using virtual ports in clustered systems
US7447860B1 (en) * 2003-09-29 2008-11-04 Emc Corporation System and method for managing data associated with copying and recovery procedures in a data storage environment
US7096331B1 (en) * 2003-09-29 2006-08-22 Emc Corporation System and method for managing data associated with copying and replication procedures in a data storage environment
US7222143B2 (en) * 2003-11-24 2007-05-22 Lenovo (Singapore) Pte Ltd. Safely restoring previously un-backed up data during system restore of a failing system
US7966294B1 (en) 2004-01-08 2011-06-21 Netapp, Inc. User interface system for a clustered storage system
US7340639B1 (en) 2004-01-08 2008-03-04 Network Appliance, Inc. System and method for proxying data access commands in a clustered storage system
US7353388B1 (en) 2004-02-09 2008-04-01 Avaya Technology Corp. Key server for securing IP telephony registration, control, and maintenance
US7272500B1 (en) 2004-03-25 2007-09-18 Avaya Technology Corp. Global positioning system hardware key for software licenses
US8621029B1 (en) 2004-04-28 2013-12-31 Netapp, Inc. System and method for providing remote direct memory access over a transport medium that does not natively support remote direct memory access operations
US7328144B1 (en) 2004-04-28 2008-02-05 Network Appliance, Inc. System and method for simulating a software protocol stack using an emulated protocol over an emulated network
US7478263B1 (en) 2004-06-01 2009-01-13 Network Appliance, Inc. System and method for establishing bi-directional failover in a two node cluster
US7496782B1 (en) 2004-06-01 2009-02-24 Network Appliance, Inc. System and method for splitting a cluster for disaster recovery
US7363366B2 (en) * 2004-07-13 2008-04-22 Teneros Inc. Network traffic routing
US20060015764A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Transparent service provider
WO2006017199A2 (en) * 2004-07-13 2006-02-16 Teneros, Inc. Autonomous service backup and migration
US7321906B2 (en) * 2004-07-23 2008-01-22 Omx Technology Ab Method of improving replica server performance and a replica server system
US7613710B2 (en) * 2004-08-12 2009-11-03 Oracle International Corporation Suspending a result set and continuing from a suspended result set
US7415470B2 (en) * 2004-08-12 2008-08-19 Oracle International Corporation Capturing and re-creating the state of a queue when migrating a session
US7587400B2 (en) * 2004-08-12 2009-09-08 Oracle International Corporation Suspending a result set and continuing from a suspended result set for transparent session migration
US7502824B2 (en) * 2004-08-12 2009-03-10 Oracle International Corporation Database shutdown with session migration
US7743333B2 (en) * 2004-08-12 2010-06-22 Oracle International Corporation Suspending a result set and continuing from a suspended result set for scrollable cursors
US7707405B1 (en) 2004-09-21 2010-04-27 Avaya Inc. Secure installation activation
US7747851B1 (en) 2004-09-30 2010-06-29 Avaya Inc. Certificate distribution via license files
US7965701B1 (en) 2004-09-30 2011-06-21 Avaya Inc. Method and system for secure communications with IP telephony appliance
US8229858B1 (en) 2004-09-30 2012-07-24 Avaya Inc. Generation of enterprise-wide licenses in a customer environment
US7434630B2 (en) * 2004-10-05 2008-10-14 Halliburton Energy Services, Inc. Surface instrumentation configuration for drilling rig operation
KR100651388B1 (en) * 2004-11-25 2006-11-29 삼성전자주식회사 Method for setting receiving tone in wireless terminal
JP4182948B2 (en) * 2004-12-21 2008-11-19 日本電気株式会社 Fault tolerant computer system and interrupt control method therefor
DE102004062116B3 (en) * 2004-12-23 2006-05-11 Ab Skf Bearing arrangement for computer tomography has bearing with inner ring, which stores construction unit, and outer ring, which is connected with damping element, fitted as single element and contain hollow cylindrical basic outline
US7496787B2 (en) * 2004-12-27 2009-02-24 Stratus Technologies Bermuda Ltd. Systems and methods for checkpointing
US9176772B2 (en) * 2005-02-11 2015-11-03 Oracle International Corporation Suspending and resuming of sessions
US8073899B2 (en) * 2005-04-29 2011-12-06 Netapp, Inc. System and method for proxying data access commands in a storage system cluster
US7743286B2 (en) * 2005-05-17 2010-06-22 International Business Machines Corporation Method, system and program product for analyzing demographical factors of a computer system to address error conditions
US20070028144A1 (en) * 2005-07-29 2007-02-01 Stratus Technologies Bermuda Ltd. Systems and methods for checkpointing
US20070038891A1 (en) * 2005-08-12 2007-02-15 Stratus Technologies Bermuda Ltd. Hardware checkpointing system
US7814023B1 (en) 2005-09-08 2010-10-12 Avaya Inc. Secure download manager
US7370235B1 (en) * 2005-09-29 2008-05-06 Emc Corporation System and method for managing and scheduling recovery after a failure in a data storage environment
US7401251B1 (en) * 2005-09-29 2008-07-15 Emc Corporation Architecture for managing failover and recovery after failover in a data storage environment
US7793329B2 (en) 2006-02-06 2010-09-07 Kasenna, Inc. Method and system for reducing switching delays between digital video feeds using multicast slotted transmission technique
JP4585463B2 (en) * 2006-02-15 2010-11-24 富士通株式会社 Program for functioning virtual computer system
CN100353330C (en) * 2006-03-10 2007-12-05 四川大学 Disk mirroring method based on IP network
JP4808524B2 (en) * 2006-03-17 2011-11-02 株式会社日立製作所 Data processing method, data processing system, and data processing program
WO2007140475A2 (en) * 2006-05-31 2007-12-06 Teneros, Inc. Extracting shared state information from message traffic
US7725764B2 (en) * 2006-08-04 2010-05-25 Tsx Inc. Failover system and method
AU2012202229B2 (en) * 2006-08-04 2014-07-10 Tsx Inc. Failover system and method
US7734947B1 (en) 2007-04-17 2010-06-08 Netapp, Inc. System and method for virtual interface failover within a cluster
US7958385B1 (en) 2007-04-30 2011-06-07 Netapp, Inc. System and method for verification and enforcement of virtual interface failover within a cluster
US8130084B2 (en) * 2007-04-30 2012-03-06 International Business Machines Corporation Fault tolerant closed system control using power line communication
US8346719B2 (en) 2007-05-17 2013-01-01 Novell, Inc. Multi-node replication systems, devices and methods
US8818936B1 (en) 2007-06-29 2014-08-26 Emc Corporation Methods, systems, and computer program products for processing read requests received during a protected restore operation
US8421614B2 (en) * 2007-09-19 2013-04-16 International Business Machines Corporation Reliable redundant data communication through alternating current power distribution system
US7870374B2 (en) * 2007-09-27 2011-01-11 International Business Machines Corporation Validating physical and logical system connectivity of components in a data processing system
US8489554B2 (en) 2007-10-05 2013-07-16 Ge Intelligent Platforms, Inc. Methods and systems for operating a sequence of events recorder
US9201745B2 (en) * 2008-01-23 2015-12-01 Omx Technology Ab Method of improving replica server performance and a replica server system
US8688798B1 (en) 2009-04-03 2014-04-01 Netapp, Inc. System and method for a shared write address protocol over a remote direct memory access connection
CN102216726A (en) * 2009-04-24 2011-10-12 株式会社东京技术 Method of measuring an involute gear tooth profile
US9256598B1 (en) 2009-08-19 2016-02-09 Emc Corporation Systems, methods, and computer readable media for copy-on-demand optimization for large writes
CN103081378B (en) * 2010-07-22 2015-05-27 Lg电子株式会社 Method and device for transmitting and receiving downlink data for no-mobility mobile station in idle state
GB201016079D0 (en) * 2010-09-24 2010-11-10 St Microelectronics Res & Dev Apparatus & method
US8706834B2 (en) 2011-06-30 2014-04-22 Amazon Technologies, Inc. Methods and apparatus for remotely updating executing processes
EP2701065B1 (en) * 2012-08-24 2015-02-25 Siemens Aktiengesellschaft Method for operating a redundant automation system
US9251002B2 (en) 2013-01-15 2016-02-02 Stratus Technologies Bermuda Ltd. System and method for writing checkpointing data
US10185631B2 (en) * 2013-07-04 2019-01-22 Data Deposit Box Inc. System and method of performing continuous backup of a data file on a computing device
US9965363B2 (en) * 2013-12-14 2018-05-08 Netapp, Inc. Techniques for LIF placement in SAN storage cluster synchronous disaster recovery
WO2015102875A1 (en) 2013-12-30 2015-07-09 Stratus Technologies Bermuda Ltd. Checkpointing systems and methods of using data forwarding
EP3090345B1 (en) 2013-12-30 2017-11-08 Stratus Technologies Bermuda Ltd. Method of delaying checkpoints by inspecting network packets
WO2015102873A2 (en) 2013-12-30 2015-07-09 Stratus Technologies Bermuda Ltd. Dynamic checkpointing systems and methods
BR112016023577B1 (en) * 2014-04-14 2023-05-09 Huawei Technologies Co., Ltd. Apparatus and method for configuring a redundancy solution in a cloud computing architecture
CN104137482B (en) * 2014-04-14 2018-02-02 Huawei Technologies Co., Ltd. Disaster recovery data center configuration method and device under a cloud computing framework
US9558143B2 (en) 2014-05-09 2017-01-31 Micron Technology, Inc. Interconnect systems and methods using hybrid memory cube links to send packetized data over different endpoints of a data handling device
US9830237B2 (en) * 2015-09-25 2017-11-28 Netapp, Inc. Resynchronization with compliance data preservation
RU170236U1 (en) * 2016-09-19 2017-04-18 Federal State Budgetary Educational Institution of Higher Education "Tomsk State University of Control Systems and Radioelectronics" (TUSUR) Redundant multi-channel computer system
RU2683613C1 (en) * 2018-03-30 2019-03-29 Public Joint-Stock Company "Sberbank of Russia" (PAO Sberbank) POS terminal network control system
US11947465B2 (en) 2020-10-13 2024-04-02 International Business Machines Corporation Buffer overflow trapping

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4471429A (en) * 1979-12-14 1984-09-11 Honeywell Information Systems, Inc. Apparatus for cache clearing
US4530052A (en) * 1982-10-14 1985-07-16 Honeywell Information Systems Inc. Apparatus and method for a data processing unit sharing a plurality of operating systems
US4590554A (en) * 1982-11-23 1986-05-20 Parallel Computers Systems, Inc. Backup fault tolerant computer system
US4615001A (en) * 1984-03-29 1986-09-30 At&T Bell Laboratories Queuing arrangement for initiating execution of multistage transactions
US4658351A (en) * 1984-10-09 1987-04-14 Wang Laboratories, Inc. Task control means for a multi-tasking data processing system
US4979108A (en) * 1985-12-20 1990-12-18 Ag Communication Systems Corporation Task synchronization arrangement and method for remote duplex processors
SE454730B (en) * 1986-09-19 1988-05-24 Asea AB Method and computer equipment for interruption-free transfer of activity from active units to standby units in a central unit
US4959768A (en) * 1989-01-23 1990-09-25 Honeywell Inc. Apparatus for tracking predetermined data for updating a secondary data base
DE69021712T2 (en) * 1990-02-08 1996-04-18 Ibm Restart marking mechanism for fault tolerant systems.

Also Published As

Publication number Publication date
BR9106875A (en) 1993-07-20
NO302986B1 (en) 1998-05-11
EP0550457B1 (en) 1997-04-23
CA2091993A1 (en) 1992-03-25
AU8431091A (en) 1992-04-15
AU660939B2 (en) 1995-07-13
FI101432B1 (en) 1998-06-15
NO931062D0 (en) 1993-03-23
WO1992005487A1 (en) 1992-04-02
FI931276A (en) 1993-05-21
ATE152261T1 (en) 1997-05-15
FI101432B (en) 1998-06-15
NO931062L (en) 1993-05-24
US5157663A (en) 1992-10-20
DE69125840T2 (en) 1997-10-23
DE69125840D1 (en) 1997-05-28
RU2108621C1 (en) 1998-04-10
JP3156083B2 (en) 2001-04-16
EP0550457A4 (en) 1995-10-25
KR0137406B1 (en) 1998-07-01
JPH06504389A (en) 1994-05-19
US5455932A (en) 1995-10-03
FI931276A0 (en) 1993-03-23
EP0550457A1 (en) 1993-07-14

Similar Documents

Publication Publication Date Title
CA2091993C (en) Fault tolerant computer system
Borg et al. A message system supporting fault tolerance
US5668943A (en) Virtual shared disks with application transparent recovery
JP2505928B2 (en) Checkpoint mechanism for fault tolerant systems
US5434975A (en) System for interconnecting a synchronous path having semaphores and an asynchronous path having message queuing for interprocess communications
US5357612A (en) Mechanism for passing messages between several processors coupled through a shared intelligent memory
JP3694273B2 (en) Data processing system having multipath I / O request mechanism
US6625639B1 (en) Apparatus and method for processing a task in a clustered computing environment
US6622259B1 (en) Non-disruptive migration of coordinator services in a distributed computer system
US5442785A (en) Method and apparatus for passing messages between application programs on host processors coupled to a record lock processor
JP2000222368A (en) Method and system for duplication support of remote method call system
WO1997022930A1 (en) Transparent fault tolerant computer system
JPS59133663 (en) Message transmission between task execution means for system of allowing fault in decentralized multiprocessor/computer
JPH0926891A (en) Method for maintenance of network connection between applications and data-processing system
US6785840B1 (en) Call processor system and methods
JP2000510976A (en) Method for synchronizing programs on different computers of an interconnect system
CA1304513C (en) Multiple i/o bus virtual broadcast of programmed i/o instructions
US6393503B2 (en) Efficient transfer of data and events between processes and between processes and drivers in a parallel, fault tolerant message based operating system
US6032267A (en) Apparatus and method for efficient modularity in a parallel, fault tolerant, message based operating system
JP2772068B2 (en) Data assurance processing method for inherited information
JPH11120017A (en) Automatic numbering system, duplex system, and cluster system
Schoeffler Organization of software for multicomputer process control systems
JPS62107343A (en) Method for securing completeness of data for computer system
WO1984004190A1 (en) Multi-computer computer architecture
Borg et al. Fault tolerance in distributed UNIX

Legal Events

Date Code Title Description
EEER Examination request
MKEX Expiry