CA2084298A1 - Integrated software architecture for a highly parallel multiprocessor system - Google Patents

Integrated software architecture for a highly parallel multiprocessor system

Info

Publication number
CA2084298A1
Authority
CA
Canada
Prior art keywords
multithreaded
program
parallel
multiprocessor system
operating system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002084298A
Other languages
French (fr)
Inventor
David M. Cox
Timothy J. Cramer
James C. Rasbold
Robert E. Strout II
Steven G. Olson
Steve S. Chen
Jon A. Masamitsu
Don A. Van Dyke
George A. Spix
David M. Barkai
Richard E. Hessel
David A. Seberger
Diane M. Wengelski
Gregory G. Gaertner
Giacomo G. Brussino
Ashok Chandramouli
Jeremiah D. Burke
Keith J. Thompson
Kelly T. O'hair
Stuart W. Hawkinson
Linda J. O'gara
Mark D. Johnson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cray Research LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CA2084298A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30145Instruction analysis, e.g. decoding, instruction word fields
    • G06F9/30149Instruction analysis, e.g. decoding, instruction word fields of variable length instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17356Indirect interconnection networks
    • G06F15/17368Indirect interconnection networks non hierarchical topologies
    • G06F15/17381Two dimensional, e.g. mesh, torus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/80Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8053Vector processors
    • G06F15/8076Details on data register access
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/80Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8053Vector processors
    • G06F15/8092Array of vector units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/43Checking; Contextual analysis
    • G06F8/433Dependency analysis; Data or control flow analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/445Exploiting fine grain parallelism, i.e. parallelism at instruction level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/45Exploiting coarse grain parallelism in compilation, i.e. parallelism between groups of instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30021Compare instructions, e.g. Greater-Than, Equal-To, MINMAX
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30094Condition code generation, e.g. Carry, Zero flag
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802Instruction prefetching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802Instruction prefetching
    • G06F9/3804Instruction prefetching for branches, e.g. hedging, branch folding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3824Operand accessing
    • G06F9/3834Maintaining memory consistency
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3838Dependency mechanisms, e.g. register scoreboarding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3854Instruction completion, e.g. retiring, committing or graduating
    • G06F9/3856Reordering of instructions, e.g. using queues or age tags
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

ABSTRACT OF THE DISCLOSURE

An integrated software architecture for a highly parallel multiprocessor system having multiple tightly-coupled processors (10) that share a common memory (14) efficiently controls the interface with and execution of programs on such a multiprocessor system. The software architecture combines a symmetrically integrated multithreaded operating system (1000) and an integrated parallel user environment (2000). The operating system distributively implements an anarchy-based scheduling model for the scheduling of processes and resources by allowing each processor (10) to access a single image of the operating system (1000) stored in the common memory that operates on a common set of operating system shared resources (2500). The user environment (2000) provides a common visual representation for a plurality of program development tools that provide compilation, execution and debugging capabilities for multithreaded user programs and assumes parallelism as the standard mode of operation.

Description

WO 91/20033 PCT/US91/04066

INTEGRATED SOFTWARE ARCHITECTURE FOR A HIGHLY PARALLEL MULTIPROCESSOR SYSTEM

This invention relates generally to the field of operating system software and program development tools for computer processing systems. More particularly, the present invention relates to an integrated software architecture for a highly parallel multiprocessor system having multiple, tightly-coupled processors that share a common memory.

It is well recognized that one of the major impediments to the effective utilization of multiprocessor systems is the lack of appropriate software adapted to operate on something other than the traditional von Neumann computer architecture of the type having a single sequential processor with a single memory. Until recently, the vast majority of scientific programs written in the Fortran and C programming languages could not take advantage of the increased parallelism being offered by new multiprocessor systems, particularly the high-speed computer processing systems which are sometimes referred to as supercomputers. It is particularly the lack of operating system software and program development tools that has prevented present multiprocessor systems from achieving significantly increased performance without the need for user application software to be rewritten or customized to run on such systems.
Presently, a limited number of operating systems have attempted to solve some of the problems associated with providing support for parallel software in a multiprocessor system. To better understand the problems

associated with supporting parallel software, it is necessary to establish a common set of definitions for the terms that will be used to describe the creation and execution of a program on a multiprocessor system. As used within the present invention, the term program refers to either a user application program, an operating system program, or a software development program referred to hereinafter as a software development tool. A first set of terms is used to describe the segmenting of the program into logical parts that may be executed in parallel. These terms relate to the static condition of the program and include the concepts of threads and multithreading. A second set of terms is used to describe the actual assignment of those logical parts of the program to be executed on one or more parallel processors. This set of terms relates to the dynamic condition of the program during execution and includes the concepts of processes, process images and process groups.
A thread is a part of a program that is logically independent from another part of the program and can therefore be executed in parallel with other threads of the program. In compiling a program to be run on a multiprocessor system, some compilers attempt to create multiple threads for a program automatically, in addition to those threads that are explicitly identified as portions of the program specifically coded for parallel execution. For example, in the UNICOS operating system for the Cray X-MP and Y-MP supercomputers from Cray Research, Inc., the compilers (one for each programming language) attempt to create multiple threads for a program using a process referred to by Cray Research as Autotasking. In general, however, present compilers have had limited success in creating multiple threads that are based upon an analysis of the program structure to determine whether multithreading is appropriate and that will result in a reduction in execution time of the multithreaded program in proportion to the number of additional processors applied to the multithreaded program.
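The notion of threads as logically independent program parts that may run in parallel can be illustrated with a short sketch. A modern threading API is used here purely for illustration; the patent predates such interfaces and does not prescribe them:

```python
import threading

# Two logically independent parts of a program: partial sums over
# disjoint halves of an array.  Because neither half depends on the
# other, the two threads may execute in parallel.
data = [1, 2, 3, 4, 5, 6, 7, 8]
partial = [0, 0]

def sum_half(half):
    lo, hi = half * len(data) // 2, (half + 1) * len(data) // 2
    partial[half] = sum(data[lo:hi])

threads = [threading.Thread(target=sum_half, args=(h,)) for h in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = partial[0] + partial[1]
print(total)  # → 36
```

A compiler performing automatic multithreading, as described above, would in effect discover such independent regions itself rather than relying on the programmer to mark them.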
The compiler will produce an object code file for each program module. A program module contains the source code version for all or part of the program. A program module may also be referred to as a program source code file. The object code files from different program modules are linked together into an executable file for the program. The linking of programs together is a common and important part of large scale user application programs, which may consist of many program modules, sometimes several hundred program modules.
The executable form of a multithreaded program consists of multiple threads that can be executed in parallel. In the operating system, the representation of the executable form of a program is a process. A process executes a single thread of a program during a single time period. Multiple processes can each execute a different thread or the same thread of a multithreaded program. When multiple processes executing multiple threads of a multithreaded program are simultaneously executing on multiple processors, then parallel processing of a program is being performed. When multiple processes execute multiple threads of a multithreaded program, the processes share a single process image and are referred to as shared image processes. A process image is the representation in the operating system of the resources associated with a process. The process image includes the instructions and data for the process, along with the execution context information for the processor (the values in all of the registers, both control registers and data registers, e.g., scalar registers, vector registers, and local registers) and the execution context information for operating system routines called by the process.
In present multiprocessor systems, the operating system is generally responsible for assigning processes to the different processors for execution. One of the problems for those prior art operating systems that have attempted to provide support for multithreaded programs is that the operating systems themselves are typically centralized and not multithreaded. Although a centralized, single threaded operating system can schedule multiple processes to execute in multiple processors in multiprocessor systems having larger numbers of processors, the centralized, single threaded operating system can cause delays and introduce bottlenecks in the operation of the multiprocessor system.
One method of minimizing the delays and bottlenecks in the centralized operating system utilizes the concept of a lightweight process.
A lightweight process is a thread of execution (in general, a thread from a multithreaded program) plus the context for the execution of the thread.
The term lightweight refers to the relative amount of context information for the thread. A lightweight process does not have the full context of a process (e.g., it often does not contain the full set of registers for the processor). A lightweight process also does not have the full flexibility of a

process. The execution of a process can be interrupted at any time by the operating system. When the operating system stops execution of a process, for example in response to an interrupt, it saves the context of the currently executing process so that the process can be restarted at a later time at the same point in the process with the same context. Because of the limited context information, a lightweight process should not be interrupted at an arbitrary point in its execution. A lightweight process should only be interrupted at a specific point in its execution. At these specific points, the amount of context that must be saved to restart the lightweight process is known. The specific points at which the lightweight process may be interrupted are selected so that the amount of context that must be saved is small. For example, at certain points in the execution of a lightweight process, it is known which registers do not have values in them such that they would be required for the restart of the lightweight process.
Lightweight processes are typically not managed by the operating system, but rather by code in the user application program. Lightweight processes execute to completion or to points where they cannot continue without some execution by other processes. At that point, the lightweight processes are interrupted by the code in the user's application program and another lightweight process that is ready to execute is started (or restarted). The advantage of present lightweight processes is that the switching between the lightweight processes is not done by the operating system, thus avoiding the delays and bottlenecks in the operating system. In addition, the amount of context information necessary for a lightweight process is decreased, thereby reducing the time to switch in and out of a lightweight process. Unfortunately, the handling of lightweight processes must be individually coded by the user application program.
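The behaviour just described — lightweight processes that run until a known stopping point, with switching done by user code rather than by the operating system — can be sketched with generators, where each `yield` plays the role of a chosen interruption point at which almost no context need be saved. This is an illustrative analogy, not the mechanism of the patent:

```python
from collections import deque

def worker(name, steps):
    # A lightweight process: it may only be "interrupted" at its
    # yield points, where the amount of saved context is minimal.
    for i in range(steps):
        yield f"{name} step {i}"

# User-level scheduler: switching happens entirely in application
# code, so the operating system is never involved in the switch.
ready = deque([worker("A", 2), worker("B", 2)])
trace = []
while ready:
    task = ready.popleft()
    try:
        trace.append(next(task))   # run until the next yield point
        ready.append(task)         # still runnable: requeue it
    except StopIteration:
        pass                       # ran to completion

print(trace)  # → ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Note that, exactly as the text observes, this round-robin switching logic must be written by the application itself.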
Another problem for prior art operating systems that have attempted to provide support for multithreaded programs is that the operating systems are not designed to minimize the overhead of the different types of context switching that can occur in a fully optimized multiprocessor system. To understand the different types of context switching that can occur in a multiprocessor system, it is necessary to define additional terms that describe the execution of a group of multithreaded processes.
Process Group - For Unix and other System V operating systems, the kernel of the operating system uses a process group ID to identify

groups of related processes that should receive a common signal for certain events. Generally, the processes that execute the threads of a single program are referred to as a process group.
Process Image - Associated with a process is a process image. A process image defines the system resources that are attached to a process. Resources include memory being used by the process and files that the process currently has open for input or output.
Shared Image Processes - These are processes that share the same process image (the same memory space and file systems). Signals (of the traditional System V variety) and semaphores synchronize shared image processes. Signals are handled by the individual process or by a signal processing group leader, and can be sent globally or targeted to one or more processes. Semaphores also synchronize shared image processes.
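On a present-day POSIX system, the "common signal" behaviour of a process group can be sketched as follows. `os.killpg` delivers one signal to every member of a process group; for brevity the sketch signals only the current process, which is an illustrative simplification, not taken from the patent:

```python
import os
import signal

received = []

def handler(signum, frame):
    # Runs when the signal is delivered, as an individual process
    # (or a signal processing group leader) would handle it.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)

# os.killpg(os.getpgrp(), signal.SIGUSR1) would signal the whole
# process group at once; signalling just this process keeps the
# demonstration self-contained.
os.kill(os.getpid(), signal.SIGUSR1)

print(received == [signal.SIGUSR1])  # → True
```

This targeted-versus-group delivery is the distinction the definition above draws between signals sent to one process and signals sent globally.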
Multithreading - Multiple threads execute in the kernel at any time.
Global data is protected by spin locks and sleeping locks (Dijkstra semaphores). The type of lock used depends upon how long the data has to be protected.
Spin Locks - Spin locks are used during very short periods of protection, as an example, for memory references. A spin lock does not cause the locking or waiting process to be rescheduled.
Dijkstra Semaphores - Dijkstra semaphores are used for locks which require an exogenous event to be released, typically an input/output completion. They cause a waiting process to discontinue running until notification is received that the Dijkstra semaphore is released.
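The two lock flavours can be contrasted in a short sketch: a busy-waiting spin lock (the waiter is not rescheduled) versus a counting semaphore that puts the waiter to sleep until an exogenous event releases it. This is an illustration only; the patent's locks are hardware-supported primitives, not the library objects used here:

```python
import threading

class SpinLock:
    """Busy-wait lock: suitable only for very short critical sections."""
    def __init__(self):
        self._flag = threading.Lock()   # stands in for a test-and-set cell

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                        # spin: the waiter is NOT rescheduled

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1                    # very short critical section
        lock.release()

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 4000

# A Dijkstra-style semaphore, by contrast, sleeps until an exogenous
# event (here a timer standing in for an I/O completion) releases it:
io_done = threading.Semaphore(0)
threading.Timer(0.01, io_done.release).start()  # simulated I/O completion
io_done.acquire()                               # sleeps; does not spin
```

The choice between the two, as the text says, turns on how long the data must be protected: spinning wastes cycles but avoids a reschedule, while sleeping frees the processor at the cost of a wakeup.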
Intra-Process Context Switch - a context switch in which the processor will be executing in the same shared process image or in the operating system kernel.
Inter-Process Context Switch - a context switch in which the processor will be executing in a different shared process image.
Consequently, the amount of context information that must be saved to effect the switch is increased, as the processor must acquire all of the context information for the process image of the new shared image process.
Lightweight Process Context Switch - a context switch executed under control of a user program that schedules a lightweight process to be executed in another processor and provides only a limited subset of the intra-process context information. In other words, the lightweight process

context switch is used when a process has a small amount of work to be done and will return the results of the work to the user program that scheduled the lightweight process.
Prior art operating systems for minimally parallel supercomputers (e.g., UNICOS) are not capable of efficiently implementing context switches because the access time for acquiring a shared resource necessary to perform a context switch is not bounded. In other words, most prior art supercomputer operating systems do not know how long it will take to make any type of context switch. As a result, the operating system must use the most conservative estimate for the access time to acquire a shared resource in determining whether to schedule a process to be executed.
This necessarily implies a penalty for the creation and execution of multithreaded programs on such systems because the operating system does not efficiently schedule the multithreaded programs. Consequently, in prior art supercomputer operating systems a multithreaded program may not execute significantly faster than its single-threaded counterpart and may actually execute slower.
Other models for operating systems that support multithreaded programs are also not effective at minimizing the different types of context switching overheads that can occur in fully optimized multithreaded programs. For example, most mini-supercomputers create an environment that efficiently supports intra-process context switching by having a multiprocessor system wherein the processors operate at slower speeds so that the memory access times are the same order of magnitude as the register access times. In this environment, an intra-process context switch among processes in a process group that shares the same process image incurs very little context switch overhead. Unfortunately, because the speed of the processors is limited to the speed of the memory accesses, the system incurs a significant context switch overhead in processing inter-process context switches. On the other hand, one of the more popular operating systems that provides an efficient model for inter-process context switches is not capable of performing intra-process context switches. In a virtual machine environment where process groups are divided among segments in a virtual memory, inter-process context switches can be efficiently managed by the use of appropriate paging, look-ahead and caching schemes. However, the lack of a real memory environment prevents the effective scheduling of intra-process context switches because of the long delays in updating virtual memory and the problems in managing cache coherency.
One example of an operating system that schedules multithreaded programs is Mach, a small single-threaded monitor available from Carnegie Mellon University. Mach is attached to a System V-type operating system and operates in a virtual memory environment. The Mach executive routine attempts to schedule multithreaded programs; however, the Mach executive routine itself is not multithreaded. Mach is a centralized executive routine that operates on a standard centralized, single-threaded operating system. As such, a potential bottleneck in the operating system is created by relying on this single-threaded executive to schedule the multithreaded programs. Regardless of how small and efficient the Mach executive is made, it still can only schedule multithreaded programs sequentially.
Another example of a present operating system that attempts to support multithreading is the Amoeba Development, available from Amsterdam University. The Amoeba Development is a message passing-based operating system for use in a distributed network environment. Generally, a distributed computer network consists of computers that pass messages among each other and do not share memory. Because the typical user application program (written in Fortran, for example) requires a processing model that includes a shared memory, the program cannot be executed in parallel without significant modification on computer processing systems that do not share memory.
The Network Livermore Time Sharing System (NLTSS) developed at the Lawrence Livermore National Laboratory is an example of a message passing, multithreaded operating system. NLTSS supports a distributed computer network that has a shared memory multiprocessor system as one of the computers on the network. Multiprocessing that was done on the shared memory multiprocessor system in the distributed network was modified to take advantage of the shared memory on that system. Again, however, the actual scheduling of the multithreaded programs on the shared memory multiprocessor system was accomplished using a single-threaded monitor similar to the Mach executive that relies on a critical region of code for scheduling multiple processes.
The Dynix operating system for the Sequent Balance 21000, available from Sequent Computer Systems, Inc., is a multithreaded operating system that uses bus access to common memory, rather than arbitration access. Similarly, the Amdahl System V-based UTS operating system available from Amdahl Computers is also multithreaded; however, it uses a full cross bar switch and a hierarchical cache to access common memory.
Although both of these operating systems are multithreaded in that each has multiple entry points, in fact both operating systems use a critical region, like the single-threaded monitor of Mach, to perform the scheduler allocation. Because of the lack of an effective lock mechanism, even these supposedly multithreaded operating systems must perform scheduling as a locked activity in a critical region of code.
The issue of creating an efficient environment for multiprocessing of all types of processes in a multiprocessor system relates directly to the communication time among processors. If the time to communicate is a significant fraction of the time it takes to execute a thread, then multiprocessing of the threads is less beneficial, in the sense that the time saved in executing the program in parallel on multiple processors is lost due to the communication time between processors. For example, if it takes ten seconds to execute a multithreaded program on ten processors and only fifteen seconds to execute a single-threaded version of the same program on one processor, then it is more efficient to use the multiprocessor system to execute ten separate, single-threaded programs on the ten processors than to execute a single, multithreaded program.
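The arithmetic behind this example can be made explicit. With the figures given in the text (10 s multithreaded on ten processors versus 15 s single-threaded on one), the parallel speedup is only 1.5x and per-processor efficiency 15%, while assigning one single-threaded job to each processor keeps all ten fully used:

```python
# Figures taken from the example in the text.
t_serial = 15.0        # seconds, single-threaded program on 1 processor
t_parallel = 10.0      # seconds, multithreaded program on 10 processors
n_procs = 10

speedup = t_serial / t_parallel      # 15 / 10 = 1.5x
efficiency = speedup / n_procs       # 0.15, i.e. 15% of the ideal 10x

# Throughput over ten independent programs:
jobs = 10
time_multithreaded = jobs * t_parallel   # run them one after another: 100 s
time_one_per_cpu = t_serial              # one single-threaded job per CPU: 15 s

print(speedup, efficiency)                    # → 1.5 0.15
print(time_multithreaded, time_one_per_cpu)   # → 100.0 15.0
```

The comparison makes the text's conclusion concrete: with communication overhead this large, ten independent single-threaded runs finish in 15 s where ten multithreaded runs would need 100 s.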
The issue of communication time among processors In a given `` multiprocessor system will depend upon a number of factors. First, the 25 physical distance between processors directly relates to the time it takes for the processors to communicate. Second, the architecture of the multiprocessor system will dictate; how so~e types of processor communication are performed. Third, the types of resource allocation -, mechanism~ available in the multiprocessor (e.g., semaphore operators) 30 determlne~ to a great degree how processor communication will take place. Pinally, the type of processor communication (i.e., ~zller-process conte~ct switch, ~n~a-process context switch or lightweight process) usually determines ~te amount of context information that must be stored, and, hence, ~e t~ne required for processor co~amunication. When all of these 35 factors are properly understood, It wlll be ~ apprecla~èd that, for a multiprocessor system consisting of high performance computers, the speed of the processors requires that lightweight context switches have ' .`
WO 91/20033 PCT/US91/04066

small communication times in order to efficiently multiprocess these lightweight processes. Thus, for high performance multiprocessors, only tightly-coupled multiprocessor systems having a common shared memory are able to perform efficient multiprocessing of small granularity threads.
Another consideration in successfully implementing multiprocessing, and in particular lightweight processing, relates to the level of multithreading that is performed for a program. To minimize the amount of customization necessary for a program to efficiently execute in parallel, the level of multithreading that is performed automatically is a serious consideration for multiprocessor systems where the processors can be individually scheduled to individual processes.
Still another problem in the prior art is that some present operating systems generally schedule multiple processes by requesting a fixed number N of processors to work on a process group. This works well if the number N is less than the number of processors available for work; however, this limitation complicates the scheduling of processes if two or more process groups are simultaneously requesting multiple processors. For example, in the Alliant operating system, the operating system will not begin execution of any of the processes for a shared image process group until all N of the requested processors are available to the process group.
An additional problem in present multiprocessor operating systems is the lack of an efficient synchronization mechanism to allow processors to perform work during synchronization. Most prior art synchronization mechanisms require that a processor wait until synchronization is complete before continuing execution. As a result, the time spent waiting for the synchronization to occur is lost time for the processor.
In an effort to increase the processing speed and flexibility of supercomputers, the cluster architecture for highly parallel multiprocessors described in the previously filed parent application entitled CLUSTER ARCHITECTURE FOR A HIGHLY PARALLEL SCALAR/VECTOR MULTIPROCESSOR SYSTEM, PCT Serial No. PCT/US90/07655, provides an architecture for supercomputers wherein multiple processors and external interfaces can make multiple and simultaneous requests to a common set of shared hardware resources, such as main memory, global registers and interrupt mechanisms.
Although this new cluster architecture offers a number of solutions that can increase the parallelism of supercomputers, these solutions will not be utilized by the vast majority of users of such systems without software that implements parallelism by default in the user environment and provides an operating system that is fully capable of supporting such a user environment. Accordingly, it is desirable to have a software architecture for a highly parallel multiprocessor system that can take advantage of the parallelism in such a system.

The present invention is an integrated software architecture that efficiently controls the interface with and execution of programs on a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory. The software architecture of the present invention combines a symmetrically integrated multithreaded operating system and an integrated parallel user environment. The operating system distributively implements an anarchy-based scheduling model for the scheduling of processes and resources by allowing each processor to access a single image of the operating system stored in the common memory that operates on a common set of operating system shared resources. The user environment provides a common visual representation for a plurality of program development tools that provide compilation, execution and debugging capabilities for parallel user application programs and assumes parallelism as the standard mode of operation.
The major problem with the present software associated with multiprocessor systems is that the prior art for high performance multiprocessor systems is still relatively young. As a result, the software problems associated with such systems have been only partially solved, either as an after-thought or in a piecemeal, ad hoc manner. This is especially true for the problems associated with parallel execution of software programs. The present invention approaches the problem of software for multiprocessor systems in a new and fully integrated manner. The parallel execution of software programs in a multiprocessor system is the primary objective of the software architecture of the present invention.
In order to successfully implement parallelism by default in a multiprocessor system, it is desirable to maximize the processing speed and flexibility of the multiprocessor system. As a result, a balance must be maintained among the speed of the processors, the bandwidth of the memory interface and the input/output interface. If the speed or bandwidth of any one of these components is significantly slower than the other components, some portion of the computer processing system will starve for work and another portion of the computer processing system will be backlogged with work. If this is the case, there can be no allocation of resources by default because the user must take control of the assignment of resources to threads in order to optimize the performance of a particular thread on a particular system. The software architecture of the present invention integrates a symmetrical, multithreaded operating system and a parallel user environment that are matched with the design of the highly parallel multiprocessor system of the preferred embodiment to achieve the desired balance that optimizes performance and flexibility without the need for user intervention.
l~e integrated software architecture of the present invention decreases overhead of context switches among a piurality of p~ocesses that comprise the multithreaded programs being executed on the multiprocessor system. Unlike prior supercomputer operating systems, user application pro~ams are not penalized for being multithreaded. The present invention also decreases the need for the user application programs to be rewritten or customized to execute in parallel on the par~dcular multiprocessor system. As a result, parallelism by default is implemented in ~e highly parallel multiprocessor system of the preferred embodiment. `:.
The present invention is capable of decreasing the context switch overhead for all types of context switches because of a highly bounded switching paradigm of the present invention. The ability to decrease context switching in a supercomputer is much more difficult than for a lower performance multiprocessor system because, unlike context switching that takes place in non-supercomputers, the highly parallel multiprocessor of the present invention has hundreds of registers and data locations that must be saved to truly save the "context" of a process within a processor. To accommodate the large amount of information that must be saved and still decrease the context switch overhead, the operating system operates with a caller saves paradigm where each routine saves its context on an activation record stack like an audit trail. Thus, to restore the entire context for a process, the operating system need only save the context of the last routine and then unwind the activation record stack. The caller saves paradigm represents a philosophy implemented throughout the multiprocessor system of never being in a situation where it is necessary to save all of those hundreds of registers for a context switch because the operating system did not know what was going on in the processor at the time that a context switch was required.
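The caller-saves discipline described above can be sketched in miniature. This is a hypothetical model, not the patented implementation: the class name, the dictionary of "registers" and the generator-based unwind are all illustrative of the idea that each routine records only its own small context, so restoring a process means popping records like an audit trail rather than dumping hundreds of live registers:

```python
class ActivationStack:
    """Toy model of a caller-saves activation record stack."""

    def __init__(self):
        self.records = []

    def call(self, routine, saved_registers):
        # The caller saves its own context before transferring control,
        # so no routine ever has to save state it knows nothing about.
        self.records.append((routine, saved_registers))

    def unwind(self):
        # Restore the full process context by popping records in
        # most-recent-first order, like replaying an audit trail.
        while self.records:
            yield self.records.pop()

stack = ActivationStack()
stack.call("main", {"r1": 7})
stack.call("solver", {"r1": 42, "r2": 9})
restored = list(stack.unwind())   # solver's record comes off first
```

Because every record is small and self-describing, a context switch at any point only needs the top record plus the stack itself, which is the "highly bounded" property the text claims.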
In addition to decreasing the overhead of context switches, the preferred embodiment of the present invention increases the efficiency of all types of context switches by solving many of the scheduling problems associated with scheduling multiple processes in multiple processors. The present invention implements a distributed, anarchy-based scheduling model and improves the user-side scheduling to take advantage of an innovation of the present invention referred to as microprocesses (mprocs). Also new to the preferred embodiment is the concept of a User-Side Scheduler (USS) that can both place work in the request queues in the OSSR and look for work to be done in the same request queues. The order of the work to be done in the request queue is determined by a prioritization of processes.
The User-Side Scheduler (USS) is a resident piece of object code within each multithreaded program. Its purpose is manyfold: 1) request shared image processes from the operating system and schedule them to waiting threads inside the multithreaded program, 2) detach shared image processes from threads that block on synchronization, 3) reassign these shared image processes to waiting threads, 4) provide deadlock detection, 5) provide a means to maximize efficiency of thread execution via its scheduling algorithm, and 6) return processors to the operating system when they are no longer needed.
The present invention improves the user-side scheduler to address these issues. The USS requests a processor by incrementing a shared resource representing a request queue using an atomic resource allocation mechanism. Processors in the operating system detect this request by scanning the request queues in the shared resources across the multiprocessor system. When a request is detected that the processor can fulfill, it does so and concurrently decrements the request count using the same atomic resource allocation mechanism. The USS also uses this request count when reassigning a processor. The request count is checked and decremented by the USS. This check and decrement by the processors in the operating system and the USS are done atomically. This allows a request for a processor to be retracted, thereby reducing the unnecessary scheduling of processors.
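The atomic request-count protocol above can be sketched as follows. This is a minimal model under stated assumptions, not the actual OSSR mechanism: a Python lock stands in for the hardware's atomic operation on a global register, and the class and method names are hypothetical. The key property it demonstrates is that the USS and the operating system use the same atomic check-and-decrement, which is what allows a pending request to be retracted:

```python
import threading

class RequestQueue:
    """Toy model of a USS request count in an OSSR-like shared resource."""

    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()   # stands in for atomic fetch-and-add

    def request_processor(self):
        # USS side: post a request by atomically incrementing the count.
        with self._lock:
            self._count += 1

    def try_take_request(self):
        # OS side: a scanning processor fulfills a request and decrements
        # the count in one atomic step.
        with self._lock:
            if self._count > 0:
                self._count -= 1
                return True
            return False

    def retract_request(self):
        # USS side: reassigning a processor itself uses the very same
        # atomic check-and-decrement, so the request is safely withdrawn.
        return self.try_take_request()

q = RequestQueue()
q.request_processor()
assert q.retract_request()        # withdrawn before any OS processor ran
assert not q.try_take_request()   # nothing left to schedule
```

Because both sides go through one atomic operation, a request can never be fulfilled twice or fulfilled after retraction, which is the "unnecessary scheduling" the text says is avoided.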
The improvement to the USS is particularly useful with the scheduling of microprocesses (mprocs). Microprocesses are a type of lightweight process that have a very low context switch overhead because the context of the microprocess of the present invention is discardable upon exit. In other words, microprocesses are created as a means for dividing up the work to be done into very small segments that receive only enough context information to do the work required and return only the result of the work with no other context information. In this sense, the mprocs can be thought of as very tiny disposable tools or building blocks that can be put together in any fashion to build whatever size and shape of problem-solving space is required.
Another important advantage of the mprocs of the present invention is that, while they are disposable, they are also reusable before being disposed. In other words, if the USS requests a processor to be set up to use an mproc to perform a first small segment of work, the USS (and for that matter, any other requestor in the system via the operating system) can use that same mproc to perform other small segments of work until such time as the processor with the mproc destroys the mproc because it is scheduled or interrupted to execute another process.
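The disposable-but-reusable character of an mproc can be sketched with a toy object. All names here are hypothetical illustrations of the idea: a microprocess receives only the work and its arguments, returns only the result, can be reused for further segments, and is destroyed with nothing to save when its processor is rescheduled:

```python
class Microprocess:
    """Toy mproc: minimal context in, bare result out, discardable on exit."""

    def __init__(self):
        self.alive = True

    def run(self, work, *args):
        # Only the work function and its arguments are supplied; no other
        # context travels with the request, and only the result comes back.
        assert self.alive, "a destroyed mproc is never reused"
        return work(*args)

    def destroy(self):
        # Disposal is free: there is no accumulated context to save.
        self.alive = False

m = Microprocess()
first = m.run(sum, [1, 2, 3])    # first small segment of work
second = m.run(max, [4, 7, 5])   # same mproc reused for another segment
m.destroy()                      # processor scheduled elsewhere: discard
```

The near-zero cost of `destroy` is the point: because exit discards context rather than saving it, the context switch overhead of an mproc stays far below that of an ordinary process.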
Another way in which the scheduling of the operating system of the present invention is improved is that the operating system considers shared image process groups when scheduling processes to processors. For example, if a process is executing, its process image is in shared memory. The operating system may choose to preferentially schedule other processes from the same group to make better use of the process image. In this sense, any process from a process group may be executed without requiring that all of the processes for a process group be executed. Because of the way in which the anarchy-based scheduling model uses the request queues and the atomic resource allocation mechanism, and the way in which the operating system considers shared image process groups, the present invention does not suffer from a lockout condition in the event that more than one shared image process group is requesting more than the available number of processors.
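The group-affinity preference described above can be sketched as a simple selection rule. This is an illustrative heuristic only (the ready-queue shape, group labels and fallback-to-queue-order policy are assumptions, not the patented scheduler):

```python
def pick_next(ready, resident_groups):
    """Prefer a ready process whose shared image is already in memory.

    `ready` is a list of (pid, group) pairs in queue order;
    `resident_groups` is the set of groups with an image in shared memory.
    """
    for pid, group in ready:
        if group in resident_groups:
            return pid          # reuse the resident process image
    return ready[0][0] if ready else None   # otherwise plain queue order

ready = [(11, "g2"), (12, "g1"), (13, "g2")]
assert pick_next(ready, {"g1"}) == 12   # g1's image is resident: jump queue
assert pick_next(ready, {"g9"}) == 11   # no resident group: queue order
```

Note that the rule never waits for a whole group to be schedulable at once, which is how it avoids the all-N-processors lockout described for the Alliant system earlier in the text.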

The supercomputer symmetrically integrated, multithreaded operating system (SSI/mOS) controls the operation and execution of one or more user application programs and software development tools and is capable of supporting one or more shared image process groups that comprise such multithreaded programs. SSI/mOS is comprised of a multithreaded operating system kernel for processing multithreaded system services, and an input/output section for processing distributed, multithreaded input/output services.
The operating system of this invention differs from present operating systems in the way in which interrupts and system routines are handled. In addition to the procedure (proc) code within the kernel of the operating system, the kernel also includes code for multithreaded parallel interrupt procedures (iprocs) and multithreaded parallel system procedures (kprocs). In the present invention, interrupts (signals) are scheduled to be handled by the iproc through a level 0 interrupt handler, rather than being immediately handled by the processor. This allows idle or lower priority processors to handle an interrupt for a higher priority processor. Unlike prior art operating systems, the kprocs in the present invention are not only multithreaded in that multiple processors may execute the system procedures at the same time, but the kprocs are themselves capable of parallel and asynchronous execution. In this sense, kprocs are treated just as any other type of procedure and can also take advantage of the parallel scheduling innovations of the present invention.
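The level 0 handler's deferral of interrupts to iprocs can be sketched as a two-stage queue. This is a simplified single-threaded model with hypothetical names; the real mechanism involves hardware interrupt levels and processor priorities, but the essential shape is that the interrupted processor only records the event while some idle or lower-priority processor later drains the queue:

```python
from collections import deque

interrupt_queue = deque()   # filled by the minimal level 0 handler

def level0_handler(signal):
    # The interrupted processor merely records the event and resumes
    # its own (possibly higher-priority) work immediately.
    interrupt_queue.append(signal)

def idle_processor_loop(max_items):
    # An idle or lower-priority processor picks up the deferred
    # interrupts as iproc work, in arrival order.
    handled = []
    for _ in range(max_items):
        if interrupt_queue:
            handled.append(interrupt_queue.popleft())
    return handled

level0_handler("disk-done")
level0_handler("timer")
served = idle_processor_loop(2)   # both events handled away from the
                                  # processor that took the interrupt
```

The benefit is the one the text names: a high-priority processor is never forced to abandon its work to service an interrupt that any idle processor could handle.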
The operating system kernel includes a parallel process scheduler, a parallel memory scheduler and a multiprocessor operating support module. The parallel process scheduler schedules multiple processes into multiple processors. Swapping prioritization is determined by first swapping the idle processors and then the most inefficient processors as determined by the accounting support. The parallel memory scheduler allocates shared memory among one or more shared image process groups and implements two new concepts: partial swapping of just one of the four memory segments for a processor, and partial swapping within a single segment. The parallel memory scheduler also takes advantage of the extremely high swap bandwidth of the preferred multiprocessor system that is a result of the distributed input/output architecture of the system, which allows for the processing of distributed, multithreaded input/output services, even to the same memory segment for a processor. The multiprocessor operating support module provides accounting, control, monitoring, security, administrative and operator information about the processors.
The input/output software section includes a file manager, an input/output manager, a resource scheduler and a network support system. The file manager manages directories and files containing both data and instructions for the programs. The input/output manager distributively processes input/output requests to peripheral devices attached to the multiprocessor system. The resource scheduler schedules processors and allocates input/output resources to those processors to optimize the usage of the multiprocessor system. The network support system supports input/output requests to other processors that may be interconnected with the multiprocessor system.
The program development tools of the integrated parallel user environment include a program manager, a compiler, a user interface, and a distributed debugger. The program manager controls the development environment for source code files representing a software program. The compiler is responsible for compiling the source code file to create an object code file comprised of multiple threads capable of parallel execution. An executable code file is then derived from the object code file. The user interface presents a common visual representation of the status, control and execution options available for monitoring and controlling the execution of the executable code file on the multiprocessor system. The distributed debugger provides debugging information and control in response to execution of the executable code file on the multiprocessor system.
The compiler includes one or more front ends, a pair of optimizers and a code generator. The front ends parse the source code files and generate an intermediate language representation of the source code file referred to as HiForm (HF). The optimizer includes means for performing machine-independent restructuring of the HF intermediate language representation and means for producing a LoForm (LF) intermediate language representation that may be optimized on a machine-dependent basis by the code generator. The code generator creates an object code file based upon the LF intermediate language representation, and includes means for performing machine-dependent restructuring of the LF intermediate language representation. An assembler for generating object code from an assembly source code program may also automatically perform some optimization of the assembly language program. The assembler generates LoForm, which is translated by the code generator into object code (machine instructions). The assembler may also generate HF for an assembly language program that provides information so that the compiler can optimize the assembly language programs by restructuring the LF. The HF generated for an assembly language program can also be useful in debugging assembly source code because of the integration between the HF representation of a program and the distributed debugger of the present invention.
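The front end, HF optimizer, LF lowering and code generator stages described above can be sketched as a pipeline of functions. This is a toy illustration under obvious simplifications: HiForm and LoForm are modeled as small dictionaries, the "optimization" is a trivial dead-operation removal, and none of the data shapes come from the specification:

```python
def front_end(source):
    # Parse source into a HiForm-like machine-independent representation.
    return {"form": "HF", "ops": source.split()}

def optimize_hf(hf):
    # Machine-independent restructuring; here, dropping no-ops.
    hf["ops"] = [op for op in hf["ops"] if op != "nop"]
    return hf

def lower(hf):
    # Produce the LoForm representation for machine-dependent work.
    return {"form": "LF", "ops": hf["ops"]}

def code_generator(lf):
    # Machine-dependent step: emit one "instruction" per LF operation.
    return [f"insn:{op}" for op in lf["ops"]]

obj = code_generator(lower(optimize_hf(front_end("load nop add store"))))
# obj == ["insn:load", "insn:add", "insn:store"]
```

The staging matters: because every front end emits the same HF, all source languages share one optimizer, and because the optimizer emits LF, the machine-dependent knowledge is confined to the code generator.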
The user interface provides means for linking, executing and monitoring the program. The means for linking the object code version combines the user application program into an executable code file that can be executed as one or more processes in the multiprocessor system. The means for executing the multithreaded programs executes the processes in the multiprocessor system. Finally, the means for monitoring and tuning the performance of the multithreaded programs includes means for providing the status, control and execution options available for the user. In the preferred embodiment of the user interface, the user is visually presented with a set of icon-represented functions for all of the information and options available to the user. In addition, an equivalent set of command-line functions is also available for the user.
The distributed debugger is capable of debugging optimized parallel object code for the preferred multiprocessor system. It can also debug distributed programs across an entire computer network, including the multiprocessor system and one or more remote systems networked together with the multiprocessor system. It will be recognized that the optimized parallel object code produced by the compiler will be substantially different than the non-optimized single processor code that a user would normally expect as a result of the compilation of his or her source code. In order to accomplish debugging in this type of environment, the distributed debugger maps the source code file to the optimized parallel object code file of the software program, and vice versa.
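The bidirectional source-to-object mapping the debugger needs can be sketched with a small table. The structure is assumed for illustration, not taken from the patent: the key point it models is that after optimization one source line may correspond to several object-code addresses (e.g. a loop body duplicated by unrolling), yet each address must still map back to a single source line:

```python
class SourceMap:
    """Toy two-way map between source lines and object-code addresses."""

    def __init__(self):
        self.src_to_obj = {}   # source line -> list of addresses
        self.obj_to_src = {}   # address -> source line

    def record(self, src_line, address):
        self.src_to_obj.setdefault(src_line, []).append(address)
        self.obj_to_src[address] = src_line

m = SourceMap()
m.record(10, 0x100)   # one source line lands at two addresses
m.record(10, 0x140)   # after, say, loop unrolling
# A breakpoint on source line 10 must cover both addresses, while a
# fault at 0x140 is reported against source line 10.
```

With such a map, a breakpoint set in the source view expands to every generated address, and a machine-level event collapses back to the line the user actually wrote, which is the vice-versa mapping the text requires.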
The primary mechanism for integrating the multithreaded operating system and the parallel user environment is a set of data structures referred to as the Operating System Shared Resources (OSSR), which are defined in relation to the various hardware shared resources, particularly the common shared main memory and the global registers. The OSSRs are used primarily by the operating system, with a limited subset of the OSSRs available to the user environment. Unlike prior art operating systems for multiprocessors, the OSSRs are accessible by both the processors and external interface ports to allow for a distributed input/output architecture in the preferred multiprocessor system. A number of resource allocation primitives are supported by the hardware shared resources of the preferred embodiment and are utilized in managing the OSSRs, including an atomic resource allocation mechanism that operates on the global registers.
An integral component of the parallel user environment is the intermediate language representation of the source code version of the application or development software program referred to as HiForm (HF). The representation of the software programs in HF allows the four components of the parallel user environment, the program management module, the compiler, the user interface and the distributed debugger, to access a single common representation of the software program, regardless of the programming language in which the source code for the software program is written.
As part of the compiler, an improved and integrated Inter-Procedural Analysis (IPA) is used by the parallel user environment to enhance the value and utilization of the HF representation of a software program. The IPA analyzes the various relationships and dependencies among the procedures in the HF representation of a multithreaded program to be executed using the present invention.
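One basic relationship an inter-procedural pass must compute is which procedures are transitively reachable from a given procedure. The sketch below is a generic reachability walk over a call graph; the graph shape and procedure names are illustrative assumptions, not the patented IPA:

```python
def call_dependencies(call_graph, root):
    """Return every procedure transitively callable from `root`.

    `call_graph` maps each procedure name to the list of procedures
    it calls directly, as an IPA pass might extract from HF.
    """
    seen, stack = set(), [root]
    while stack:
        proc = stack.pop()
        for callee in call_graph.get(proc, []):
            if callee not in seen:
                seen.add(callee)
                stack.append(callee)
    return seen

graph = {"main": ["solve", "report"], "solve": ["kernel"], "kernel": []}
deps = call_dependencies(graph, "main")   # {"solve", "report", "kernel"}
```

Knowing this closure is what lets the compiler decide, for example, that two procedures with disjoint dependency sets can safely be scheduled as parallel threads.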
It is an objective of the present invention to provide a software architecture for implementing parallelism by default in a highly parallel multiprocessor system having multiple, tightly-coupled processors that share a common memory.
It is another objective of the present invention to provide a software architecture that is fully integrated across both a symmetrically integrated, multithreaded operating system capable of multiprocessing support and a parallel user environment having a visual user interface.
It is a further objective of the present invention to provide an operating system that distributively implements an anarchy-based scheduling model for the scheduling of processes and resources by allowing each processor to access a single image of the operating system stored in the common memory that operates on a common set of operating system shared resources.
It is a still further objective of the present invention to provide a software architecture with a parallel user environment that offers a common representation of the status, control and execution options available for user application programs and software development tools, including a visual user interface having a set of icon-represented functions and an equivalent set of command-line functions.
These and other objectives of the present invention will become apparent with reference to the drawings, the detailed description of the preferred embodiment and the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Figs. 1a and 1b are simplified schematic representations of the prior art attempts at multischeduling and multischeduling in the present invention.
Fig. 2 is a simplified schematic representation showing the multithreaded operating system of the present invention.
Fig. 3 is a representation of the relative amount of context switch information required to perform a context switch in a multiprocessor system.
Figs. 4a and 4b are simplified schematic representations of the prior art lightweight scheduling and microprocess scheduling in the present invention.
Fig. 5 is a block diagram of the preferred embodiment of a single multiprocessor cluster system for executing the software architecture of the present invention.
Figs. 6a and 6b are a block diagram of a four cluster implementation of the multiprocessor cluster system shown in Fig. 5.
Fig. 7 is a pictorial representation of a four cluster implementation of the multiprocessor cluster system shown in Figs. 6a and 6b.
Figs. 8a and 8b are an overall block diagram of the software architecture of the present invention showing the symmetrically integrated, multithreaded operating system and the integrated parallel user environment.
Figs. 9a and 9b are a block diagram showing the main components of the operating system kernel of the present invention.
Figs. 10a and 10b are a schematic flow chart showing the processing of context switches by the interrupt handler of the present invention.
Fig. 11 is a simplified schematic diagram showing how background processing continues during an interrupt.
Fig. 12 is a block diagram of the scheduling states for the dispatcher of the present invention.
Fig. 13 shows one embodiment of an array file system using the present invention.
Fig. 14 is a block diagram of a swapped segment.
Fig. 15 is a block diagram of memory segment functions.
Fig. 16 is a schematic diagram showing the selection of adjacent swap out candidates.
Fig. 17 is a schematic diagram showing the process of splitting memory segments.
Fig. 18 is a schematic diagram showing the process of coalescing memory segments.
Fig. 19 is a schematic diagram showing the process of splitting memory segments.
Fig. 20 is a schematic diagram showing the oversubscription of the SMS.
Fig. 21 is a schematic diagram showing a version of STREAMS-based TCP/IP implemented using the present invention.
Fig. 22 is a block diagram showing the kernel networking environment and support of the present invention.
Figs. 23a and 23b are a pictorial representation of the programming environment as seen by a programmer.
Fig. 24 is a simplified block diagram of the preferred design of the ToolSet shown in Fig. 23 as implemented on top of present software.
Fig. 25a is a block diagram of the compiler of the present invention.
Figs. 25b-1, 25b-2, 25b-3 and 25b-4 are a pictorial representation of a common user interface to the compiler shown in Fig. 25a.
Figs. 26a and 26b are functional and logical representations of an example of the basic unit of optimization in the present invention referred to as a basic block.

Figs. 27a and 27b show two examples of how control flow can be used to visualize the flow of control between basic blocks in the program unit.
Figs. 28a, 28b, 28c, 28d and 28e are tree diagrams of the constant folding optimization of the compiler of the present invention.
Figs. 29a, 29b, 29c and 29d are pictorial representations of a multiple window user interface to the distributed debugger of the present invention.
Fig. 30 is a schematic representation of the information utilized by the distributed debugger as maintained in various machine environments.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
To aid in the understanding of the present invention, a general overview of how the present invention differs from the prior art will be presented. In addition, an analogy is presented to demonstrate why the present invention is a true software architecture for generating and executing multithreaded programs on a highly parallel multiprocessor system, as compared to the loosely organized combination of individual and independent software development tools and operating system software that presently exists in the prior art.
Referring now to Fig. 1a, a schematic representation is shown of how most of the prior art operating systems attempted multischeduling of multiple processes into multiple processors. The requests for multiple processes contained in a Request Queue are accessed sequentially by a single Exec Scheduler executing in CPU-0. As a result, the multiple processes are scheduled for execution in CPU-1, CPU-2, CPU-3 and CPU-4 in a serial fashion. In contrast, as shown in Fig. 1b, in the present invention all of the CPUs (CPU-0, CPU-1, CPU-2, CPU-3, CPU-4 and CPU-5) and all of the I/O controllers (I/O-1 and I/O-2) have access to a common set of data structures in the Operating System Shared Resources (OSSR), including a Work Request Queue. As a result, more than one CPU can simultaneously execute a shared image of the operating system (OS) code to perform operating system functions, including the multithreaded scheduling of processes in the Work Request Queue. Also unlike the prior art, the present invention allows the I/O controllers to have access to the OSSRs so that the I/O controllers can handle input/output operations without requiring intervention from a CPU. This allows I/O-2, for example, to also execute the Multi-Scheduler routines of the Operating System to perform scheduling of input/output servicing.
Because the operating system of the present invention is both distributed and multithreaded, it allows the multiprocessor system to assume the configuration of resources (i.e., CPUs, I/O controllers and shared resources) that is, on average, the most efficient utilization of those resources. As shown in Fig. 2, the supercomputer, symmetrically integrated, multithreaded operating system (SSI/mOS) can be executed by each of the CPUs and the I/O controllers from a common shared image stored in main memory (not shown), and each of the CPUs and the I/O controllers can access the common OSSRs. In the software architecture of the present invention, additional CPUs (e.g., CPU-1) and I/O controllers (e.g., IOC-1) can be added to the multiprocessor system without the need to reconfigure the multiprocessor system. This allows for greater flexibility and extensibility in the control and execution of the multiprocessor system because the software architecture of the present invention uses an anarchy-based scheduling model that lets the CPUs and IOCs individually schedule their own work. If a resource (CPU or IOC) should be unavailable, either because it has a higher priority process that it is executing, or, for example, because an error has been detected on the resource and maintenance of the resource is required, that resource does not affect the remaining operation of the multiprocessor system. It will also be recognized that additional resources may be easily added to the multiprocessor system without requiring changes in the user application programs executing on the system.
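The anarchy-based model, in which every processor pulls its own next item from a shared work queue rather than waiting for a central Exec Scheduler, can be sketched with ordinary threads. This is a minimal analogue, not the OSSR mechanism: Python's thread-safe `queue.Queue` stands in for the Work Request Queue, and the worker function for a self-scheduling CPU:

```python
import queue
import threading

work = queue.Queue()            # stands in for the shared Work Request Queue
results = []
results_lock = threading.Lock()

def self_scheduling_cpu():
    # Each processor fetches its own next request; there is no central
    # scheduler deciding which CPU runs which item.
    while True:
        try:
            item = work.get_nowait()
        except queue.Empty:
            return              # no work left: this resource simply idles
        with results_lock:
            results.append(item * item)

for n in range(8):
    work.put(n)
cpus = [threading.Thread(target=self_scheduling_cpu) for _ in range(4)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()
# All eight items are processed regardless of which "CPU" took which one.
```

A useful property of the model shows up directly: removing any one worker thread changes nothing except throughput, mirroring the text's claim that an unavailable resource does not affect the remaining operation of the system.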
Referring now to Fig. 3, a simplified representation of the relative amounts of context switch information is shown for the three types of context switches: lightweight processes, intra-process group switches and inter-process group switches. Based upon this representation, it is easy to understand that the best way to minimize total context switch overhead is to have the majority of context switches involve lightweight processes. Unfortunately, as shown in Fig. 4a, the prior art scheduling of lightweight processes is a cumbersome one-way technique wherein the user program determines the type of lightweight processes it wants to have scheduled based on its own independent criteria using data structures in the main memory that are unrelated to the other operating system scheduling that may be occurring in the multiprocessor. Because the user-side scheduling of such lightweight processes and the operating system are not integrated, the context switch overhead for lightweight process context switches is increased. In the present invention, shown in Fig. 4b, both the user-side scheduler and the operating system operate on the same set of OSSRs that use both shared common memory and global registers. As a result, there is a two-way communication between the operating system and the user-side scheduler that allows the present invention to decrease the context switch overhead associated with lightweight processes, and in particular, with a new type of lightweight process referred to as a microprocess.
An analogy that may be helpful in understanding the present invention is to visualize the software architecture of the present invention in terms of being a new and integrated approach to constructing buildings. In the prior art, construction of a building is accomplished by three different and independent entities: the customer with the idea for the type of building to be built, the architect who takes that idea and turns it into a series of blueprints and work orders, and the contractor who uses the blueprints and work orders to build the building. By analogy, the user application program is the customer with the idea and requirements for the program to be built, the program development tools such as the compiler are the architect for creating the blueprints and work orders for building the program, and the operating system is the contractor using the blueprints and work orders to build (execute) the program.
Presently, the customer, architect and contractor do not have a common language for communicating the ideas of the customer all the way down to the work orders to be performed by the construction workers.
The customer and the architect talk verbally and may review models and written specifications. The architect produces written blueprints and work orders that must then be translated back into verbal work instructions and examples that are ultimately given to the construction workers. In addition, the communication process is inefficient because of the time delays and lack of an integrated, distributed mechanism for communication among all of the people involved. For example, assume that the foreman who is responsible for scheduling all of the work to be performed on a job site has extra sheet rock workers on a given day because a shipment of sheet rock did not arrive. It is not easy for the foreman to reschedule those sheet rock workers, either within the

foreman's own job site or maybe to another job site also being constructed by the same contractor. If the sheet rockers can only do sheet rocking, it is not possible to have them do other work on the job site. To move the workers to another site will take time and money and coordination with the contractor's central office and the foreman at the other job site. The end result is that often it is easier and more "efficient" to just let the workers sit idle at the present job site, than it is to find "other" work for them to do. Similarly, the lack of efficient communication may mean that it could take weeks for a decision by the customer to change part of the building to be communicated to the workers at the construction site.
The present invention is an entirely integrated approach to construction that has been built from the ground up without having to accommodate any existing structure or requirements. All of the entities in this invention are completely integrated together and are provided with a common communication mechanism that allows for the most efficient communication among everyone and the most efficient utilization of the resources. In this sense, the present invention is as if the customer, architect and contractor all worked together and are all linked together by a single communication network, perhaps a multiprocessor computer system. The customer communicates her ideas for the building by entering them into the network, and the architect modifies the ideas and provides both the customer and the contractor with versions of the blueprints and work orders for the building that are interrelated and that each party can understand. The contractor's workers do not have a centralized foreman who schedules work. Instead, each worker has access to a single job list for each of the job sites which the contractor is building. When a worker is idle, the worker examines the job list and selects the next job to be done. The job list is then automatically updated so that no other workers will do this job. In addition, if a worker finds out that he or she needs additional help in doing a job, the worker may add jobs to the job list. If there are no more available jobs for a given job site, the worker can immediately call up the job list for another job site to see if there is work to be done there. Unlike the prior situation where the foreman had to first communicate with the central office and then to another job site and finally back to the foreman at the first job site before it was possible to know if there was work at the second site, the present invention allows the worker to have access to the job list at the second site. If the worker
feels that there is sufficient work at the second job site to justify traveling back and forth to that job site, then the worker can independently decide to go to the second job site.
As with the integrated communication network and distributed job list in the construction analogy, the present invention provides a similar integrated communication network and distributed job list for controlling the execution of programs on a multiprocessor system. As the architect, the integrated parallel user environment of the present invention provides a common visual representation for a plurality of program development tools that provide compilation, execution and debugging capabilities for multithreaded programs. Instead of relying on the present patchwork of program development tools, some of which were developed before the onset of parallelism, the present invention assumes parallelism as the standard mode of operation for all portions of the software architecture. As the contractor, the operating system of the present invention distributively schedules the work to be done using an anarchy-based scheduling model for a common work request queue maintained in the data structures that are part of the OSSR's resident in the shared hardware resources. The anarchy-based scheduling model is extended not only to the operating system (the contractor and foreman), but also to the processes (the workers) in the form of user-side scheduling of microprocesses. Efficient interface to the request queue and other OSSR's by both the processes and the operating system is accomplished by the distributed use of a plurality of atomic resource allocation mechanisms that are implemented in the shared hardware resources. The present invention uses an intermediate language referred to as HiForm (HF) as the common language that is understood by all of the participants in the software architecture.
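The distributed job list of the analogy can be sketched as below. The names and the lock-based "atomicity" are illustrative stand-ins for the patent's global-register mechanisms, but the behavior follows the text: a claimed job disappears for all other workers, a worker may post new jobs it discovers, and an idle worker scans every site's list itself instead of asking a central foreman.

```python
import threading

class JobList:
    """One job list per 'site'; claiming a job removes it for everyone."""
    def __init__(self, jobs=()):
        self._jobs = list(jobs)
        self._lock = threading.Lock()   # stands in for an atomic register update

    def claim(self):
        with self._lock:
            return self._jobs.pop(0) if self._jobs else None

    def add(self, job):
        """A worker that finds it needs help may add jobs itself."""
        with self._lock:
            self._jobs.append(job)

def idle_worker(sites):
    """Anarchy model: the worker schedules itself from any site's list."""
    done = []
    while True:
        for site in sites:
            job = site.claim()
            if job is not None:
                done.append(job)
                break
        else:
            return done                 # no work at any site: stop looking

sites = [JobList(["sheetrock", "paint"]), JobList(["wiring"])]
completed = idle_worker(sites)
```

With several workers running concurrently, the lock guarantees that no two of them complete the same job, which is exactly the property the shared work request queue provides in the architecture.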
The end result is that the present invention approaches the problem of software for multiprocessor systems in a new and fully integrated manner, with the primary objective of the software architecture being the implementation of parallelism by default for the parallel execution of software programs in a multiprocessor system.
Although it will be understood that the software architecture of the present invention is capable of operating on any number of multiprocessor systems, the preferred embodiment of a multiprocessor cluster system for executing the software architecture of the present invention is briefly presented to provide a common reference for understanding the present invention.
Referring now to Fig. 5, a single multiprocessor cluster of the preferred embodiment of the multiprocessor cluster system for executing the present invention is shown having a plurality of high-speed processors 10 sharing a large set of shared resources 12 (e.g., main memory 14, global registers 16, and interrupt mechanisms 18). In this preferred embodiment, the processors 10 are capable of both vector and scalar parallel processing and are connected to the shared resources 12 through an arbitration node means 20. The processors 10 are also connected through the arbitration node means 20 and a plurality of external interface ports 22 and input/output concentrators (IOC) 24 to a variety of external data sources 26. The external data sources 26 may include a secondary memory system (SMS) 28 linked to the input/output concentrator means 24 via one or more high speed channels 30. The external data sources 26 may also include a variety of other peripheral devices and interfaces 32 linked to the input/output concentrator via one or more standard channels 34. The peripheral devices and interfaces 32 may include disk storage systems, tape storage systems, terminals and workstations, printers, and communication networks.
Referring now to Figs. 6a and 6b, a block diagram of a four cluster version of the multiprocessor system is shown. Each of the clusters 40a, 40b, 40c and 40d physically has its own set of processors 10, shared resources 12, and external interface ports 22 (not shown) that are associated with that cluster. The clusters 40a, 40b, 40c and 40d are interconnected through a remote cluster adapter means (not shown) that is an integral part of each arbitration node means 20 as explained in greater detail in the parent application. Although the clusters 40a, 40b, 40c and 40d are physically separated, the logical organization of the clusters and the physical interconnection through the remote cluster adapter means enables the desired symmetrical access to all of the shared resources 12. Referring now to Fig. 7, the packaging architecture for the four-cluster version of the preferred embodiment will be described, as it concerns the physical positions of cluster element cabinets within a computer room. The physical elements of the multiprocessor system include a mainframe 50 housing a single cluster 40, a clock tower for
providing distribution of clock signals to the multiprocessor system, an Input/Output Concentrator (IOC) 52 for housing the input/output concentrator means 24 and a Secondary Memory System storage 53 for housing the SMS 28. In the preferred embodiment, an input/output concentrator means 24a, 24b, 24c and 24d in the IOC 52 and a SMS 28a, 28b, 28c and 28d in the SMS storage 53 are each associated with two of the clusters 40a, 40b, 40c and 40d to provide redundant paths to those external resources. The multiprocessor cluster system of the preferred embodiment creates a computer processing environment in which parallelism is favored. Some of the mechanisms in the multiprocessor cluster system which aid the present invention in coordinating and synchronizing the parallel resources of such a multiprocessor system include, without limitation: the distributed input/output subsystem, including the signaling mechanism, the fast interrupt mechanism, and the global registers and the atomic operations such as TAS, FAA, FCA and SWAP that operate on the global registers; the mark instructions, the loadf instruction, the accounting registers and watchpoint addresses; and the various mechanisms that support the pipelined operation of the processors 10, including the instruction cache and the separate issue and initiation of vector instructions. Together, and individually, these mechanisms support the symmetric access to shared resources and the multi-level pipeline operation of the preferred multiprocessor system.
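The atomic global-register operations named above (TAS, FCA and SWAP, plus a plain fetch-and-add) can be modeled as in the sketch below. The exact operand semantics, especially of FCA, are assumptions from the operation names rather than the patent's definitions, and the lock merely stands in for hardware atomicity.

```python
import threading

class GlobalRegisters:
    """Sketch of atomic operations on a bank of shared global registers."""
    def __init__(self, count):
        self._reg = [0] * count
        self._lock = threading.Lock()   # stands in for hardware atomicity

    def tas(self, i):
        """Test-and-set: return the old value, leave the register set."""
        with self._lock:
            old, self._reg[i] = self._reg[i], 1
            return old

    def faa(self, i, addend):
        """Fetch-and-add: return the old value, add unconditionally."""
        with self._lock:
            old = self._reg[i]
            self._reg[i] = old + addend
            return old

    def fca(self, i, addend, bound):
        """Fetch-and-conditional-add (assumed form): add only while the
        result stays within bound; always return the old value."""
        with self._lock:
            old = self._reg[i]
            if old + addend <= bound:
                self._reg[i] = old + addend
            return old

    def swap(self, i, value):
        """Atomically exchange the register contents with a new value."""
        with self._lock:
            old, self._reg[i] = self._reg[i], value
            return old

g = GlobalRegisters(4)
```

A spin lock, for instance, falls out of tas directly: a processor owns the lock when tas first returns zero, and conditional resource counters fall out of fca in the same way.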
Referring now to Figs. 8a and 8b, the software architecture of the present invention is comprised of a SSI/mOS 1000 capable of supporting shared image process groups and an integrated parallel user environment 2000 having a common visual user interface. The software architecture of the present invention makes use of the features of the preferred multiprocessor system in implementing parallelism by default in a multiprocessor environment. It will be recognized that although the present invention can make use of the various features of the preferred multiprocessor system, the software architecture of the present invention is equally applicable to other types of multiprocessor systems that may or may not incorporate some or all of the hardware features described above for supporting parallelism in a multiprocessor system.
The SSI/mOS 1000 controls the operation and execution of one or more application and development software programs and is capable of
supporting one or more multithreaded programs that comprise such software programs. The SSI/mOS 1000 is comprised of a multithreaded operating system kernel 1100 for processing multithreaded system services, and an input/output section 1200 for processing distributed, multithreaded input/output services. A single image of the SSI/mOS 1000 is stored in the main memory 14 of each cluster 40.
The operating system kernel 1100 includes a parallel process scheduler 1110, a parallel memory scheduler 1120 and a multiprocessor operating support module 1130. The parallel process scheduler 1110 schedules multiple processes onto multiple processors 10. The parallel memory scheduler 1120 allocates shared memory among one or more multiple processes for the processors 10. The multiprocessor operating support module 1130 provides accounting, control, monitor, security, administrative and operator information about the processors 10.
Associated with the operating system kernel 1100 is a multithreaded interface library (not shown) for storing and interfacing common multithreaded executable code files that perform standard programming library functions.
The input/output section 1200 includes a file manager 1210, an input/output manager 1220, a resource scheduler 1230 and a network support system 1240. The file manager 1210 manages files containing both data and instructions for the software programs. The input/output manager 1220 distributively processes input/output requests to peripheral devices 32 attached to the multiprocessor system. The resource scheduler 1230 schedules processes and allocates input/output resources to those processes to optimize the usage of the multiprocessor system. The network support system 1240 supports input/output requests to other processors (not shown) that may be interconnected with the multiprocessor system. In the preferred embodiment, the file manager 1210 includes a memory array manager 1212 for managing virtual memory arrays, an array file manager 1214 for managing array files having superstriping, and a file cache manager 1216 for managing file caching.
The integrated parallel user environment 2000 is used to develop, compile, execute, monitor and debug parallel software code. It will be understood that with the integrated parallel user environment 2000 of the present invention the entire program need not be executed on a multiprocessor system, such as the clusters 40 previously described. For
example, the development of the parallel software code may occur using a distributed network with a plurality of workstations, each workstation (not shown) capable of executing that portion of the integrated parallel user environment necessary to develop the source code for the parallel software code. Similarly, if the source code for a particular software program is not large, or if compilation time is not a critical factor, it may be possible to compile the source code using a workstation or other front-end processor. Other types of software programs may have only a portion of the source code adapted for execution on a multiprocessor system.
Consequently, the user application program may simultaneously be executing on a workstation (e.g., gathering raw data) and a multiprocessor system (e.g., processing the gathered data). In this situation, it is necessary for the execution, monitoring and debugging portions of the integrated parallel user environment 2000 to be able to act in concert so that both portions of the software program can be properly executed, monitored and debugged.
The integrated parallel user environment 2000 includes a program manager 2100, a compiler 2200, a user interface 2300, and a distributed debugger 2400. The program manager 2100 controls the development environment for a source code file representing a software program. The compiler 2200 is responsible for compiling the source code file to create an object code file comprised of one or more threads capable of parallel execution. The user interface 2300 presents a common visual representation to one or more users of the status, control and execution options available for executing and monitoring the executable code file during the time that at least a portion of the executable code file is executed on the multiprocessor system. The distributed debugger 2400 provides debugging information and control in response to execution of the object code file on the multiprocessor system.
The compiler 2200 includes one or more front ends 2210 for parsing the source code file and for generating an intermediate language representation of the source code file, an optimizer 2220 for optimizing the parallel compilation of the source code file, including means for
generating machine independent optimizations based on the intermediate language representation, and a code generator 2230 for generating an object code file based upon the intermediate language representation, including means for generating machine dependent optimizations.

The user interface 2300 includes link means 2310 for linking the object code version of the user application software program into an executable code file to be executed by the multiprocessor system, execution means 2320 for executing the multithreaded executable code file in the multiprocessor system, and monitor means 2330 for monitoring and tuning the performance of the multithreaded executable code files, including means for providing the status, control and execution options available for the user. In the preferred embodiment of the user interface 2300, the user is visually presented with a set of icon-represented functions for all of the information and options available to the user. In addition, an equivalent set of command-line functions is also available for the user.
The distributed debugger 2400 is capable of debugging optimized parallel executable code across an entire computer network, including the multiprocessor system and one or more remote processors networked together with the multiprocessor system. It will be recognized that the optimized parallel object code produced by the compiler 2200 will be substantially different than the non-optimized single processor object code that a user would normally expect as a result of the compilation of his or her source code. In order to accomplish debugging in this type of distributed environment, the distributed debugger 2400 includes first map means 2410 for mapping the source code file to the optimized parallel executable code file of the software program, and second map means 2420 for mapping the optimized parallel executable code file to the source code file of the software program.
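The pair of map means can be sketched as two inverse lookup tables built from line-table records; the record shape is a hypothetical stand-in for what an optimizing compiler emits. After optimization, one source line may map to several object addresses, and one address (for example, inlined code) may map back to several source lines, which is why both directions must be kept.

```python
def build_debug_maps(line_table):
    """line_table: iterable of (source_line, object_address) pairs,
    an assumed stand-in for compiler-emitted debug records."""
    src_to_obj, obj_to_src = {}, {}
    for line, addr in line_table:
        src_to_obj.setdefault(line, []).append(addr)   # first map means
        obj_to_src.setdefault(addr, []).append(line)   # second map means
    return src_to_obj, obj_to_src

# Hypothetical records: source line 12 was split across two addresses,
# and address 0x40 holds code inlined from lines 12 and 30.
table = [(12, 0x40), (12, 0x44), (30, 0x40)]
src_to_obj, obj_to_src = build_debug_maps(table)
```

A breakpoint request on line 12 consults the first map to plant traps at every generated address, while a fault at 0x40 consults the second map to report all the source lines involved.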
The primary mechanism for integrating the multithreaded operating system 1000 and the parallel user environment 2000 is a set of data structures referred to as the Operating System Shared Resources (OSSR) 2500 which are defined in relation to the various hardware shared resources 12, particularly the common shared main memory 14 and the global registers 16. The OSSR 2500 is a set of data structures within the SSI/mOS 1000 that define the allocation of global registers 16 and main memory 14 used by the operating system 1000, the parallel user environment 2000, and the distributed input/output architecture via the external interfaces 22 and the main memory 14.
When a shared image process group is created, part of the context of the shared image process group is a dynamically allocated set of global registers that the shared image process group will use. Each shared image process
group is allocated one or more work request queues in the set of global registers. In the preferred embodiment, the sets of global registers are defined by the operating system in terms of absolute addresses to the global registers 16. One of the global registers is designated as the total of all of the outstanding help requests for that shared image process group. By convention, the help request total is assigned to G0 in all sets of global registers. In the situation where the processor looking for work is executing a microprocess or a process that is assigned to the same shared image process group as the global register with the help request total (i.e., intra-process context switch), the resulting switch overhead is minimal as no system related context expense is required to perform the requested work. If the processor looking for work in a given help request total (G0) is executing a microprocess not assigned to the same shared image process group, the processor executing the microprocess must first acquire the necessary microprocess context of the shared image process group for this global register set before examining the help request queues.
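The G0 convention can be sketched as follows. The data shapes are illustrative assumptions, but the decision follows the text: answering a help request within one's own shared image process group is nearly free, while answering another group's request first requires acquiring that group's microprocess context.

```python
G0 = 0   # by convention, register 0 of each set holds the help-request total

def look_for_work(my_group, register_set):
    """register_set: {'group': id, 'registers': [...]}, a hypothetical
    shape for one shared image process group's global register set."""
    if register_set["registers"][G0] == 0:
        return "no outstanding help requests"
    if register_set["group"] == my_group:
        # intra-group: minimal overhead, no system context to load
        return "examine help request queues directly"
    # inter-group: load that group's microprocess context first
    return "acquire microprocess context, then examine queues"

regs = {"group": 7, "registers": [3, 0, 0, 0]}   # 3 outstanding requests
```

An idle processor would typically test G0 with an atomic fetch so that two processors cannot both claim the last outstanding request.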
In the preferred embodiment, the OSSR 2500 is accessible by both the processors 10 and the external interface ports 22. The accessibility of the OSSR 2500 by the external interface ports 22 enables the achievement of a distributed input/output architecture for the preferred multiprocessor clusters 40. While it is preferred that the multiprocessor system allow the external interface ports 22 to access the OSSR 2500, it will also be recognized that the OSSR 2500 may be accessed by only the processors 10 and still be within the scope of the present invention.
An integral component of the parallel user environment 2000 is the intermediate language representation of the object code version of the application or development software program referred to as HiForm (HF)
2600. The representation of the software programs in the intermediate language HF 2600 allows the four components of the parallel user environment, the program management module 2100, the compiler 2200, the user interface 2300 and the distributed debugger 2400, to access a single common representation of the software program, regardless of the programming language in which the source code for the software program is written.
As part of the compiler 2200, an enhanced Inter-Procedural Analysis (IPA) 2700 is used by the parallel user environment 2000 to increase the value and utilization of the HF representation 2600 of a software program.

The IPA 2700 analyzes the various relationships and dependencies among the procedures that comprise the HF representation 2600 of a software program to be executed using the present invention.
Unlike prior art operating systems, the present invention can perform repeatable accounting of parallel code execution without penalizing users for producing parallel code. Also, unlike prior art user interfaces, the present invention provides a parallel user environment
with a common visual user interface that has the capability to effectively monitor and control the execution of parallel code and effectively debug such parallel code. The end result is that the software architecture of the present invention can provide consistent and repeatable answers using traditional application programs with both increased performance and throughput of the multiprocessor system, without the need for extensive rewriting or optimizing of the application programs. In other words, the software architecture implements parallelism by default for a multiprocessor system. Because of the complexity and length of the preferred embodiment of the present invention, a table of contents identifying the remaining section headings is presented to aid in understanding the description of the preferred embodiment.

1.0 OPERATING SYSTEM
1.1 SSI/mOS Kernel Overview
1.2 Process Management
1.2.1 Elements of System V Processes
1.2.2 Architectural Implications
1.2.3 SSI/mOS Implementation of Processes
1.3 File Management
1.3.1 Elements of System V File Management
1.3.2 Architectural Implications
1.3.3 SSI/mOS Implementation of Files
1.4 Memory Management
1.4.1 Elements of System V Memory Management
1.4.2 Management of Data Memory
1.4.3 Management of Secondary Memory Storage
1.5 Input/Output Management
1.5.1 Elements of System V Input/Output Management
1.5.2 Architectural Implications
1.5.3 SSI/mOS Input/Output Management
1.6 Resource Management and Scheduling

1.6.2 Role of the NQS Queue System
1.6.3 Resource Categories
1.6.4 Resource Management and Resource Scheduling Requirements
1.7 Network Support
1.8 Administrative and Operator Support
1.9 Guest Operating System Support

2.0 PARALLEL USER ENVIRONMENT
2.1 User Interface
2.2 Program Management
2.3 Compiler
2.3.1 Front Ends
2.3.2 P…
2.3.3 HiForm (HF) Intermediate Language
2.3.4 Optimizer
2.3.4.2 Control Flow Graph
2.3.4.3 Local Optimizations
2.3.4.4 Global Optimizations
2.3.4.5 Vectorization
2.3.4.6 Automatic Multithreading
2.3.4.7 In-lining
2.3.4.8 Register and Instruction …
2.3.4.9 Look Ahead Scheduling
2.3.4.10 Pointer Analysis
2.3.4.11 Constant Folding
2.3.4.12 P…
2.3.4.13 Variable to Register Mapping
2.3.5 Interprocedural Analysis
2.3.6 Compilation Advisor
2.4 Debugger


2.4.1 Distributed Design for Debugger
2.4.2 Use of Register Mapping by Debugger
2.4.3 Mapping Source Code to Executable Code
2.4.4 Debugging Inlined Procedures
2.4.5 Dual Level Parsing

1.0 THE OPERATING SYSTEM
The operating system component of the software architecture of the present invention is a SSI/mOS that is fully integrated and capable of multithreading support. The preferred embodiment of the operating system of the present invention is based on a Unix System V operating system, AT&T Unix, System V, Release X, as validated by the System V
Validation Suite (SVVS). For a more detailed understanding of the operation of the standard AT&T Unix operating system, reference is made to Bach, M., The Design of the Unix Operating System (Prentice Hall 1986).
Although the preferred embodiment of the present invention is described in terms of its application to a System V-based operating system, it will be recognized that the present invention and many of the components of the present invention are equally applicable to other types of operating systems where parallelism by default in a multiprocessor operation is desired.
Traditional System V operating systems are based on a kernel concept. The extensions to the traditional System V kernel that comprise the operating system of the present invention include kernel enhancements and optimizations to support multiple levels of parallel processing. The operating system of the present invention also contains additions required for the management and administration of large multiprocessor systems. For example, the operating system can manage large production runs that use significant amounts of system resources and require advanced scheduling, reproducible accounting, and administrative tools. Each processor 10 in a cluster 40 runs under the same Supercomputer Symmetrically Integrated, multithreaded Operating System (hereinafter referred to as SSI/mOS). There is one instance of SSI/mOS stored in the main memory 14, portions of which can execute on any number of processors 10 at any one time. For increased efficiency in a multi-cluster embodiment of the preferred embodiment, a copy of the instance of SSI/mOS is maintained in the physical portion of main memory 14 for each cluster 40.
SSI/mOS fully supports parallel processing, multithreading, and automatic multithreading. Its multithreaded kernel efficiently schedules multiple parallel processors 10 and synchronizes their access to shared resources 12. Additions to the System V kernel include extended concurrency and several new types of processes: shared image processes, cooperating processes, multithreaded, parallel system processes (kprocs), interrupt processes (iprocs), and microprocesses (mprocs). The SSI/mOS
kernel protects internal data structures while kernel operations are occurring simultaneously in two or more processors 10. As a result, individual system requests can take advantage of multiple processors 10, and system functions can be distributed among the available processors 10.
SSI/mOS also significantly extends the System V memory scheduling mechanism by implementing a selective swapping feature.
The selective swapping feature of the present invention reduces swapping overhead by swapping out only those processes which will facilitate swapping in another process. As described in greater detail hereinafter, partial swapping allows mixing of very large memory processes with smaller ones, without incurring the undue system overhead that results when large processes are completely swapped.
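One way to read the selective-swapping idea is sketched below. The smallest-first policy is an illustrative assumption, not the patent's algorithm; the point is only that the scheduler evicts the few processes whose removal actually admits the incoming one, instead of swapping wholesale.

```python
def select_swap_victims(resident, needed_pages):
    """resident: {pid: pages}. Return a victim list whose freed pages
    admit the incoming process, or None if swapping cannot help.
    Illustrative policy: evict smallest processes first, so a very
    large job can keep running alongside smaller ones."""
    victims, freed = [], 0
    for pid, pages in sorted(resident.items(), key=lambda kv: kv[1]):
        if freed >= needed_pages:
            break
        victims.append(pid)
        freed += pages
    return victims if freed >= needed_pages else None

# A 900-page job keeps running while two small ones are swapped out.
victims = select_swap_victims({"big": 900, "small1": 40, "small2": 60}, 80)
```

A production implementation would weigh priority, residence time, and partial-swap opportunities as well, but the selection step above captures the "only what facilitates the swap-in" criterion.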
In the distributed input/output architecture associated with the preferred embodiment of SSI/mOS, device driver software connects the peripheral devices and interfaces 32, such as networks, tape units, and disk drives, to the multiprocessor cluster 40. Operating system driver code also communicates with various network interfaces. The SSI/mOS supports the Transmission Control Protocol/Internet Protocol (TCP/IP) for connections to other systems supporting TCP/IP. SSI/mOS provides a Network File System for efficient file sharing across systems. While the operating system driver code is fully integrated into the SSI/mOS operating system, all device drivers in the preferred embodiment are based on established software technology.

1.1 SSI/mOS Kernel Overview

Referring now to Figs. 9a and 9b, the main components in the SSI/mOS 1100 are shown in relation to traditional System V-like functions. In this block diagram, the user environment 2000 is

represented at the top of ~e diagram and the hardware associated wi~h the preferred embodiment of the multiprocessor sys~em represented at the bottom, with t~he o~ra~ng system 1000 shown in between, The operating system kernel 1100 is generally shown on the right of SSI/mOS I000 and 5 the input/output section 1200 is shown on the left of SSI/mOS 1000.
The executable code file SSI/mOS opera~ng system kernel 110û is always resident in ~e main memory 14. In those situations where the user application programs requlres an oper~dng system function, it is necessary to per~rm a context switch from the useI application program 10 to the operating system kernel 1100. There are a limited number of situations when the program flow of a user application program mnning in the processor lD will be switched to the SSI/mC)S kernel 1100. Three events can cause a context switch from an application program into the SSI/mOS kernel 1100: interrupts, exceptions, and t~aps.
IntelTupts are events which are outside the control of the cu~rently execu~ng program, and which preennpt the processor 10 so that it may be used for other purposes. In the preferred embodiment, an interrupt may be caused by: (1) an input/output device; (2) another processor, via the signal instruction; or (3) an interval timer (IT~ associated with the 20 processor 10 reaching a negative value. In the preferaed processor 10, interrupts may be masked via a System Mask (SM) register. If so, pending interrupts are held at the processor ulltil the mask bit is deared. If multiple interrupts are received before the first one takes effect, the subsequent interrupts ~do not have any additional effect. Interrupt-25 handling software in the SSI/mOS kemel 1000 determines via softwareconvention the source of an interrupt from other processors 10 or from external interface ports 22. In the preferred embodiment, the SSI/mOS
kernel 1100 supports both event-driven and pollin~-derived interrupts.
An exception terminates the currently executing program because of 30 some irregularity in its execution. As described in greater detail in the parent application, the various causes for an exception in the preferred embodiment are: (1) Operand Range Error: a data read or write cannot be mapped; (2~ Progra~ Range Error: an instruction ~etch cannot be mapped;
(3) Write Protect violation: a data write is to a protected segment; (4) Double bit ECC error; (5) Floating-point exception; (6) Instruction protection violation: an attempt to execute certain privileged instructions from non-privileged code; (7) Instruction alignment error: a two-parcel
WO 91/20033 PCT/US91/04066

instruction in the lower parcel of a word; and (8) Invalid value in the SM
(i.e., the valid bit not set). In general, exceptions do not take effect immediately; several instructions may execute after the problem instruction before the context switch takes place. In the preferred processor 10, an exception will never be taken between two one-parcel instructions in the same word. Some exceptions may be controlled by bits in the User Mode register. If masked, the condition does not cause an exception.
A voluntary context switch into the SSI/mOS kernel 1100 can be made via the trap instruction. In the preferred embodiment, a System Call Address (SCA) register provides a base address for a table of entry points, and the entry point within the table is selected by the 't' field of the instruction. Thus, 256 separate entry points are available for operating system calls and other services requiring low latency access to privileged code. The SSI/mOS kernel 1100 takes advantage of this hardware feature to execute system calls with a minimum of overhead due to context saving. Some system calls can be trapped such that context is saved. Traps also facilitate a Fastpath to secondary memory. Unlike interrupts and exceptions, a trap is exact; that is, no instructions after the trap will be executed before the trap takes effect. The operating system returns to the program code via the trap return. The trap return operation, caused by the rtt instruction, is also used whenever the operating system wishes to cause a context switch to do any of the following: (1) Restart a program that was interrupted or had an exception; (2) Return to a program that executed a trap instruction; (3) Initiate a new user program; and (4) Switch to an unrelated system or user mode thread.
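The entry-point selection described above can be sketched in a few lines. This is an illustrative model only: the spacing between entry points in the SCA table is an assumption for the sketch, not a value taken from the patent.

```python
# Hypothetical sketch of trap entry-point selection: the System Call
# Address (SCA) register supplies a table base address, and the 't'
# field of the trap instruction selects one of 256 entry points.
ENTRY_SPACING = 4  # words between entry points; illustrative assumption


def trap_entry_address(sca_base: int, t_field: int) -> int:
    """Return the entry-point address selected by the trap's 't' field."""
    if not 0 <= t_field < 256:
        raise ValueError("'t' field must select one of 256 entry points")
    return sca_base + t_field * ENTRY_SPACING
```

With this layout the table spans 256 * ENTRY_SPACING words starting at the SCA base, which is how a single register can dispatch to 256 distinct low-latency services.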
An interrupt takes precedence over an exception if: (1) an interrupt occurs at the same time as an exception; (2) an interrupt occurs while waiting for current instructions to complete after an exception; or (3) an exception occurs while waiting for instructions to complete after an interrupt. In these cases, the cause of the exception will be saved in the ES (Exception Status) register. If the interrupt handler in the SSI/mOS kernel 1100 re-enables exceptions, or executes an rtt instruction, which re-enables exceptions, the exception will be taken at that time.
There is a common method of responding to interrupts, exceptions, and traps. Figs. 10a and 10b show how a handler routine 1150 handles a context switch. At step 1151, the handler routine 1150 saves the registers

in the processor 10 that the handler routine 1150 is to use, if it is to return to the suspended program with those registers intact. In the preferred embodiment, this includes either a selected group of registers or all of the registers for the processor, depending upon the type of process executing in the processor 10. At step 1152, the handler routine 1150 waits for a word boundary or completion of a delayed jump. That is, if the next instruction waiting to issue is the second parcel of a word, or is a delay instruction following a delayed jump, it waits until it issues. (This step is not done for trap instructions.) At step 1153, the handler routine 1150 moves the Program Counter (PC) register (adjusted so that it points to the next instruction to be executed) into the Old Program Counter (OPC) register, and the System Mask (SM) register into the Old System Mask (OSM) register. At step 1154, the handler routine 1150 loads the PC register from the Interrupt Address (IAD) register, the Exception Address (EAD) register, or the System Call Address (SCA) register, depending upon which type of context switch is being processed. (If the SCA register is selected, the shifted 't' field in the instruction is used to form one of 256 possible entry points.) At step 1155, the SM register is set to all ones. This disables interrupts and exceptions, disables mapping of instructions and data, and sets privileged mode. At step 1156, execution is resumed at the new address pointed to by the PC register.
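The register movements in steps 1153 through 1155 might be modeled schematically as follows. This is an illustrative sketch of the sequence, not the actual privileged code: the register file is a plain dictionary, the 64-bit SM width and the 't'-field shift amount are assumptions made for the example.

```python
# Schematic model of the handler routine's context-switch sequence
# (steps 1153-1155): PC -> OPC, SM -> OSM, PC reloaded from IAD/EAD/SCA,
# then SM set to all ones (privileged mode, interrupts/exceptions and
# mapping disabled).
def handle_context_switch(regs: dict, kind: str, t_field: int = 0) -> dict:
    saved = dict(regs)                    # step 1151: save working registers
    regs["OPC"] = regs["PC"]              # step 1153: PC -> OPC (pre-adjusted)
    regs["OSM"] = regs["SM"]              #            SM -> OSM
    if kind == "interrupt":               # step 1154: select new PC source
        regs["PC"] = regs["IAD"]
    elif kind == "exception":
        regs["PC"] = regs["EAD"]
    else:                                 # trap: SCA plus shifted 't' field
        regs["PC"] = regs["SCA"] + (t_field << 2)   # shift amount assumed
    regs["SM"] = (1 << 64) - 1            # step 1155: all ones in SM
    return saved                          # for the eventual trap return
```

The returned copy stands in for the saved context that the rtt instruction would later restore when the suspended program is restarted.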

1.2 Process Management
Section 1.2 describes processes and process management under SSI/mOS. This information is presented in three sections. Section 1.2.1 briefly describes the standard functions and characteristics of System V
processes and their management retained in SSI/mOS. Section 1.2.2 lists those features and functions of the cluster architecture of the preferred embodiment of the multiprocessor system that impose special operating system requirements for processes and process management. Section 1.2.3 describes the additions and extensions developed within SSI/mOS as part of the objectives of the present invention.

1.2.1 Elements of System V Processes
In addition to being validated by the System V Validation Suite (SVVS), SSI/mOS provides System V functionality for processes. A single thread runs through each process. A process has a process image,
memory, and files. Each standard process has a unique hardware context;
registers and memory are not shared except during inter-process communications (IPC). Standard process states exist (user, kernel, sleeping). Finally, System V IPC elements are used.
1.2.2 Architectural Implications
The design of the cluster architecture of the preferred embodiment focuses on providing the most efficient use of system resources. Several architectural features have direct implications for processes and their management. For example, multiple processors 10 are available per cluster 40 to do work on a single program using the mechanisms of microprocesses and shared image processes. One or more processors work on one or more microprocesses initiated by a single program. The processors 10 are tightly coupled and share a common main memory 14 to enhance communications and resource sharing among different processes.
Another important architectural feature is that multiple input/output events go on within a single process image. The concurrent processing of interrupts is an example. As shown in Fig. 11, an interrupt causes the processor to branch to a computational path while the interrupt is processed. Although the processor is idled (sleeps) during the actual data transfer, there is no context switch; computations continue, and the new data is available and used after the paths are synchronized. Input/output events are initiated in parallel with each other and/or with other computational work.
The present invention allows for processes at a small level of granularity to obtain the most effective use of the system's multiple processors, architecture, and instruction set. For example, small granularity threads are scheduled into small slots of available processor time, thereby maximizing utilization of the processors. This is accomplished by the use of the mprocs as described in greater detail hereinafter.
The cluster architecture of the preferred embodiment also allows the operating system of the present invention to save a number of context switches by minimizing the size of saved context and by delaying context switches. Major context switches are deferred to process switch
times. The amount of context saved at trap (system call) or interrupt time is minimized.

1.2.3 SSI/mOS Implementation of Processes
To support a multiprocessing kernel, SSI/mOS redefines several System V process-related elements. In addition to the types of processes and process-related elements previously defined, the present invention implements several new process-related elements, as well as improving several present process-related elements, including:
Microprocess (mproc) - A microprocess is created by a help request from an existing process. A typical example of a microprocess is a thread of execution being initiated by the user-side scheduler (USS). To save overhead, a microprocess does not sleep (i.e., is not rescheduled by System V), because it is expected to have a relatively short life span.
When an event occurs that requires a microprocess to go to sleep (such as a blocking system call), the system converts the microprocess to a full context process and reschedules it via the usual kernel process scheduling mechanisms. After a microprocess begins execution on a processor, its context consists primarily of the current contents of the processor registers.
As previously stated, SSI/mOS kernel code executed on behalf of a microprocess will force its conversion into a full context process should the microprocess block for any reason.
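The conversion policy just described might be sketched as follows. The class and field names are illustrative inventions for the sketch; the point is only the rule itself: a microprocess carries minimal (register-only) context and never sleeps, and any blocking event promotes it to a full-context process handed to the normal scheduler.

```python
# Sketch of microprocess-to-process conversion: a microprocess holds
# only register context; if it blocks, it acquires full context and is
# rescheduled via the usual kernel scheduling mechanisms.
class Microprocess:
    def __init__(self, registers):
        self.registers = registers    # minimal context: register contents only
        self.full_context = False     # not yet a full System V process

    def block(self, run_queue):
        """A blocking event (e.g. a blocking system call) forces conversion."""
        self.full_context = True      # promote to full-context process
        run_queue.append(self)        # reschedule through the normal queue
```

Because a non-blocking microprocess never touches the run queue, the short-lived common case pays none of the usual process-scheduling overhead.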
Shared Image Processes - In addition to the definition previously set forth, it will be recognized that both processes and microprocesses can be shared image processes. Processes have full context as opposed to microprocesses that have a minimum context.
Cooperating Process - This term is used to identify those processes that are sharing (and are thus synchronizing through) a single set of global registers. This means the value in the global register control register is the same for each cooperating process. By default, each microprocess is a cooperating process with its respective initiating process. Shared image processes may or may not be cooperating processes, although by default they are. Through the use of system calls, non-shared image processes can become cooperating processes.
Processor Context - Each process has processor context. In the preferred embodiment, processor context includes the scalar, vector, and global registers being used by the process or microprocess, plus the control register settings that currently dictate the execution environment. To allow the process to continue executing at its next scheduling interval, a subset of this processor context is saved across interrupts, exceptions, and traps. Exactly what is saved depends on the source of the event triggering the context switch.
Switch Lock - Switch locks are used for longer locks in the kernel proper, but not for locks that require an interrupt to be released. A switch lock causes a waiting process to stop executing but places it on the run queue for immediate rescheduling.
Autothreads - Autothreads are part of the automatic parallelization that is a product of the compiler as discussed in greater detail hereinafter.
An autothread within compiled code makes an SSI/mOS kernel request for a specified number of microprocesses. The number given is based on the currently available number of processors. A processor can serially run several autothreads in the same microprocess without going back to the autothread request stage. This is very efficient since it results in fewer kernel requests being made. If an autothread requests system work which requires a context switch, then the autothreads are scheduled into shared image processes. Short-lived, computation-only autothreads do not assume the overhead of process initialization. Minimizing overhead provides additional support for small granularity parallel performance.
The operating system can automatically convert autothreads into shared image processes, depending on the functions and duration of the autothread.
System Process (kproc) - A kproc is a process that facilitates the transmission of asynchronous system calls. When system call code is running in another processor, or has been initiated by user code via the system call interface, kprocs enable system call code and user code to run in parallel.
Interrupt Process (iproc) - An iproc may be a process that acts as a kernel daemon. It wakes up to process the work created when an interrupt occurs, such as a series of threads that must be performed in response to an interrupt sent by an external processor or device.
Alternatively, an iproc is initiated when an interrupt occurs.
Traditionally, this interrupt processing has been done by input/output interrupt handlers.

In the present invention, microprocesses are created as an automatically multithreaded program is executed. An existing process posts a request in a global register asking that a microprocess or microprocesses be made available. At this point, any available processor can be used as a microprocess. It will be noted that System V mechanisms can also create microprocesses, iprocs, kprocs, and shared image processes
as well as traditional System V processes using the present invention.
When an exception occurs in SSI/mOS, the user can control the termination of multiple processes. In the preferred embodiment, the default is the traditional System V procedure, that is, to terminate all processes on an exception.
The SSI/mOS scheduler is a multithreaded scheduler called the dispatcher 1112 (Fig. 9). There is no preferential scheduling of processes.
The scheduling uses an anarchy-based scheme: an available processor automatically looks for work. As a result, several processors may be trying to schedule work for themselves at any one time, in parallel.
The dispatcher 1112 manages the progress of processes through the states as shown in Fig. 12. Processors 10 use the dispatcher portion of SSI/mOS to check for the highest priority process or microprocess that is ready to run. Kprocs, iprocs, and mprocs will each be a separate scheduling class. Requests by a process (usually a shared image process group) for work to be scheduled will increment a value in one of the global registers 16 that is associated with that process. The specified global register is chosen by convention as described in greater detail hereinafter and will be referred to for the description of Fig. 12 as the Help Request Register (HRR). The increment of the HRR is an atomic action accomplished by use of one of the atomic resource allocation mechanisms associated with the OSSR's. At state 1163, the operating system 1000 has a processor 10 that can be scheduled to do new work. Based on site selectable options, the operating system can either (1) always choose to schedule processes first to state 1162 for traditional process scheduling and only if no executable process is found check the HRR in state 1165; or (2) always schedule some portion of the processors 10 to check the HRR in state 1165 to support parallel processing (and, in particular, the processing of mprocs) and schedule the remainder of the processors 10 to state 1162 for traditional process scheduling. This assignment balance between states 1162 and 1165 is modified in real time in accordance with a heuristic algorithm to optimize

the use of the multiprocessor system based on predictive resource requirements obtained from the accounting registers in the processor 10.
For example, all other conditions being equal, an available processor will be assigned to threads executing at the highest computation rate, i.e. the most efficient processors.
Processes that are sent to state 1165 and do not have any context to be saved at the point they reach state 1165 can become microprocesses. In state 1165, the microprocesses examine each group of global registers assigned to a shared image process group and, specifically, examine the HRR global register for that shared image process group. If the register is positive, then the shared image process group has requested help. The microprocess automatically decrements the count in the HRR
(thus indicating that one of the requests made to the HRR has been satisfied) and proceeds to state 1169 for scheduling by the User-Side Scheduler.
Shared image processes in a multithreaded program that have completed their current thread of execution will also check the HRR for additional threads to execute. Such processes that do not immediately find additional threads to execute will continue to check the HRR for a period of time that is set by the system, but modifiable by the user. In essence, this is the period of time during which it is not efficient to perform a context switch. If no requests to execute threads are found in the HRR for the shared image process group to which the process is presently scheduled, the process returns to the operating system through state 1164 and into state 1165 for normal process scheduling.
It will be noted that a multithreaded program will generally have different numbers of threads at different points in the execution of that program and therefore will be able to utilize different numbers of processors during the entire period of execution of the program. A feature of the present invention is the ability to efficiently gather additional processors from the operating system to be applied to a multithreaded program when that program has more threads than processors, and to return processors to the operating system when there are more processors than threads. When a processor enters a region where additional threads are available for execution, the processor 10 makes a request for additional processors 10 by incrementing the HRR and then proceeds to start executing the threads for which it requested assistance. Processors that are
executing in the same shared image process group and that are available to execute another thread check the value of the HRR to determine what available threads exist for execution in that shared image process group.
Microprocesses in the operating system will also examine the HRR for all of the shared image process groups executing in the multiprocessor system, looking for microprocess threads to execute. As previously mentioned, microprocesses have no context that must be saved because they are destructible upon exit and also require only a minimum amount of context in order to join in the execution of a multithreaded program as a microprocess. Microprocesses can thus be quickly gathered into the execution of a multithreaded program that has available work requests present in the HRR. Processors that are executing a multithreaded program but have no threads to execute will continue to look for additional threads in the shared image process group for the selectable period of time previously described. If a processor does not find additional threads in the allotted time, the processor performs a lightweight context switch to return to the operating system for the purpose of becoming available to execute microprocesses for other shared image process groups.
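The Help Request Register protocol described above might be modeled in a single-threaded sketch like the following. On the real hardware the decrement is one of the atomic global-register operations; plain arithmetic stands in for it here, and the function names are illustrative.

```python
# Minimal model of the HRR protocol: a program with spare threads posts
# help requests by incrementing its group's HRR; an idle processor scans
# every shared-image process group's HRR and, on finding a positive
# value, decrements it (atomically, on real hardware) and takes a thread.
def request_help(hrr: list, group: int, nthreads: int) -> None:
    """A program entering a parallel region posts work for its group."""
    hrr[group] += nthreads


def find_work(hrr: list):
    """An idle processor scans all groups for posted help requests."""
    for group, pending in enumerate(hrr):
        if pending > 0:
            hrr[group] -= 1      # claim one posted thread request
            return group
    return None                  # nothing posted: remain available
```

Because the only shared state is one counter per process group, processors can be gathered into, and released from, a multithreaded program without any heavyweight scheduler involvement, which is what makes the anarchy-based scheme cheap at small thread granularity.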

1.3 File Management
Section 1.3 describes files and file management under SSI/mOS.
This information is presented in three sections. Section 1.3.1 briefly describes the System V file functions and characteristics retained in SSI/mOS. Section 1.3.2 lists those features and functions of the cluster architecture that impose special operating system requirements for files and file management. Section 1.3.3 describes the additions and extensions developed within SSI/mOS to satisfy cluster architectural requirements for files and file management.

1.3.1 Elements of System V File Management
SSI/mOS implements the System V tree file system by supporting file access/transfer to all standard networks supporting standard character and block device drivers.

1.3.2 Architectural Implications
The cluster architecture supports multiple input/output streams, thus supporting disk striping and multiple simultaneous paths of access to the Secondary Memory System (SMS). The input/output concentrator (IOC) 24 distributes work across processors 10 and input/output logical devices 30. Low level primitives in the IOC 24 expose memory 14, SMS 28, and the global registers 16 to the system's device controllers. All the SMS
transfer channels can be busy at the same time. The cluster architecture also provides an optional expanded caching facility through the high bandwidth SMS, using the SMS to cache.

1.3.3 SSI/mOS Implementation of Files
SSI/mOS has two types of file systems. In addition to the System V
tree file system referred to in section 1.3.1, an array file system may also be implemented; this second type of file system is structured as an array file system. By adding a high performance array file system, SSI/mOS takes advantage of the multiple input/output streams provided in the cluster architecture, allowing optimal configurations of storage based on application characteristics. The array file system allows users to request, through the resource manager, enough space to run their applications, configured so as to allow maximum input/output throughput. Other features include: support for large batch users; support for a large number of interactive users; and enhanced support for parallel access to multiple disks within a single file system, i.e. disk striping.
Referring now to Fig. 13, one embodiment of the SSI/mOS array file system is shown. The size of the file system block is 32 kilobytes.
Allocation of space is controlled by the resource manager. Users can access data via System V read and write calls. The present invention also supports disk striping, whereby large blocks of data can be quickly read from/written to disk through multiple concurrent data transfers.

1.4 Memory Management
Section 1.4 describes memory and memory management under SSI/mOS. This information is presented in three sections. Section 1.4.1 briefly describes the standard functions of memory management that are retained in SSI/mOS. Section 1.4.2 describes the additions and extensions developed within SSI/mOS to satisfy cluster architectural requirements for the management of main memory. Section 1.4.3 describes the additions and extensions developed within SSI/mOS to satisfy cluster
architectural requirements for the management and utilization of the Secondary Memory System 28.

1.4.1 Elements of System V Memory Management
Although many tables and other memory-related elements are retained, the basic System V memory managing scheme has been replaced.

1.4.2 Management of Main Memory
Major changes have been made in the allocation and scheduling of memory in SSI/mOS as compared to standard System V. Before implementing a memory manager in the SSI/mOS kernel, the paging code in the original System V kernel is removed. This code is replaced by a memory manager which assumes a flat memory architecture. The memory manager tracks the current status of memory by mapping through the segment table entries. A set of routines is used to change that status.
The System V swapper has been optimized to reduce swapping overhead and to make use of the multiple data control registers. The role of the swapper is to determine which process images will be in memory at any given time. As shown in Fig. 14, the swap map parallels the memory map.
Segment code manages the sections of memory that contain a user's text, data, and shared memory. The segment code splits the segments to effectively use the multiple data segments in the control registers. If not enough contiguous memory is available to satisfy a request, then multiple data segments of smaller size are used. Segments are doubly linked by location and size. Fig. 15 shows how memory segments function.
Memory is managed via three doubly linked lists: (1) sloc - a dummy node heading a list of all memory segments, whether active or available, ordered by location; (2) savail - a dummy node heading a list of memory segments available to be allocated, ordered by descending size;
and (3) sactive - a dummy node heading a list of allocated memory segments, ordered by descending size. It will be noted that savail and sactive are mutually exclusive.
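The invariants of the three lists might be sketched as follows. Python lists and dictionaries stand in for the doubly linked lists and segment nodes of the actual implementation; only the ordering rules and the savail/sactive exclusivity are taken from the text.

```python
# Sketch of the three segment lists: sloc holds every segment ordered by
# location; savail and sactive hold available and allocated segments
# respectively, each ordered by descending size, and a segment appears
# on exactly one of savail/sactive.
def insert_segment(sloc, savail, sactive, seg, allocated):
    sloc.append(seg)
    sloc.sort(key=lambda s: s["loc"])          # sloc: ordered by location
    target = sactive if allocated else savail  # mutually exclusive lists
    target.append(seg)
    target.sort(key=lambda s: -s["size"])      # ordered by descending size
```

Keeping savail sorted by descending size makes a largest-fit allocation a head-of-list operation, while sloc's location ordering supports the adjacency checks used elsewhere (swap-out of adjacent processes, coalescing of splits).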
Referring now to Fig. 16, the selection of swap-out candidates will be described. The swapping overhead is reduced by making intelligent choices of processes to swap out. The System V swapping algorithm swaps
out processes based strictly on priority and age, regardless of whether enough memory will be freed for the incoming process. The SSI/mOS
swapper only swaps out processes which free the needed amount of memory. This is done by adding location and size to the criteria used for determining swap-out candidates. Multiple adjacent processes may be swapped out if together they free the amount of memory needed. System load and processor usage are criteria for choosing swap candidates.
Normally it is not efficient to swap out a very large process. However, if the system load is light, multiple processors can speed swapping out a large process so that many smaller processes can be efficiently swapped in and run. Processes that are not efficiently using their processors will be chosen to be swapped out before an equal priority process that is efficiently using its processors.
Referring now to Fig. 17, the process of splitting memory segments will be described. The swapper is modified to take advantage of the multiple data control registers. If enough memory is available for a given data segment, but is fragmented so that contiguous space cannot be allocated for the segment, the segment may be split into multiple pieces that can be allocated. The extra control registers are used to map the splits so that the user still sees one contiguous segment.
Fig. 18 shows the process of coalescing memory. The memory splits created above are merged back into one contiguous segment when they are swapped out. This process allows the segment to be resplit according to the configuration of memory at the time it is swapped in.
Referring now to Fig. 19, the concept of dual memory segments is illustrated. The swapping overhead is also reduced by keeping dual images of the process as long as possible. The swap image of the process is removed when the entire process is in memory, or when swap space is needed for another process to be swapped out. Dual image processes are prime candidates for swap out because their memory image may be freed without incurring the overhead of copying it out to the swap device.
Partial swapping is accomplished in SSI/mOS by the swapper routine. Partial swapping allows a portion of a very large job to be swapped out for a smaller process that is to be swapped in. When the smaller process finishes or is swapped out, the swapped portion is returned to its original location so that the larger job can proceed.

1.4.3 Management of Secondary Memory Storage
SSI/mOS provides an assortment of simple primitives that allow applications, in conjunction with the compiler and runtime libraries, to fully use the SMS 28. SSI/mOS provides a range of support for SMS usage in: a standard file system resident on secondary memory; an extended memory functionality for exceptional users; support for virtual arrays;
support for mapped files; file staging from high performance disk to SMS;
and file staging from archival storage to high performance disk.
Some applications need the SMS 28 to function like a disk with a file/inode orientation and System V interfaces. The resource manager allocates space for a file system on a per job basis. Optional disk image space is also available, as are the write-through attributes that make such space useful.
Other applications need the SMS 28 to function as an extended memory. This type of large allocation access to the SMS 28 is at the library level and very fast, exposing the power of the hardware to the user.
Consequently, there is a need to get away from the file/inode orientation.
As an extended memory, the latency between the SMS 28 and the main memory 14 is several microseconds. Compared to disk, the SMS 28 is microseconds away rather than seconds away.
A secondary memory data segment (SMDS) has been added to the SSI/mOS process model. An SMDS is a virtual address space. When a process is created, a data segment of zero length is created for it. The data segment defines some amount of area in secondary memory. Although the length of the originally issued data segment is 0, the programmer can use system calls to grow the data segment to the required size. Limits to the size of a data segment are controlled by the operating system and are site-tunable. The new system calls developed for SMS are described below in the System Calls section.
Since the SMS 28 in the preferred embodiment is volatile, that is, vulnerable to system power and connection failures, users can specify a write-through to disk. The files that are specified as write-through are first transferred to the SMS 28 and then written onto disk. Secondary memory is attached to a process in the same fashion as is main memory, and operates in much the same way. The user can alter and access SMS data segments through a series of new system calls.

New versions of the system break calls (i.e., the brk and sbrk calls) allow processes to dynamically change the size of the SMS data segment by resetting the process's SMS break value and allocating the appropriate amount of space. The SMS break value is the address of the first byte beyond the end of the secondary memory data segment. The amount of
One set of put/get system calls move data between the expanded SMS data segment and main memory in the normal way. A second set of put/get ealls uses the Fastpath mechanism to transfer data between a buffer and an SMS data segment. Pastpa~ allows a data t~ansfer to occur without the process having to give up the processor 10, and is used for 15 small trallsfers when ~rery low latency is required; both processes and microprocesses can h ansfer via Fastpatlh.
Mapped files are supported through ~e ~unap system call. Because there are a limited number of base and bounds registers, this may place some restrictions on the number of shared memory segments that a 20 program can access at one time. Otherwise, these call~ are supported as defined. lllis is System V except ~or the shared segznent~ restric~on.
Because the cluster architecture of the pre~rred embodiment does not h~ve a /irb~al memory, the process text and data segments for all ~obs have to fit into memory. To do this, the present invention provides 25 mapped files and virtual arrays. As shoum in Fig. 20, use of vlrtual fiie syste~s allows oversubs~iption of the SMS 28 and keeps oversubscriptlon manageable. Because the preferred multiprocessor system does not have hardware paging, software places a named object ~an array or common block) into virtual memory. On multiprocessor systems with SMS 28, 30 software pages this object ~rom SMS. Via this mechanism, the program and the virtual array/common blocks may exceed ~e size of memory. If a multiprocessor system lacks an SMS 28, paging is ac~omplished from a file.
This is an extension to the C and l~ortran languages and is non-System V.
The SMS 28 can be used to stage Bles. Users first stage their files to 35 the SMS 28, then they ~ssue reads from SMS ~8. For exa'mple, a file is moved from the disk subsystem into an SMS buffer. Files written to the SMS 28 from maln memory 14 can be staged for archival stores into fast WO 91t20033 Pcr~ussl/0~066 disk. Archival file staging is also available. Data is moved from remote archival storage to the SMS 2%, and from ~ere to high performance disks.

1.5 Input/Output Managem~nt Section 1.5 describes the managemellt of input/output under SSI/mOS. This information is presented in thr~ sections. ~ction 1.5.1 brie~y describes the standard elements of input/output management retained in SSI/mOS. Section 1.5.2 lists those features of ~he cluster architecture that impose special input/output requirements. Section 1.5.3 describes additions and extensions developed within SSI/mOS to satisfy architectural requ;rements f~r input~output management.

1.5.1 Elements of System V Input/Output Management
System V input/output management has been retained as follows: support for the standard block and character device interfaces; support for STREAMS connections; support for standard networking protocols, specifically TCP/IP; and support for ttys through rlogin and telnet. User logins are fairly standard, but the distributed input/output architecture realistically precludes the notion of a directly connected `dumb' terminal in the preferred multiprocessor system.

1.5.2 Architectural Implications
The distributed input/output architecture of the preferred embodiment places certain requirements on the way peripheral devices interact with the operating system.
The primary connection to the clusters 40 is through a 100 or 200 megabyte/sec HiPPI (High Performance Parallel Interface) channel adaptor. This adaptor serves as the interface between the cluster 40 and the ANSI standard HiPPI protocol. A special optical fiber version of the HiPPI adaptor allows peripherals to be connected in units of miles rather than feet. This adaptor is implemented according to the HiPPI optical fiber specification. Differences in adaptor requirements are handled by changes to on-board microcode rather than changes to system software.
As a result of the HiPPI channel implementation, peripheral devices have a pipe directly into main memory that is similar to a DMA component. Devices also have direct access to the multiprocessor system's global registers and operations, and they are able to directly read/write data to the SMS system. As a result, software running on peripheral devices can in most cases be considered as a logical extension of the operating system. Some implications of the input/output channel design are that peripheral device controllers have to be intelligent and programmable, and they must implement a low level HiPPI command protocol.

1.5.3 SSI/mOS Implementation of Input/Output Management
A number of enhancements are made to SSI/mOS to exploit the distributed input/output and channel architecture.
A second level cache is part of the SSI/mOS buffer caching scheme. This implementation provides two levels of caching to keep a maximum amount of data in close proximity to the central processors. Level one is the System V buffer caching scheme. Level two is comprised of larger, slower buffers on the SMS 28. The operating system directs the cache level used by an application according to the nature of its input/output requirements. The two-level cache is transparent to the application.
The distributed device drivers of the present invention are programmed so that some typical driver functions will execute in the peripheral device controller. This model is sometimes preferable, and is made possible by the fact that peripheral device controllers have access to kernel signals, main memory, secondary memory, and global registers.
Additional elements of System V input/output systems that have been modified in the present invention to achieve parallelism by default include: STREAMS, protocol implementations, and device drivers, which are all multithreaded in SSI/mOS.
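The two-level lookup described above (fast System V buffers first, larger SMS-resident buffers second) can be sketched as follows. The sizes, block arrays, and function name are illustrative assumptions, not the patent's actual data structures:

```c
#include <assert.h>

/* Sketch of the two-level buffer cache lookup: try the fast level-one
   (System V) buffer cache first, then fall back to the larger, slower
   level-two buffers on the SMS.  Returns 1 = level-one hit,
   2 = level-two hit, 0 = miss (the request goes to disk). */
enum { L1_SLOTS = 4, L2_SLOTS = 16 };
int l1_blk[L1_SLOTS];   /* block numbers resident in level one */
int l2_blk[L2_SLOTS];   /* block numbers resident in level two */

int cache_lookup(int blk)
{
    for (int i = 0; i < L1_SLOTS; i++)
        if (l1_blk[i] == blk) return 1;
    for (int i = 0; i < L2_SLOTS; i++)
        if (l2_blk[i] == blk) return 2;
    return 0;
}
```

In the real system the operating system, not the application, decides which level a given file's blocks occupy, which is what makes the scheme transparent.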
The peripheral controllers in the preferred embodiment can no longer be passive devices and are custom designed. Device controllers need to manipulate and access operating system data in main memory. For example, an intelligent device controller is able to process a signal instruction from the operating system, read an operating system command block in main memory, and then write completion status into the command block upon completion of the operation. The main memory address of the controller's command block is installed during system boot and initialization.
As shown in Fig. 21, SSI/mOS provides capabilities to build networking services using the STREAMS facilities. System V STREAMS have been multithreaded, adding multiprocessor support. The SSI/mOS version of the STREAMS product provides network capability with the network modules shown in Fig. 21. The TCP/IP code provides drivers, libraries and user commands. Host network connections are supported by linking device specific drivers to the protocol modules using STREAMS.
1.6 Resource Management and Scheduling
1.6.1 Introduction
The resource manager is a set of utilities that schedule jobs and allocate resources to them in such a way as to optimize the usage of the multiprocessor cluster 40. The resource manager allows system resources to be overcommitted without deadlocking and without aborting jobs. Jobs that do not have enough resources are held until resources become available. In this context, a job is a user's request to execute one or more processes, any of which may require system resources such as tape or disk drives, memory, processors, and so on.
The resource manager has two requirements of users: (1) the amount of resources that will be needed is specified in advance (statistics provided by the resource manager help users estimate the required resources); and (2) additional resources will not be requested until all currently held resources are released.
1.6.2 Role of the Network Queuing System (NQS)
Traditional supercomputer operating systems create a batch user and execution environment. Because System V operating systems create an interactive user and execution environment, their implementation on supercomputers requires the addition of support for the traditional batch user.
NQS is a suite of programs for the submission of jobs in a batch manner that can be done from any site across a network. NQS runs on a variety of hardware platforms (e.g., IBM 3090 computers, Sun workstations, DEC Vax computers). If all platforms are running NQS, then anyone at any node can submit a batch job from a terminal. In this way NQS creates a networked batch user environment that complements the System V networked interactive environment. NQS and any associated resource management support jobs that require lots of compute time and many resources. Power users will want large amounts of the machine in off hours, and if five or six power users demand 75% of the system at once, it is the function of the resource manager to make sure
that each user gets what they want, and to do so in such a manner that guarantees no deadlocks. Other kinds of jobs, jobs that do not require resources above a certain level, will run in the normal System V manner.

1.6.3 Resource Categories
Private resources are not shared. Examples include: tapes, cartridges, graphics terminals, specialized imaging hardware (medical, radar, ultrasonic), and other highly specialized devices. Semi-private resources are those such as optical disks and drives, and compact disks. Public resources are fully shared. Examples include processors, System V file systems, disks, main memory, secondary memory, and input/output channels. The resource manager is concerned primarily with non-shared resources.

1.6.4 Resource Management
A resource database map can be allocated dynamically. The resource database is alterable on the fly, and this bit-wise database map that is a part of the OSSR 2500 is owned by the resource manager and is usable only at privileged levels. The operating system and the resource manager share the database. Either one can add or delete system resources during operation. Large sections of each disk (areas other than those required by the file system and swap area) are used like memory. Tapes, memory, processors, input/output devices and channels are allocated or deallocated as necessary. Disk space is allocated by specific user request, rounded up to the nearest 32K block size for efficiency. Space is released as required or upon abort, and the area is patterned with a constant pattern.
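The bit-wise database map described above can be sketched with one bit per non-shared device, set meaning allocated. This fixed-size version is a hypothetical illustration of the allocate/release mechanics only; the real OSSR 2500 map is dynamic and privileged:

```c
#include <assert.h>
#include <limits.h>

/* One bit per resource; a set bit means the device is allocated. */
enum { NRES = 64, WBITS = sizeof(unsigned long) * CHAR_BIT };
unsigned long res_map[NRES / WBITS + 1];

/* Claim resource r: returns 0 on success, -1 if it is already held. */
int res_alloc(int r)
{
    unsigned long bit = 1UL << (r % WBITS);
    unsigned long *w = &res_map[r / WBITS];
    if (*w & bit) return -1;
    *w |= bit;
    return 0;
}

/* Release resource r back to the shared map. */
void res_release(int r)
{
    res_map[r / WBITS] &= ~(1UL << (r % WBITS));
}
```

Because both the operating system and the resource manager share the real map, updates there would additionally need the privileged-level locking the text alludes to.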
The only time that a job will need to be restarted is due to the physical failure of a resource, for example, when a disk or tape drive is disabled. Access failures are treated similarly, and may require rescheduling of a job. The resource manager provides for deallocation in case any resource is temporarily unavailable.

1.6.5 Resource Scheduling
Some jobs, usually small, may be finished in minutes or a few hours. These jobs are scheduled interactively. High priority jobs, large or small, are scheduled when the user does not care how much it costs and is willing to pay any amount to get the job completed. If the user does not use the resource manager, these jobs may not run optimally. Deadline-scheduled jobs may be scheduled hours, days, or weeks in advance. This kind of scheduling works well with Dijkstra's "Banker's Algorithm," where the amount and kinds of resources required are known in advance.
Several scheduling assumptions are used by the resource manager, including: some threads are purely computational, do not explicitly require automatic scheduling of resources, and are scheduled first; among jobs scheduled through the resource manager, shorter jobs requiring fewer resources generally run first; the largest jobs, i.e., those the scheduler postpones, run only when the system has enough resources, and smaller jobs are interleaved whenever possible; and the priority of deadline-scheduled jobs increases as the deadline approaches, assuming that the deadline is reasonable.
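The Banker's Algorithm mentioned above hinges on a safety check: grant resources only if some completion order still lets every job finish. A minimal single-resource sketch (the job data and function name are hypothetical):

```c
#include <assert.h>

/* Single-resource Banker's safety test: with `avail` units free, can the
   n jobs (each holding hold[i] units and needing at most max[i] in total)
   all run to completion in some order?  Greedily finish any job whose
   remaining need fits in the free pool; a finished job releases its hold. */
int is_safe(int avail, const int *hold, const int *max, int n)
{
    int done[16] = {0}, progress = 1, finished = 0;
    while (progress) {
        progress = 0;
        for (int i = 0; i < n; i++) {
            if (!done[i] && max[i] - hold[i] <= avail) {
                avail += hold[i];   /* job completes and releases its units */
                done[i] = 1;
                finished++;
                progress = 1;
            }
        }
    }
    return finished == n;           /* 1 = safe, 0 = possible deadlock */
}
```

This is why the resource manager insists that maximum needs be declared in advance and that no job asks for more while holding resources: both conditions are what make the safety test computable.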
To the degree possible, the resource manager schedules the mix of jobs that uses the maximum number of resources at all times. All jobs entered via the resource manager are tracked for both accounting and security purposes. The record keeping feature provides cost/use information on a per-user and per-job basis. The resource manager allows the user or authorized personnel to cancel or alter the status of a job, for example, to restart a job or change its priority. It also allows the system administrator to increase the priority of a job in emergency circumstances.
As described in greater detail hereinafter, a screen-oriented interface is standardly available to help the user easily use the resource manager for scheduling jobs and for the presentation of any data requested by the user that involves his job.

1.6.6 Requirements
Using a resource manager sharply increases the performance for all jobs, especially large jobs. Therefore, the resource manager must be easy to use and have the features users need. The resource manager supports device requests in batch mode and batch jobs requiring no devices, such as processor-intensive jobs. The resource manager allows users to interact with their jobs and allows the system administrator to add or delete resources as necessary. Subsystem databases and binary and ASCII header files that are a part of the OSSR 2500 are subsequently updated. The resource database subsystem will allow the addition or deletion of

resources without interfering with the continued execution of the resource manager. Either the operator of the subsystem or the system administrator may restart, modify, prioritize jobs, or schedule controlled resources using the resource manager. The resource manager supports queue access restrictions, deadline scheduling, and networked output, where the stdout and stderr output of any request can be returned to any station. A status report of all machine operations is available to trace activity for debugging, accounting, and security purposes. The resource manager also supports clearing any temporarily reserved media, such as disk, CDs, and memory, for security.
In the preferred embodiment, the resource manager runs across all nodes, workstations, and all networked machines. If a cluster shuts down while jobs are running, the processes are suspended or switched to other clusters. The user has the option, in most cases, to decide to suspend his process or to switch it to another cluster. The resource manager can display network status and use information upon request or every five minutes. This information is logged and available to any user. Any security-sensitive thread is listed in a protected manner and will not appear in the status report. Consistent with the common visual interface 2300, the resource manager provides a screen-oriented, visual interface.
Shell scripts are an alternate means of defining the jobs for the resource manager. In the preferred embodiment, the resource manager uses the NQS-based algorithms that guarantee consistency of files across network configurations in the event of any system failure.
1.7 Network Support
Referring now to Fig. 22, the operating system networking environment 2800 is described. The block 2820 below the System V kernel 2810 represents the various physical access methods; most of them are High Performance Parallel Interface (HiPPI) adapters that are data links to front end networks such as UltraNet. The blocks 2830-2870 above the kernel represent user level features and facilities which utilize System V networking. The XDR and RPC modules 2860 and 2850 are included in this group. XDR 2860 encapsulates data so it can be understood by heterogeneous machines. RPC 2850 invokes procedures on remote machines. Together, these two modules provide a facility for distributing applications in a heterogeneous network environment.
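The encapsulation XDR performs can be illustrated with its simplest rule: a 32-bit integer always travels big-endian, so heterogeneous machines agree on byte order regardless of their native layout. A minimal sketch (function names are illustrative, not the XDR library's actual API):

```c
#include <assert.h>
#include <stdint.h>

/* XDR-style on-the-wire encoding of a 32-bit unsigned integer:
   most significant byte first, independent of host byte order. */
void xdr_put_u32(unsigned char *buf, uint32_t v)
{
    buf[0] = (unsigned char)(v >> 24);
    buf[1] = (unsigned char)(v >> 16);
    buf[2] = (unsigned char)(v >> 8);
    buf[3] = (unsigned char)v;
}

/* Decode the same representation back to a host integer. */
uint32_t xdr_get_u32(const unsigned char *buf)
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}
```

RPC builds on this: argument and result values are run through such encode/decode routines on each side of a remote call, which is what lets a procedure execute on a machine with a different architecture.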

Within the kernel 2810 are RFS and NFS. They provide transparent file access across a network. Both NFS and RFS come standard with AT&T Unix System V release 4.0. In addition to RFS, NFS, RPC, and XDR, there are the standard Berkeley networking utilities 2840 which depend upon the kernel's Berkeley Sockets facility. These include rlogin (remote login), rcp (remote copy), and inetd (the Internet daemon that watches for users trying to access the preferred multiprocessor and spawns various networking services accordingly).
Block 2830 represents networking utilities which depend upon the AT&T Unix Transport Level Interface (TLI) library. The TLI is the main interface between user level programs and streams based networking facilities. Two of the utilities in this block are FTP (File Transfer Protocol) and Telnet (the ARPANET remote login facility), both of which may also exist as Berkeley utilities and therefore run on top of the socket interface rather than the TLI interface. The network modules provide high level input/output support for the external device drivers. These modules supply the drivers with a list of transfers to perform. When the data arrives, the driver notifies the module and the data is then sent `upstream' to the user level.
1.8 Administrative and Operator Support
The administrative and operator support in the present invention includes support for accounting, security, administrative scheduling of the multiprocessor system, and support for the operators of the multiprocessor installation.
In the preferred embodiment of the present invention, there are two methods of accounting. One method of accounting is based on the time of processor assignment, using the real time clock. The second derives from a work metric calculated from the processor activity counts (memory references, instructions issued and functional unit results). Both accounting methods are needed to develop reproducible charges and to collect performance statistics that enable code optimization and system tuning. In addition to System V accounting, the present invention provides for dev/sessionlog and dev/systemlog reporting. Dev/sessionlog includes a history with accounting stamps, a unique record per login, batch run, shared image process group, and a history of shell-emitted transactions and other items of interest. Dev/systemlog includes a history with accounting stamps, unique per security level. The types of accounting stamps include: begin/end timestamp, all metrics, process name, some arguments, and the like.
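The work metric above is a function of activity counts rather than wall-clock time, which is what makes the resulting charges reproducible across runs. A hypothetical sketch; the weights here are illustrative assumptions, not the patent's actual charging rates:

```c
#include <assert.h>

/* Hypothetical work metric: a weighted sum of the three processor
   activity counts named in the text.  Unlike clock-based charging,
   the result is the same no matter how the job was scheduled. */
long work_metric(long mem_refs, long insts, long fu_results)
{
    const long w_mem = 2, w_inst = 1, w_fu = 3;   /* assumed weights */
    return w_mem * mem_refs + w_inst * insts + w_fu * fu_results;
}
```

A site would tune such weights so that the metric tracks real machine cost, then use clock-based accounting alongside it for scheduling statistics.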
Multilevel security is implemented at several levels: network security, user-level security, and administration utilities and commands to support security.
In addition to the standard administrative support available for any large computer processing system or supercomputer, the present invention provides for "fair share" scheduling and predictive, heuristic scheduling of processes to meet administratively defined utilization goals for the multiprocessor system, including: input/output, memory, remote procedure call (RPC), interactive response time, and the like.
The operator environment is a superset of the user's environment as described in greater detail hereinafter. It requires standard capabilities plus display of "operator" log(s) and a dynamic display replacement for "ps". Ps is a visual metaphor for the multiprocessor execution or run-time environment and includes multiwindow, point and click for more or less information with "real time" refresh. The displays include: queue displays, process displays, displays for a processor in a running process, memory displays, and resource allocation tables. The queue displays include: input, swapped (and why), running, and output. The process displays include status, resources assigned, resource status ids (path, size, position, function, ...), cluster and processor usage, memory size, the last few sessionlog entries, the current command, the next few commands (from a script and cooperating shell), and an "operator" message area. The displays for a processor in a running process include the PC register of the processor, the processor registers (hex and ascii) and the last system call. The memory displays include user areas, system areas and system tables. The resource allocation tables have investigative access showing all open files and all connection activities.
As with users and their own processes, operators require commands that provide the capability to change priorities and limits; suspend, abort, send an arbitrary signal, or checkpoint a process; change memory; set sense switches; communicate with a process requesting operator action or response; insert the "next" command (requires a cooperating shell); and insert comments in user/system log(s).

1.9 Guest Operating System Support
The present invention contains support for executing itself or other operating systems as a guest, i.e., as a user program. This capability includes the establishment of virtual processes and virtual external devices, as well as facilities to emulate the trap, interrupt and exception capabilities of the preferred multiprocessor system. The guest operating system support feature takes advantage of the facilities described in Sections 1.2.3, SSI/mOS Implementation of Processes; 1.4.3, Management of Main Memory; and 1.7, Network Support.

2.0 - PARALLEL USER ENVIRONMENT
The architecture of the present invention accomplishes an integrated hardware and software system. A major part of the software system is the programming environment package of integrated development tools designed to bring the full capabilities of the clusters 40 of the preferred multiprocessor to the programmer. Referring again to Figs. 8a and 8b, the four major components of the parallel user environment of the present invention are shown. The program management module 2100 controls the development environment for a source code file representing a software program for which parallel software code is desired. The compiler 2200 provides support for all of the features that allow parallelism by default to be implemented in the present invention. The user interface 2300 presents a common visual representation to one or more users of the status, control and execution options available for executing and monitoring the executable code file during the time that the executable code file is executed. User interface 2300 includes a common set of visual/icon functions, a common set of command line functions, and common tools for showing a dynamic view of user application program and system performance. The user interface 2300 also supports a limited subset of the online process scheduling functions to the user, and reconfiguration of shared resources and system parameters in coordination with the resource manager in the operating system 1000. Finally, the debugger 2400 allows for effective and distributed debugging of parallel program code.

2.1 Visual User Interface
Referring now to Figs. 23a and 23b, a pictorial representation of the programming environment as seen by a programmer is shown. The programming environment that comprises the visual user interface 2300 of the present invention provides for a common windowed interface to a ToolSet 2351, that is, a complete set of utilities that facilitate the production of efficient, bug-free software programs. The major tools in the set are available through windowed interfaces in a desktop environment accessible on a distributed network. In the preferred embodiment, the ToolSet 2351 organizes the tools into drawers. When open, a drawer displays the set of icons for the tools it contains. Programmers may organize the ToolSet 2351 as desired and may also add their own tool icons as desired. All the tools are also available with the command line interface.
In the embodiment shown in Fig. 23, a programmer may select any of the icons representing one of the tools of the ToolSet 2351 to open a window 2352 associated with that particular tool. The Filer icon 2353 allows the programmer to see his files in icon form. The Compiler icon 2354 allows the programmer to select compile options and compile the software program. The Compilation Advisor icon 2355 allows the programmer to interact with the compiler 2200 through an interactive interface that enables the programmer to conveniently supply additional optimization information to the module compiler 2200. The Compilation Advisor icon 2355 can also be used to display dependency information that is gathered by the compiler. Dependencies in a program inhibit optimization, and the programmer can use the Compilation Advisor to study the dependencies for possible removal. The program analyzer (not shown in the figure) gathers interprocedural information about an entire program which is used to support the optimization by the compiler 2200 and to check that procedure interfaces are correct.
The Defaults icon 2356 allows the programmer to set the various defaults for the ToolSet 2351, including invoking different levels of interprocedural analysis and selecting link options. For example, the programmer can select a default option to start a text editor if there is an error during compilation, with the text editor beginning where the compilation error occurred. The programmer might also tell the ToolSet to automatically start the debugger if an error occurs during program execution.
The Debugger icon 2357 invokes the debugger 2400. The Performance Analyzer icon 2358 allows the programmer to optionally collect performance data on the execution of a particular software program. The Help icon 2359 invokes a help menu that provides online assistance and documentation for the programmer. The Graphtool icon 2360 allows the programmer to display information in graphical form. The programmer can use the CallGraph feature as described in greater detail hereinafter to control the interprocedural analysis done on a program. The Debugger icon 2357 can be used to visualize a representation of the threads that have been created in the program and to monitor the processes and processors that are executing the program. An assembler tool (not shown) enables the programmer to use a symbolic language to generate object code. Finally, the Resource Mgr icon 2361 allows the programmer to identify the system resources that will be required by the software program. This information is also used by the operating system 1000 as previously described. A programmer can also use a command line interface to perform these activities without using the icons for ToolSet 2351.
After editing a program, the programmer can compile, link, and then execute a program by selecting the appropriate icon. The programmer may tailor the ToolSet 2351 to his or her particular needs.
The ToolSet 2351 builds on existing System V utilities such as the macro processor and text editors. The ToolSet also allows the programmer to create separate input windows and output windows for the program. This is useful when a program generates a large amount of output and the user needs to see the program input. Traditionally, missing procedures are not detected until a program is linked. The programmer can use the Program Analyzer (not shown) to determine if all procedures are available before trying to link the program.
Referring now to Fig. 24, the preferred design of the ToolSet 2351 as built on top of standard software is shown. The ToolSet 2351 features are implemented in an OPEN LOOK-style user interface based on the X Window System available from MIT. The parallel user environment as implemented through the ToolSet 2351 is integrated according to the Inter-Client Communication Conventions Manual (ICCCM) specification with a limited number of extensions that are described elsewhere.
2.2 Program Management

The program management module 2100 controls modifications to source code files that comprise a software program. The software program may result in either serial or parallel software code. A fine level of control of modifications to source code files is desirable for an efficient development environment. This is especially true for large source code programs. For example, every time a one-line change is made to a source code program, it would be very inefficient to recompile and relink the entire source code program. The program management module 2100 interacts with the compiler 2200 and the IPA 2700 to determine which procedures in the source code have been changed and/or affected by a change and, as a result, which procedures will need to be recompiled, reoptimized and/or relinked. In this sense, the program management module 2100 is similar to the make utility in System V, except that the control of recompilation is on the procedure level instead of on the file level.
Interprocedural optimization introduces dependencies among procedures and makes minimum recompilation a concern. For example, the module compiler uses information about procedure B when compiling procedure A. Later, procedure B is modified and recompiled. Does module A also need to be recompiled? The compilation system keeps information about procedure dependencies and recompiles procedure A only when needed. Recompiling only the necessary set of procedures saves time.
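The A-and-B example above can be sketched as a dependency table: record which procedures were compiled using interprocedural information about which others, then mark the affected set when one changes. This is a one-level hypothetical illustration; the patent's actual compilation database is not described at this granularity:

```c
#include <assert.h>
#include <string.h>

/* dep[i][j] == 1 records that procedure i was compiled using
   interprocedural information about procedure j. */
enum { NPROC = 8 };
int dep[NPROC][NPROC];

void compiled_using(int i, int j) { dep[i][j] = 1; }

/* After procedure `changed` is modified, mark every procedure that must
   be recompiled (the changed one plus its direct dependents) and return
   how many there are.  A full system would also chase transitive effects. */
int mark_recompile(int changed, int *need)
{
    int count = 0;
    memset(need, 0, NPROC * sizeof *need);
    need[changed] = 1;
    for (int i = 0; i < NPROC; i++)
        if (dep[i][changed]) need[i] = 1;
    for (int i = 0; i < NPROC; i++) count += need[i];
    return count;
}
```

Keeping this table per procedure, rather than per file, is exactly what distinguishes the program management module from the System V make utility.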
The programmer can use the interprocedural assembler support tool of the program management module 2100 to add interprocedural information about assembly language programs. This information includes the number and type of formal parameters and the local use and definition of formal parameters and global variables.
The program composer of the program management module 2100 aids a programmer in maintaining different versions of a program. As discussed above, use of interprocedural information for optimization introduces dependencies among procedures. To generate correct executable programs, the correct versions of all procedures must be linked into that program. This introduces the need to uniquely identify different versions of a procedure. The program composer makes this version control available to the programmer for the maintenance of different versions of a program.


2.3 Compiler
Referring now to Fig. 25a, the compiler 2200 of the present invention will be described. A plurality of front-end modules interface the compiler 2200 with a variety of presently available programming languages. The preferred embodiment of the compiler 2200 provides a C front end 2701 and a Fortran front end 2702. The front ends 2701 and 2702 generate a representation of the source code in a single common intermediate language referred to as HiForm (HF) 2703. The HF 2703 is used by the optimizer 2704 and the code generator 2705. The optimizer 2704 performs standard scalar optimizations, and detects sections of code that can be vectorized or automatically threaded and performs those optimizations. Fig. 25b is a pictorial representation of a common user interface to the compiler 2200.
2.3.1 Front Ends
The C compiler front-end 2701 is based on the ANSI X3.159-1989 C language standard. Extensions to the C compiler front-end 2701 provide the same functions to which System V programmers are accustomed in other C compilers. Additional extensions, in the form of compiler directives, benefit CPU-intensive or large engineering/scientific applications. The C compiler front-end 2701 performs macro processing, saving the definitions of macros for debugging.
The Fortran compiler front-end 2702 is based on ANSI Fortran 77 and contains several extensions for source compatibility with other vendors' Fortran compilers. All extensions can be used in a program unless there is a conflict in the extensions provided by two different vendors. Because the C and Fortran compiler front-ends share the optimizer 2704 and back end, the programmer may easily mix different programming languages in the same application. Compiler front-ends for additional languages can conveniently be added to the compiler 2200 and will share the optimizer 2704 with existing compiler front-ends.

2.3.2 Parsing
Parsing determines the syntactic correctness of source code and translates the source into an intermediate representation. The front ends parse the source code into the intermediate language HiForm (HF). The parsers in the present invention utilize well known methods of vector parsing tables, including optimized left-right parsing tables specifically adapted for the preferred multiprocessor system executing the software architecture of the present invention.
2.3.3 HiForm (HF) Intermediate Language
The objective of the front-ends 2701 and 2702 is to produce a representation of the source code for a software program in a common intermediate language referred to as HiForm (HF).
One of the central components of HF is the Definition-Use Dependencies (DUDes). Definition-use information relates a variable's definition to all the uses of the variable that are affected by that definition. Use-definition information relates a variable's use to all the definitions of the variable that affect that use. Definition-definition information relates a variable's definition to all definitions of the variable that are made obsolete by that definition. The present invention incorporates definition-use, use-definition and definition-definition information for single and multiple word variables, equivalenced variables, pointers and procedure calls (including all potential side effects) into a single representation (DUDes) that is an integral part of the dependence analysis done for vectorization and multithreading.
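For straight-line code, the definition-use idea above reduces to linking each use of a variable back to its most recent reaching definition. A toy sketch in that spirit (the struct and function are hypothetical; real DUDes also cover branches, equivalenced variables, pointers, and call side effects):

```c
#include <assert.h>

/* A statement either defines a variable or uses it.  After link_du runs,
   each USE records the index of the DEF that reaches it (-1 if none). */
enum { DEF, USE };
struct stmt { int kind; char var; int reaches; };

void link_du(struct stmt *s, int n)
{
    int last_def[256];                      /* last DEF index per variable */
    for (int i = 0; i < 256; i++) last_def[i] = -1;
    for (int i = 0; i < n; i++) {
        if (s[i].kind == USE)
            s[i].reaches = last_def[(unsigned char)s[i].var];
        else
            last_def[(unsigned char)s[i].var] = i;  /* new reaching DEF */
    }
}
```

A later DEF of the same variable superseding an earlier one is the definition-definition relation the text describes; here it appears implicitly as the overwrite of `last_def`.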
2.3.4 Optimizer
The optimizer 2704 improves the intermediate HF code 2703 so that faster-running object code will result, by performing several machine-independent optimizations. The optimizer 2704 performs aggressive optimizations, which include automatic threading of source code, automatic vectorization of source code, interprocedural analysis for better optimizations, and automatic in-lining of procedures.
The optimizer 2704 performs advanced dependence analysis to identify every opportunity for using the vector capabilities of the preferred multiprocessor system. The same dependence analysis is used to do multithreading, which makes it possible to concurrently apply multiple processors to a single program. The optimizer also applies a wide range of scalar optimizations to use the scalar hardware in the most efficient manner. Scalar loop optimizations, such as strength reduction, induction variable elimination, and invariant expression hoisting, are performed on loops that cannot be vectorized or automatically multithreaded. Global optimizations are performed over an entire procedure. They include:
propagation of constants, elimination of unreached code, elimination of common subexpressions, and conversion of hand-cn:led IF loops to 5 structured loops. In-lining of procedur@s ~u~omatically pulls small, frequent~y used procedures inline to eliminate procedure call overhead.
The process of translating the intermediate HF code to machine-dependent instructions also performs machine-dependent optimizations. These optimizations attempt to make optimum use of registers, such as keeping the most commonly used variables in registers throughout a procedure. Other optimizations are as follows. The instruction scheduler seeks to simultaneously use the multiple functional units of the machine and minimize the time required to complete a collection of instructions. Linkage tailoring minimizes procedure call overhead. Post-scheduling pushes back memory loads as early as possible and performs bottom loading of loops. Loop unrolling duplicates the body of the loop to minimize loop overhead and maximize resource usage.
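The loop unrolling transformation just described can be shown by hand on a simple reduction (the compiler would perform it on the intermediate form, not on C source; the 4x factor is an illustrative choice):

```c
/* Original loop: one add and one loop test per element. */
static double sum_rolled(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Body duplicated four times: less loop overhead, and four
   independent accumulators for the scheduler to interleave. */
static double sum_unrolled(const double *a, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)       /* remainder iterations */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
```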
Optimization is a time- and space-intensive process, even when using efficient algorithms. Selected parts of optimization may be turned off to provide some of the benefits without all of the cost. For example, performing vectorization does not require performing scalar global optimization. However, without the global transformation, some opportunities for vectorization may be missed. Or, in situations where it is necessary to have quick compilation, the optimization phase may be skipped by using a command line option. However, the execution time of the user's program will be greater.

2.3.4.1 Scalar Optimizations
Scalar optimization reduces execution time, although it does not produce the dramatic effects in execution time obtained through vectorization or automatic multithreading. However, the analysis and transformation of scalar optimization often increase the effectiveness of vectorization and automatic multithreading. The basic unit of scalar optimization is called the basic block, as shown in Figs. 26a and 26b. The basic block is a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end without halt or possibility of branching except at the end. This segment of code can be entered at only one point and exited at only one point. Local (or basic block) optimizations are confined to a basic block. Global optimizations have a scope of more than one basic block.

2.3.4.2 Control Flow Graph
A control flow graph indicates the flow of control between basic blocks in the program unit. Once the basic blocks in the program have been formed and the control flow connections have been indicated in the control flow graph, further optimization processing can take place. Fig. 27a is a pictorial representation of a control statement in HiForm and Fig. 27b is the call graph of a program.
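A minimal sketch of such a graph, assuming nothing about the actual HF structures: basic blocks as nodes and successor edges recorded per block.

```c
/* Illustrative control-flow-graph node: a basic block plus its
   successor edges. Here a block ends in at most a two-way branch. */
#define MAX_SUCC 2

typedef struct Block {
    int id;
    struct Block *succ[MAX_SUCC];
    int n_succ;
} Block;

/* Record a control-flow edge from one block to another. */
static void add_edge(Block *from, Block *to)
{
    if (from->n_succ < MAX_SUCC)
        from->succ[from->n_succ++] = to;
}

/* Two successors mean the block ends in a conditional branch;
   one successor is a fall-through or unconditional jump. */
static int is_branch(const Block *b) { return b->n_succ == 2; }
```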

2.3.4.3 Local Optimizations
The local optimizations that are performed by the optimizer 2220 are listed. The order of these optimizations does not imply an implementation order.
Common Subexpression Elimination - If there are multiple identical expressions whose operands are not changed, the expression can be computed the first time. Subsequent references to that expression use the value originally computed.
Forward Substitution - If a variable is defined within the block and then referenced without an intervening redefinition, the reference can be replaced with the right-hand side (RHS) expression of the definition.
Redundant Store Elimination - If a variable is defined more than once in a basic block, all but the last definition can be eliminated.
Constant Folding - If the values of the operands of an expression are known during compilation, the expression can be replaced by its evaluation.
Algebraic Simplifications - There are several algebraic simplifications; for example, removing identity operations, or changing exponentiation to an integer power into multiplies. Other simplifications (e.g., changing integer multiplies to shifts and adds) are performed by the code generator.
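Two of the local optimizations above, applied by hand to straight-line code (the compiler performs them on the HF representation, not on source):

```c
/* Before: the subexpression (b * c) is computed twice. */
static int before(int b, int c)
{
    int x = b * c + 1;
    int y = b * c + 2;
    return x + y;
}

/* After common subexpression elimination and constant folding:
   b * c is evaluated once, and the constants 1 + 2 fold to 3. */
static int after(int b, int c)
{
    int t = b * c;     /* single evaluation of the common subexpression */
    return 2 * t + 3;  /* x + y = (t + 1) + (t + 2) */
}
```

Both versions compute the same value; the second performs one multiply instead of two.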

2.3.4.4 Global Optimizations
Like local optimization, a brief description of the global optimizations performed by the optimizer 2220 is set forth. Again, an implementation order is not implied by the order given below.
Transformation of IF Loops to HF Loops - The control flow graph is analyzed to find loops constructed from IF and GOTO statements. If possible, these loops are transformed into the same HiForm representation as DO loops have. This is not a useful scalar optimization, per se; it increases the opportunities for vectorization and automatic multithreading.
Constant Propagation - If a variable is defined with the value of a constant, then when that variable is referenced elsewhere in the program, the value of the constant can be used instead. This is, of course, only true on paths on which the variable has not been redefined.
Dead Store Elimination - If a variable is defined, but it is not an output variable (e.g., a dummy argument (formal parameter), common variable, or saved variable), its final value need not be stored to memory.
Dead Code Elimination - Code whose result is not needed may be eliminated from the intermediate HF text. This can be as little as an expression or as much as a basic block.
Global Common Subexpression Elimination - Global common subexpressions are like local common subexpressions except the whole program graph is examined, rather than a single basic block.
Loop Invariant Expression Hoisting - An expression inside a loop whose operands are loop invariant may be calculated outside the loop. The result is then used within the loop. This eliminates redundant calculations inside the loop.
Induction Variable Strength Reduction - Induction variables, whose values are changed in the loop by a multiplication operation, may sometimes be calculated by a related addition operation. Generally, these kinds of induction variables are not found in the source code, but have been created to perform address calculations for multidimensional arrays.
Induction Variable Elimination - If there is an induction variable I within a loop and there is another induction variable J in the same loop, and each time J is assigned, its value is the same linear function of the value of I, it is often possible to use only one induction variable instead of two. Again, this kind of situation most frequently arises due to address calculation.
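Induction variable strength reduction, shown by hand on the kind of address calculation described above (the compiler would apply it to generated address arithmetic, not to user source):

```c
/* Before: the index i * stride is recomputed with a multiply on
   every iteration. */
static long sum_mul(const long *a, int n, int stride)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i * stride];
    return s;
}

/* After: a second induction variable advances by addition, so the
   multiply in the loop body disappears. */
static long sum_add(const long *a, int n, int stride)
{
    long s = 0;
    long idx = 0;            /* replaces i * stride */
    for (int i = 0; i < n; i++) {
        s += a[idx];
        idx += stride;       /* strength-reduced update */
    }
    return s;
}
```

Induction variable elimination could then remove `i` entirely by testing `idx` against `n * stride`.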

2.3.4.5 Vectorization
During vectorization, loops are analyzed to determine if the use of vector instructions, rather than scalar instructions, may change the semantics of the loop. If there is no change to the loop's semantics, the loops are marked as vectorizable. Some obvious constructs make loops nonvectorizable, such as calls with side effects and most input/output operations. Subtle, less obvious constructs that make loops nonvectorizable are recurrences. A recurrence occurs when the computation of the value of a data item depends on a computation performed in an earlier iteration of the loop.
The term "dependence" is used in the vectorization and automatic multithreading sections. A dependence is a constraint on the order of two data item references (use or definition). When two elements of the same array are referenced, the subscripts must be analyzed to determine dependence.
Statement Reordering - Certain recurrences may be eliminated if the order of the statements in the loop is changed. Statements may be reordered only if the change maintains the loop's semantics. The loop shown in Fig. 26b may be vectorized if the statements are reordered.
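The distinction between a recurrence and an independent loop can be illustrated directly (the specific loops are examples, not taken from the patent's figures):

```c
/* A recurrence: each iteration reads the value written by the
   previous iteration, so the loop cannot be vectorized as-is. */
static void recurrence(double *a, int n)
{
    for (int i = 1; i < n; i++)
        a[i] = a[i - 1] + 1.0;   /* depends on the prior iteration */
}

/* No recurrence: every iteration is independent, so dependence
   analysis can mark this loop vectorizable. */
static void independent(double *a, const double *b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = b[i] + 1.0;
}
```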
Loops with IF Statements - The presence of one or more IF statements in a loop shall not by itself inhibit vectorization of the loop. (Note that this includes loops with multiple exits.) The user will be able to inform the compiler that the IF construct can be more efficiently run with masked, full-VL operations, or compressed operations.
Partial Vectorization/Loop Splitting - Loops that cannot be vectorized as a whole can sometimes be split into several loops, some of which can be vectorized. Loop splitting is the term often used when entire statements are moved into a vector or scalar loop. Partial vectorization is the term generally used when parts of statements are moved into the vector or scalar loop.
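Loop splitting shown by hand (an illustrative example; here the reduction is kept in its own loop so the other statement can be fully vectorized):

```c
/* Before: one loop mixes an independent, vectorizable statement
   with a reduction. */
static double fused(double *a, const double *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        a[i] = b[i] * 2.0;   /* independent: vectorizable */
        s += a[i];           /* reduction */
    }
    return s;
}

/* After splitting: the independent statement gets its own loop,
   which can be vectorized; the reduction is handled separately. */
static double split(double *a, const double *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)   /* vector loop */
        a[i] = b[i] * 2.0;
    for (int i = 0; i < n; i++)   /* reduction loop */
        s += a[i];
    return s;
}
```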
Loop Reordering - Within a loop nest, loops (i.e., DO statements) may be reordered to provide better performance. The dependence of a particular subscript expression is due to a certain loop. An innermost loop may have a recurrence and be unvectorizable, but if an outer loop is moved inward, the recurrence may disappear, allowing vectorization. Of course, all dependencies must be preserved when loops are reordered.

Loops may also be reordered when an inner loop has a short vector length, or to allow better vector register allocation. This last reason is getting rather machine-dependent, but will provide significant performance improvements on loops like a matrix multiply.
Although a reduction is a recurrence, reductions may be vectorized to some extent. A reduction is really handled by partial vectorization, but deserves special mention because it has been a special case for so many compilers.

2.3.4.6 Automatic Multithreading
Analysis of loop dependencies determines if a loop must be run sequentially or not. If a loop does not have to be run sequentially, it can be run equally well using vector instructions or multiple processors, although synchronization may be required when multiple processors are used.
Vectorization, rather than automatic multithreading, will be chosen for inner loops because the loop will execute faster. An outer loop of a vector or a scalar loop will be autothreaded if the dependence analysis allows it and if there seems to be enough work inside the loop(s). Exactly how much is enough is dependent on the loop and the automatic multithreading implementation. The faster the processes can be created, the more loops that will benefit from automatic multithreading.
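The inner-loop/outer-loop division of labor described above can be sketched on a simple nest (a sequential illustration; the actual thread creation and work division are performed by the compiler and runtime, not shown here):

```c
/* Scale every row of a rows x cols matrix by f. */
static void scale_rows(double *a, int rows, int cols, double f)
{
    /* Outer loop: candidate for automatic multithreading, since
       each row is independent and can go to a different CPU. */
    for (int r = 0; r < rows; r++) {
        /* Inner loop: candidate for vectorization, since the
           iterations over one row are independent element-wise. */
        for (int c = 0; c < cols; c++)
            a[r * cols + c] *= f;
    }
}
```

Whether the outer loop is actually autothreaded depends, per the text, on the dependence analysis and on whether there is enough work per row to repay the cost of creating the processes.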

2.3.4.7 Intrinsic Functions
This section sets forth the functional specification for the Intrinsic Functions Library, a set of routines that are "special" to the compiler. Code for some of these routines is generated inline by the compiler, others are called with parameters in registers, and still others may be called with the standard linkage conventions. For access from Fortran, any one of the intrinsics is available simply by making a call. For access from C, any of the intrinsics is available through the directive: #pragma ssi intrinsic (name), where name is the specific name of the desired intrinsic. It will be noted that the names of many of the standard C mathematical functions agree with the specific names of intrinsics. In all such cases, the C math function and the corresponding intrinsic are implemented by the same code sequence, so identical results will be obtained whether the C math function or the intrinsic is called.

2.3.4.8 Register Assignment and Instruction Scheduling Integration
Integration of register assignment and instruction scheduling is also available as an optimization. The instruction scheduler notes when the number of available registers drops below a certain threshold, and schedules instructions to increase that number of registers. This is done by looking forward in the resource dependence graph and picking the instruction sequence which frees the most registers.
The instruction scheduler tries to reorder the instruction sequence in order to maximize the usage of the various functional units of the machine. Blindly chasing this goal causes the scheduler to lengthen the lifetimes of values in registers, and in some cases causes a shortage of registers available for expression evaluation. When the number of free registers available reaches a low threshold, the instruction scheduler stops reordering to maximize pipeline usage and begins to reorder to free the most registers. This is done by examining potential code sequences (as restricted by data dependencies) and choosing sequences that free the greatest number of registers.
The instruction scheduler and the look-ahead scheduler described in Section 2.3.4.9 also use the mark instructions of the preferred embodiment of the processor to schedule work to be done during the time that the processor would otherwise be waiting for a synchronization to occur. Unlike prior art schemes for marking data as unavailable until a certain event occurs, the Data Mark mechanism of the preferred embodiment separates the marking of a shared resource 12 (mark or gmark) from the wait activity that follows (waitmk). This separation allows for the scheduling of non-dependent activity in the interim, thereby minimizing the time lost waiting for marked references to commit.
2.3.4.9 Look Ahead Scheduling
Vector instructions are scheduled according to their initiation times, while scalar instructions are scheduled with respect to their issue times. Even though a vector instruction may issue immediately, the scheduler may delay its issue to minimize the interlocks caused by the init queue.
In the preferred embodiment of the processor 10, vector instructions can be issued before a functional unit for the vector instruction is available. After a vector instruction is issued, instructions following it can be issued. A vector instruction that has been issued is put into a queue until it can be assigned to a functional unit. The instructions in the "init queue" are assigned to a functional unit on a first in, first out basis. When the instruction is assigned to a functional unit, it is said to be initialized.
An instruction may be held in the init queue due to a variety of hardware interlocks. Once the queue is full, no more vector instructions can be initialized to that functional unit. The vector instruction scheduler records the depth of the initialization queue and the interlocks that may cause the instruction to hold initialization, and delays the issue of a vector instruction if the vector instruction cannot be initialized on a functional unit until it can initialize without delay.

2.3.4.10 Pointer Analysis
Pointer analysis is performed for vectorization and parallelization dependence analysis of all forms of pointers, including those within a structure or union, as well as global pointers, subroutine call side effects on pointers, non-standard pointer practice, and directly/indirectly recursive pointers. Pointers are a type of object that is used to point to another object in a program. Dereferencing a pointer references the object to which the pointer points. In the most general case, the dereference of a pointer can reference any object in the program. Dependence analysis for vectorization and parallelization requires information about what objects are being referenced (typically within a loop). Without information about the objects to which a pointer points, the dereference of a pointer must be considered a reference to any object and thus inhibits vectorization and parallelization because the dependence information is imprecise. Pointer analysis attempts to determine which object or objects a pointer points to so as to provide more precise information for dependence analysis.
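The aliasing problem pointer analysis must solve can be seen in a small example. (The `restrict` qualifier shown below is a later C99 feature, not part of this disclosure; it is used here only to illustrate the kind of no-alias fact that pointer analysis tries to derive automatically.)

```c
/* If 'p' may point into 'a', the loop carries a hidden dependence:
   a store to a[i] could change *p, ordering the iterations. Without
   pointer analysis the compiler must assume this and not vectorize. */
static void maybe_aliased(double *a, double *p, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = *p + 1.0;
}

/* With a proven (or asserted) no-alias fact, *p is loop invariant
   and every iteration is independent, so the loop can vectorize. */
static void no_alias(double *restrict a, const double *restrict p, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = *p + 1.0;
}
```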
2.3.4.11 Constant Folding
Constant folding and algebraic simplification with intrinsic function evaluation are also performed by the present invention. Constant folding and algebraic simplification are done together so that expressions such as 5 * ( x + 12 ) are simplified and folded to 5 * x + 60. Intrinsic functions involving constants are also simplified. The invention for the constant folding and algebraic simplification relies on the internal representation of the compilation unit which is being compiled (HiForm or HF). This internal representation represents program statements and expressions as trees and potentially as DAGs (directed acyclic graphs).
The constant folder and algebraic simplifier are combined into a single module, which runs down the tree or DAG using recursive descent, and works bottom up, folding each expression as it rewinds itself back up to the root of the statement/expression. As an example, the statement i = 2 + j + 6 + k - 15 would be represented in tree form (root of the tree at the top) as shown in Fig. 28a. The tree would be algebraically simplified/folded as shown in Figs. 28b - 28e.
Among the things that are simplified/folded by this optimization pass are all expressions and operators written in 'C' or 'FORTRAN77' which involve constant arguments. These are folded and many algebraic simplifications are performed. This gives the front-ends the added flexibility of allowing all of these potentially folded operators to appear in statements or declarators where a constant expression is required (e.g., data statements, parameter statements, auto initializers, etc.).
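The recursive-descent, fold-on-rewind scheme described above can be sketched on a tiny expression tree. This sketch handles only two operators and no algebraic simplification or reassociation (so it is far simpler than the HF folder, which also simplifies forms like 5 * (x + 12)); the node layout is an assumption.

```c
typedef enum { CONST, VAR, ADD, MUL } Kind;

typedef struct Node {
    Kind kind;
    int value;                  /* valid when kind == CONST */
    struct Node *left, *right;
} Node;

/* Descend to the leaves first, then fold on the way back up:
   if both operands became constants, the operator node is
   replaced in place by its evaluation. */
static void fold(Node *n)
{
    if (n->kind == CONST || n->kind == VAR)
        return;
    fold(n->left);
    fold(n->right);
    if (n->left->kind == CONST && n->right->kind == CONST) {
        n->value = (n->kind == ADD)
                 ? n->left->value + n->right->value
                 : n->left->value * n->right->value;
        n->kind = CONST;
    }
}
```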

2.3.4.12 Path Instruction
The scheduler estimates issue, initialize, and go times of all vector instructions. It inserts a "path" instruction before a vector instruction which would normally dependent-initialize on one functional unit but would have an earlier go on another unit if it did not dependent-initialize on the first unit. The architecture of the processor 10 has multiple functional units of the same kind. When a vector instruction is issued and that instruction can execute on more than one functional unit, the vector instruction is normally initialized on the functional unit that least recently initialized a vector instruction. The path instruction will steer a vector instruction to initialize on a particular functional unit. The vector scheduler inserts a path instruction when it determines that a vector instruction would normally dependent-initialize on one functional unit but would actually start earlier on another functional unit and therefore should be steered to that latter functional unit.

2.3.4.13 Variable to Register Mapping
Ranges during which the value of a variable is kept in a register (as opposed to the memory location of the variable) are maintained by the compiler for use by the debugger. This provides the debugger with the location of the current value of a variable. In each basic block in a procedure, a variable is assigned to at most one register. For each basic block the compiler keeps a list (logically) of variables and associated registers. This information is generated as part of the local register allocation phase of the compile and is kept for the debugger.

2.3.4.14 Interprocedural Analysis
When the compiler is processing a procedure, there may be calls to other procedures. In the traditional software environment, the compiler has no knowledge of the effects of these other (or called) procedures. Without such knowledge, the compiler is forced to assume the worst and inhibit many optimizations that are safe. Interprocedural analysis (IPA) is the collection and analysis of procedure information. The results of this analysis allow the compiler to optimize across called procedures. Certain optimizations can benefit from interprocedural analysis. With the use of IPA information, the number of instances where an optimization can be applied should be increased. The optimizations that can benefit from IPA include: common subexpression elimination, forward substitution, redundant store elimination, constant folding, constant propagation, dead code elimination, global common subexpression elimination, vectorization and automatic multithreading.
In addition, for each procedure in a program, IPA collects a list of defined or used global variables and counts how many times each variable is defined or used. IPA sums the number of defines and uses of the global variables and sorts them into the order of most frequent use. The most frequently used variables can then be allocated to L registers. The registers for a called procedure are offset from the calling procedure to reduce the number of register saves and restores in a procedure call.
There are two types of interprocedural analysis that are well known in the prior art: exhaustive and incremental. For exhaustive analysis, the call graph is formed from information in the object code files and analyzed. This is the "start from scratch" analysis. For incremental analysis, the call graph and analysis are assumed to exist from a previous link of the program, and a small number of modified procedures are replaced in the call graph. This is the "do as little work as possible" analysis.

In the traditional System V environment, a programmer can modify a procedure, compile, and link a program without having to recompile any other procedures, since no dependencies exist between procedures. In an IPA environment, dependencies exist between procedures, since procedures are basing optimizations upon knowledge of how called procedures behave. Hence, when a called procedure is modified and recompiled, a calling procedure may also need to be recompiled. This problem is solved by recompiling a procedure when any of the procedures it calls has changes in its interprocedural information.

2.3.5 Compilation Advisor
The compilation advisor 2340 as shown in Fig. 23 functions as an interface between the programmer and the module compiler. It allows the module compiler to ask the programmer optimization-related questions. The module compiler identifies the information that it needs and formulates questions to ask the programmer. The module compiler saves these questions so the programmer can address them through the compilation advisor 2340. The compilation advisor 2340 relays the programmer's answers back to the module compiler.
A second role of the compilation advisor 2340 is displaying dependence information so the programmer can attempt to eliminate dependencies. Dependencies among expressions in a program inhibit vectorization and parallelization of parts of the program. Eliminating dependencies enables the module compiler to generate more efficient code. When there are no transformations that the compiler can do to eliminate a dependence, the programmer may be able to change the algorithm to eliminate it.

2.4 Debugger
The debugger 2400 is an interactive, symbolic, parallel debugger provided as part of the parallel user environment. The debugger 2400 contains standard features of debuggers that are commonly available. These features enable a programmer to execute a program under the control of the debugger 2400, stop it at a designated location in the program, display values of variables, and continue execution of the program.

The debugger 2400 has several unique features. The combination of these innovative capabilities provides the user functionality not generally found in other debuggers. The debugger 2400 has two user interfaces. The first, a line-oriented interface, accepts commands familiar to System V users. The second interface, comprised of windows, is designed to minimize the learning required to use debugger 2400's capabilities. As shown in Fig. 29, multiple windows display different types of information. Windows also provide flexible display and control of objects in a debugging session and a means for visualizing data graphically.
As shown schematically in Fig. 30, the software architecture of the present invention maintains the information necessary to display high-level language source, for the segment of the program being debugged, in a number of environments (e.g., Machines A, B and C). The compilation system creates a mapping of the high-level program source code to machine code and vice versa. One of several capabilities of the debugger 2400 not found in other debuggers is source-level debugging of optimized code. The optimizations that can be applied and still maintain source-level debugging include dead-code elimination, code migration, code scheduling, vectorization, register assignment and parallelization.
The debugger 2400 supports debugging of parallel code. A display of the program's dynamic threading structure aids the user in debugging parallel-processed programs. The user can interrogate individual threads and processes for information, including a thread's current state of synchronization. Other commands display the status of standard synchronization variables such as locks, events, and barriers. The debugger 2400 provides additional capabilities. For example, a programmer can set breakpoints for data and communication, as well as code. Macro facilities assign a series of commands to one command. Control statements in the command language allow more flexibility in applying debugger commands. Support for distributed processes enables the programmer to debug codes on different machines simultaneously. Numerous intrinsic functions, including statistical tools, aid the programmer in analyzing program data. The debugger 2400's support of language-specific expressions allows familiar syntax to be used.
2.4.1 Distributed Debugger Design
Distributing the functionality of the debugger into unique server processes localizes the machine-dependent parts of the debugger 2400 to those unique server processes. The debugger 2400 is a distributed debugger consisting of a Debugger User Interface (DUI) plus a Symbol Server (SS) and a Debug Host Server (DHS). The DUI parses the commands from the user and creates an internal representation for the commands. It then interprets the commands. DUI uses SS to get symbol information about symbols in the process being debugged. Symbol information includes type information and storage information about symbols in the program being debugged. DUI uses DHS to interact with the process executing the program being debugged. Interaction with the process includes starting and stopping the execution of the process, reading the registers in the process, and reading the memory image of the process. The distributed nature of the debugger aids the debugging of distributed applications (applications that run on a distributed computer network). For example, an application running on 3 computers would have 3 instances of DHS running on the 3 computers to interact with the three different parts of the application. DUI communicates with SS and DHS via remote procedure calls.
2.4.2 Use of Register Mapping for Debugger
The debugger 2400 uses the list of variable and register pairs associated with each basic block to determine which register holds the live value of a variable. Each variable resides in only one register in each basic block. The variable that is held in a register in a basic block either enters the basic block in that register or is loaded into that register during execution of the basic block. If the variable is already in the register upon entry to the basic block, then its value is readily known from the variable-register pairs maintained by the compiler. If the variable is not in a register upon entry to the basic block, the debugger 2400 examines a sequence of code in the basic block to determine at what point the variable is loaded into the register.
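The per-basic-block (variable, register) list the debugger consults might be sketched as follows; the field names and linear search are illustrative assumptions, not the disclosed format.

```c
#include <string.h>

/* One entry of a basic block's variable-to-register list. */
typedef struct {
    const char *var;
    int reg;                /* register holding the live value */
} VarReg;

/* Look up which register holds 'var' in this basic block;
   -1 means the live value is in memory, not a register. */
static int find_register(const VarReg *map, int n, const char *var)
{
    for (int i = 0; i < n; i++)
        if (strcmp(map[i].var, var) == 0)
            return map[i].reg;
    return -1;
}
```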

2.4.3 Mapping Source Code to Executable Code
A mapping of the source code to its executable code (generated from the source code) and a mapping of the binary code to its source code are maintained to aid in debugging optimized code. The mappings allow setting of breakpoints in the source code and mapping the breakpoints to the binary code. They also allow recognition of the source code associated with each binary instruction.
The compilers translate their respective source programs into an intermediate form called HiForm or HF. Contained within the HF is the source file address of the source code that translated into that HF. The source file address contains the line number for the source expression, the byte offset from the beginning of the file to the source expression, and the path name to the source file. The HF is translated to LoForm or LF. The LF is a second intermediate form that maps closely to the instruction set of the preferred embodiment of the processor. The HF maps directly to the LF that it generates. The LF contains a relative location of the binary instruction corresponding to the LF.
The debugger matches the line number of a line of source with a source file address in the HF. The HF points to its corresponding LF and the LF points to the corresponding location of its binary instruction.
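The source file address carried by the HF, per the three fields named above, might be represented like this (field names are assumptions; the text specifies only what the record contains, not its layout):

```c
/* The three components of a source file address as described in
   the text: line number, byte offset into the file, and path. */
typedef struct {
    int line;               /* line number of the source expression   */
    long byte_offset;       /* offset from the beginning of the file  */
    const char *path;       /* path name to the source file           */
} SourceFileAddr;
```

A breakpoint request on a source line would be resolved by finding the HF node whose `SourceFileAddr` matches, following it to its LF, and from the LF to the binary instruction's location.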

2.4.4 Debugging Inlined Procedures
The compiler 2200 provides debugging support for in-lined procedures by marking the HF for any procedure that has been in-lined and creating a table of source file addresses where in-lining has been done.
The process of debugging is made more difficult by procedure in-lining. One problem is that the source code for the program no longer reflects the executing code for the program, because calls to procedures have been replaced by the code for the procedure. A second problem is that debugging the code in the procedure that has been in-lined is complicated by the fact that the code exists in multiple places (wherever it has been in-lined, and potentially as a non-in-lined instance of the procedure). To overcome these difficulties, in-lining in the present invention: 1) sets a bit in every statement node that was created due to some form of in-lining (and also sets a field in the statement node to point to the specific call site that was in-lined); and 2) creates a list of source file addresses where in-lining has taken place per procedure being in-lined, and attaches that list to the procedure definition symbol.
2.4.5 Dual Level Parsing
The parser associated with the debugger 2400 consists of multiple parsers, including: a debugger command language parser, a C expression parser, and a Fortran expression parser. The command language parser executes until it recognizes that the next items to be parsed are expressions in either C or Fortran. The command language parser knows when to expect an expression by the structure of the command language. It knows which language expression to expect because of the language flag that specifies what language is being processed. The command language parser then calls either the C expression parser or the Fortran expression parser. All parsers are built with the YACC parser generating system with modifications to the names of the different parsers. Each parser has a different LEX-generated lexical analyzer and each parser has its own separate parse table. All lexical analyzers also share a common input stream. The grammar for each of the parsers is made simpler than a single grammar for the command language, C expressions and Fortran expressions, and the parsing is faster and more efficient.
Although the description of the preferred embodiment has been presented, it is contemplated that various changes could be made without deviating from the spirit of the present invention. Accordingly, it is intended that the scope of the present invention be dictated by the appended claims rather than by the description of the preferred embodiment.
We claim:

Claims (19)

1. An integrated software architecture for controlling a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory, the integrated software architecture comprising:
control means for distributively controlling the operation and execution of multithreaded programs in the multiprocessor system by implementing an anarchy-based scheduling model for the scheduling of processes and resources that allows each processor to access a single image of the operating system stored in the common memory that operates on a common set of operating system shared resources; and interface means operably associated with the control means for interfacing between user application programs and the control means so as to present a common visual representation for a plurality of program development tools for providing compilation, execution and debugging of multithreaded programs.
2. The integrated software architecture of claim 1 wherein the control means is a symmetrically integrated multithreaded operating system and the interface means is an integrated parallel user environment.
3. The integrated software architecture of claim 1 wherein the software architecture decreases the overhead of context switches among a plurality of processes that comprise the multithreaded programs being executed on the multiprocessor system and also decreases the need for the multithreaded programs to be rewritten or customized to execute in parallel on the multiprocessor system.
4. The software architecture of claim 2 wherein the symmetrically integrated multithreaded operating system schedules the execution of processes by using an atomic resource allocation mechanism to operate on the operating system shared resources.
5. The software architecture of claim 4 wherein the processes to be scheduled include one or more microprocesses which have context that is discardable upon exit.
6. The integrated software architecture of claim 2 wherein the symmetrically integrated multithreaded operating system means comprises:

kernel means for processing multithreaded system services;
and input/output means for processing distributed, multithreaded input/output services.
7. The integrated software architecture of claim 6 wherein the kernel means for processing multithreaded system requests comprises:
parallel process scheduler means for scheduling multiple processes into multiple processors according to the anarchy-based scheduling model;
parallel memory scheduler means for allocating shared memory among one or more process groups for the processor; and support means for providing accounting, control, monitor, security, administrative and operator information about the processor.
8. The integrated software architecture of claim 6 wherein the input/output means for processing multithreaded input/output requests comprises:
file management means for managing files containing both data and instructions for the user application software programs;
input/output management means for distributively processing input/output requests to peripheral devices attached to the multiprocessor system;
resource scheduler means for scheduling processes and allocating input/output resources to those processes to optimize the usage of the multiprocessor system; and network support means for supporting input/output requests to other processors interconnected with the multiprocessor system.
9. The integrated software architecture of claim 8 wherein the file management means comprises:
memory array management means for managing virtual memory arrays;
array file management means for managing array files having superstriping; and file cache management means for managing file caching.
10. The integrated software architecture of claim 6 wherein the symmetrically integrated multithreaded operating system means further comprises:

multithreaded interface library means for storing and interfacing common multithreaded object code files for performing standard programming library functions.
11. The integrated software architecture of claim 2 wherein the integrated parallel user environment means comprises:
compilation means for compiling a source code file representing the user application software program;
program management means for controlling the development environment for the source code file;
user interface means for presenting a common visual representation to one or more users of the status, control and execution options available for the multithreaded programs; and debugger means for providing debugging information and control in response to execution of the multithreaded program on the multiprocessor system.
12. The integrated software architecture of claim 11 wherein the compilation means comprises:
one or more front end means for parsing the source code file and for generating an intermediate language representation of the source code file;
optimization means for optimizing the parallel compilation of the source code file, including means for generating machine independent optimizations based on the intermediate language representation; and code generating means for generating an object code file based upon the intermediate language representation, including means for generating machine dependent optimizations.
13. The integrated software architecture of claim 12 wherein the user interface means comprises:
means for linking the object code version of the multithreaded program into an executable code file to be executed by the multiprocessor system;
means for executing the executable code file in the multiprocessor system; and means for monitoring and tuning the performance of the executable code file, including means for providing the status, control and execution options available for the user.
14. The integrated software architecture of claim 13 wherein the user interface means further comprises:
a set of icon-represented functions; and an equivalent set of command-line functions.
15. The integrated software architecture of claim 12 wherein the debugger means comprises:
means for mapping the source code file to the optimized parallel object code file of the multithreaded program; and means for mapping the optimized parallel object code to the source code file of the multithreaded program.
16. The integrated software architecture of claim 15 wherein the debugger further comprises:
means for debugging the optimized parallel object code executing on the multiprocessor system; and means for debugging the multithreaded program across an entire computer network, including the multiprocessor system and one or more remote processors networked together with the multiprocessor system.
17. An integrated parallel user environment for developing, compiling, executing, monitoring and debugging multithreaded programs, at least a portion of which are to be run on a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory, the integrated parallel user environment comprising:
program management means for controlling the development environment for a source code file representing a user application software program for which parallel software code is desired;
compilation means for compiling the source code file to create an executable code file comprised of multithreaded programs capable of parallel execution;
user interface means for presenting a common visual representation to one or more users of the status, control and execution options available for executing and monitoring the executable code file during the time that at least a portion of the object code file is executed on the multiprocessor system; and debugger means for providing debugging information and control in response to execution of the executable code file on the multiprocessor system.
18. An integrated software architecture for implementing parallelism by default in a computer processing system comprising:
a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory; and control means for distributively controlling the operation and execution of multithreaded programs in the multiprocessor system by implementing an anarchy-based scheduling model for the scheduling of processes and resources that allows each processor to access a single image of the operating system stored in the common memory that operates on a common set of operating system shared resources.
19. A method for controlling a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory comprising the steps of:
distributively controlling the operation and execution of multithreaded programs in the multiprocessor system by implementing an anarchy-based scheduling model for the scheduling of processes and resources that allows each processor to access a single image of the operating system stored in the common memory that operates on a common set of operating system shared resources; and interfacing between user application programs and the control means so as to present a common visual representation for a plurality of program development tools for providing compilation, execution and debugging of multithreaded programs, such that the overhead of context switches among a plurality of processes that comprise the multithreaded programs being executed on the multiprocessor system is decreased and the need for the multithreaded programs to be rewritten or customized to execute in parallel on the multiprocessor system is also decreased.
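As a rough illustration of the anarchy-based model recited in the claims above, in which each processor schedules itself by atomically claiming work from shared operating-system state rather than being assigned work by a master scheduler, consider the following sketch. Python threads stand in for the tightly-coupled processors, a lock stands in for the hardware atomic operation, and all names are hypothetical:

```python
import threading
from collections import deque

run_queue = deque(range(20))   # shared OS resource: runnable work units
queue_lock = threading.Lock()  # stands in for the atomic resource-allocation mechanism
completed = []                 # record of (processor, work unit) for inspection

def processor(pid):
    """Every 'processor' runs the same scheduler code against the single
    shared image of OS state, taking work for itself until none remains."""
    while True:
        with queue_lock:       # atomic access to the shared run queue
            if not run_queue:
                return         # nothing left; this processor simply stops
            work = run_queue.popleft()
        completed.append((pid, work))  # ...stand-in for executing the work unit...

threads = [threading.Thread(target=processor, args=(p,)) for p in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the sketch is the absence of a dispatcher: no thread tells another what to run; each one contends for the shared queue on its own, which is what lets the claimed architecture avoid a master-scheduler bottleneck.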
CA002084298A 1990-06-11 1991-06-10 Integrated software architecture for a highly parallel multiprocessor system Abandoned CA2084298A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/537,466 US5179702A (en) 1989-12-29 1990-06-11 System and method for controlling a highly parallel multiprocessor using an anarchy based scheduler for parallel execution thread scheduling
US07/537,466 1990-06-11

Publications (1)

Publication Number Publication Date
CA2084298A1 true CA2084298A1 (en) 1991-12-12

Family

ID=24142755

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002084298A Abandoned CA2084298A1 (en) 1990-06-11 1991-06-10 Integrated software architecture for a highly parallel multiprocessor system

Country Status (6)

Country Link
US (2) US5179702A (en)
JP (1) JPH06505350A (en)
KR (1) KR930701785A (en)
AU (1) AU8099891A (en)
CA (1) CA2084298A1 (en)
WO (1) WO1991020033A1 (en)

Families Citing this family (607)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339415A (en) * 1990-06-11 1994-08-16 Cray Research, Inc. Dual level scheduling of processes to multiple parallel regions of a multi-threaded program on a tightly coupled multiprocessor computer system
US5551041A (en) * 1990-06-13 1996-08-27 Hewlett-Packard Company Wait for service request in an iconic programming system
CA2019327C (en) * 1990-06-19 2000-02-29 Peter K.L. Shum User inquiry facility for data processing systems
US5421014A (en) * 1990-07-13 1995-05-30 I-Tech Corporation Method for controlling multi-thread operations issued by an initiator-type device to one or more target-type peripheral devices
US5388252A (en) * 1990-09-07 1995-02-07 Eastman Kodak Company System for transparent monitoring of processors in a network with display of screen images at a remote station for diagnosis by technical support personnel
IL100987A (en) * 1991-02-27 1995-10-31 Digital Equipment Corp Method and apparatus for compiling code
US5297274A (en) * 1991-04-15 1994-03-22 International Business Machines Corporation Performance analysis of program in multithread OS by creating concurrently running thread generating breakpoint interrupts to active tracing monitor
US5269017A (en) * 1991-08-29 1993-12-07 International Business Machines Corporation Type 1, 2 and 3 retry and checkpointing
US5410648A (en) * 1991-08-30 1995-04-25 International Business Machines Corporation Debugging system wherein multiple code views are simultaneously managed
FR2683344B1 (en) * 1991-10-30 1996-09-20 Bull Sa MULTIPROCESSOR SYSTEM WITH MICROPROGRAMS FOR THE DISTRIBUTION OF PROCESSES TO PROCESSORS.
US5355494A (en) * 1991-12-12 1994-10-11 Thinking Machines Corporation Compiler for performing incremental live variable analysis for data-parallel programs
US5319784A (en) * 1991-12-18 1994-06-07 International Business Machines Corp. System for automatic and selective compile-time installation of fastpath into program for calculation of function/procedure without executing the function/procedure
US5394547A (en) * 1991-12-24 1995-02-28 International Business Machines Corporation Data processing system and method having selectable scheduler
US5852449A (en) * 1992-01-27 1998-12-22 Scientific And Engineering Software Apparatus for and method of displaying running of modeled system designs
US5742842A (en) * 1992-01-28 1998-04-21 Fujitsu Limited Data processing apparatus for executing a vector operation under control of a master processor
JP2866241B2 (en) * 1992-01-30 1999-03-08 株式会社東芝 Computer system and scheduling method
AU662973B2 (en) * 1992-03-09 1995-09-21 Auspex Systems, Inc. High-performance non-volatile ram protected write cache accelerator system
JPH05257709A (en) * 1992-03-16 1993-10-08 Hitachi Ltd Parallelism discriminating method and parallelism supporting method using the same
CA2086691C (en) * 1992-03-30 1997-04-08 David A. Elko Communicating messages between processors and a coupling facility
US5404515A (en) * 1992-04-30 1995-04-04 Bull Hn Information Systems Inc. Balancing of communications transport connections over multiple central processing units
US6047122A (en) * 1992-05-07 2000-04-04 Tm Patents, L.P. System for method for performing a context switch operation in a massively parallel computer system
JP3309425B2 (en) * 1992-05-22 2002-07-29 松下電器産業株式会社 Cache control unit
US5515538A (en) * 1992-05-29 1996-05-07 Sun Microsystems, Inc. Apparatus and method for interrupt handling in a multi-threaded operating system kernel
JPH0628322A (en) * 1992-07-10 1994-02-04 Canon Inc Information processor
US5428803A (en) * 1992-07-10 1995-06-27 Cray Research, Inc. Method and apparatus for a unified parallel processing architecture
JPH0659906A (en) * 1992-08-10 1994-03-04 Hitachi Ltd Method for controlling execution of parallel
US5339261A (en) * 1992-10-22 1994-08-16 Base 10 Systems, Inc. System for operating application software in a safety critical environment
JP2551312B2 (en) * 1992-12-28 1996-11-06 日本電気株式会社 Job step parallel execution method
EP0605927B1 (en) * 1992-12-29 1999-07-28 Koninklijke Philips Electronics N.V. Improved very long instruction word processor architecture
US6002880A (en) * 1992-12-29 1999-12-14 Philips Electronics North America Corporation VLIW processor with less instruction issue slots than functional units
JP2561801B2 (en) * 1993-02-24 1996-12-11 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and system for managing process scheduling
JPH06266683A (en) * 1993-03-12 1994-09-22 Toshiba Corp Parallel processor
DE69429204T2 (en) * 1993-03-26 2002-07-25 Cabletron Systems Inc Sequence control method and device for a communication network
US5623698A (en) * 1993-04-30 1997-04-22 Cray Research, Inc. Memory interconnect network having separate routing networks for inputs and outputs using switches with FIFO queues and message steering bits
US5535116A (en) * 1993-05-18 1996-07-09 Stanford University Flat cache-only multi-processor architectures
US5379432A (en) * 1993-07-19 1995-01-03 Taligent, Inc. Object-oriented interface for a procedural operating system
JPH0784851A (en) * 1993-09-13 1995-03-31 Toshiba Corp Shared data managing method
CA2107299C (en) * 1993-09-29 1997-02-25 Mehrad Yasrebi High performance machine for switched communications in a heterogenous data processing network gateway
US6141689A (en) * 1993-10-01 2000-10-31 International Business Machines Corp. Method and mechanism for allocating switched communications ports in a heterogeneous data processing network gateway
JP3299611B2 (en) * 1993-10-20 2002-07-08 松下電器産業株式会社 Resource allocation device
US5551037A (en) * 1993-11-19 1996-08-27 Lucent Technologies Inc. Apparatus and methods for visualizing operation of a system of processes
US5742848A (en) * 1993-11-16 1998-04-21 Microsoft Corp. System for passing messages between source object and target object utilizing generic code in source object to invoke any member function of target object by executing the same instructions
US6064819A (en) * 1993-12-08 2000-05-16 Imec Control flow and memory management optimization
US5586318A (en) * 1993-12-23 1996-12-17 Microsoft Corporation Method and system for managing ownership of a released synchronization mechanism
US5581769A (en) * 1993-12-29 1996-12-03 International Business Machines Corporation Multipurpose program object linkage protocol for upward compatibility among different compilers
JP2856681B2 (en) * 1994-01-27 1999-02-10 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and system for handling external events
US5632032A (en) * 1994-02-07 1997-05-20 International Business Machines Corporation Cross address space thread control in a multithreaded environment
US5668993A (en) * 1994-02-28 1997-09-16 Teleflex Information Systems, Inc. Multithreaded batch processing system
WO1995023372A1 (en) * 1994-02-28 1995-08-31 Teleflex Information Systems, Inc. Method and apparatus for processing discrete billing events
US7412707B2 (en) * 1994-02-28 2008-08-12 Peters Michael S No-reset option in a batch billing system
US6708226B2 (en) 1994-02-28 2004-03-16 At&T Wireless Services, Inc. Multithreaded batch processing system
US5999916A (en) 1994-02-28 1999-12-07 Teleflex Information Systems, Inc. No-reset option in a batch billing system
US6658488B2 (en) 1994-02-28 2003-12-02 Teleflex Information Systems, Inc. No-reset option in a batch billing system
US5966539A (en) * 1994-03-01 1999-10-12 Digital Equipment Corporation Link time optimization with translation to intermediate program and following optimization techniques including program analysis code motion live variable set generation order analysis, dead code elimination and load invariant analysis
US5611043A (en) * 1994-03-18 1997-03-11 Borland International, Inc. Debugger system and method for controlling child processes
US5692193A (en) * 1994-03-31 1997-11-25 Nec Research Institute, Inc. Software architecture for control of highly parallel computer systems
US5600822A (en) * 1994-04-05 1997-02-04 International Business Machines Corporation Resource allocation synchronization in a parallel processing system
JP3658420B2 (en) * 1994-04-14 2005-06-08 株式会社日立製作所 Distributed processing system
US5613114A (en) * 1994-04-15 1997-03-18 Apple Computer, Inc System and method for custom context switching
US5553227A (en) * 1994-08-26 1996-09-03 International Business Machines Corporation Method and system for visually programming state information using a visual switch
US5481719A (en) * 1994-09-09 1996-01-02 International Business Machines Corporation Exception handling method and apparatus for a microkernel data processing system
KR960706125A (en) * 1994-09-19 1996-11-08 요트.게.아. 롤페즈 A microcontroller system for performing operations of multiple microcontrollers
US5619639A (en) * 1994-10-04 1997-04-08 Mast; Michael B. Method and apparatus for associating an image display area with an application display area
US5590330A (en) * 1994-12-13 1996-12-31 International Business Machines Corporation Method and system for providing a testing facility in a program development tool
US6131185A (en) * 1994-12-13 2000-10-10 International Business Machines Corporation Method and system for visually debugging on object in an object oriented system
US5611074A (en) * 1994-12-14 1997-03-11 International Business Machines Corporation Efficient polling technique using cache coherent protocol
US6182108B1 (en) 1995-01-31 2001-01-30 Microsoft Corporation Method and system for multi-threaded processing
US5812811A (en) * 1995-02-03 1998-09-22 International Business Machines Corporation Executing speculative parallel instructions threads with forking and inter-thread communication
EP0729097A1 (en) * 1995-02-07 1996-08-28 Sun Microsystems, Inc. Method and apparatus for run-time memory access checking and memory leak detection of a multi-threaded program
US5701473A (en) * 1995-03-17 1997-12-23 Unisys Corporation System for optimally storing a data file for enhanced query processing
US5758149A (en) * 1995-03-17 1998-05-26 Unisys Corporation System for optimally processing a transaction and a query to the same database concurrently
US5745915A (en) * 1995-03-17 1998-04-28 Unisys Corporation System for parallel reading and processing of a file
US6011920A (en) * 1995-04-05 2000-01-04 International Business Machines Corporation Method and apparatus for debugging applications on a personality neutral debugger
JPH08286932A (en) 1995-04-11 1996-11-01 Hitachi Ltd Parallel execution control method for job
JPH08286831A (en) * 1995-04-14 1996-11-01 Canon Inc Pen input type electronic device and its control method
US5625804A (en) * 1995-04-17 1997-04-29 International Business Machines Corporation Data conversion in a multiprocessing system usable while maintaining system operations
WO1996036918A1 (en) * 1995-05-19 1996-11-21 At & T Ipm Corp. Method for monitoring a digital multiprocessor
US6311324B1 (en) * 1995-06-07 2001-10-30 Intel Corporation Software profiler which has the ability to display performance data on a computer screen
JPH096633A (en) * 1995-06-07 1997-01-10 Internatl Business Mach Corp <Ibm> Method and system for operation of high-performance multiplelogical route in data-processing system
US6105053A (en) * 1995-06-23 2000-08-15 Emc Corporation Operating system for a non-uniform memory access multiprocessor system
US5815701A (en) * 1995-06-29 1998-09-29 Philips Electronics North America Corporation Computer method and apparatus which maintains context switching speed with a large number of registers and which improves interrupt processing time
US6732138B1 (en) 1995-07-26 2004-05-04 International Business Machines Corporation Method and system for accessing system resources of a data processing system utilizing a kernel-only thread within a user process
US5900025A (en) * 1995-09-12 1999-05-04 Zsp Corporation Processor having a hierarchical control register file and methods for operating the same
US5729726A (en) * 1995-10-02 1998-03-17 International Business Machines Corporation Method and system for performance monitoring efficiency of branch unit operation in a processing system
US5751945A (en) * 1995-10-02 1998-05-12 International Business Machines Corporation Method and system for performance monitoring stalls to identify pipeline bottlenecks and stalls in a processing system
US5949971A (en) * 1995-10-02 1999-09-07 International Business Machines Corporation Method and system for performance monitoring through identification of frequency and length of time of execution of serialization instructions in a processing system
US5752062A (en) * 1995-10-02 1998-05-12 International Business Machines Corporation Method and system for performance monitoring through monitoring an order of processor events during execution in a processing system
US5860138A (en) * 1995-10-02 1999-01-12 International Business Machines Corporation Processor with compiler-allocated, variable length intermediate storage
US5691920A (en) * 1995-10-02 1997-11-25 International Business Machines Corporation Method and system for performance monitoring of dispatch unit efficiency in a processing system
US5748855A (en) * 1995-10-02 1998-05-05 Iinternational Business Machines Corporation Method and system for performance monitoring of misaligned memory accesses in a processing system
US5797019A (en) * 1995-10-02 1998-08-18 International Business Machines Corporation Method and system for performance monitoring time lengths of disabled interrupts in a processing system
US5634056A (en) * 1995-10-06 1997-05-27 Runtime Design Automation Run time dependency management facility for controlling change propagation utilizing relationship graph
US5758345A (en) * 1995-11-08 1998-05-26 International Business Machines Corporation Program and method for establishing a physical database layout on a distributed processor system
US5799188A (en) * 1995-12-15 1998-08-25 International Business Machines Corporation System and method for managing variable weight thread contexts in a multithreaded computer system
US6886167B1 (en) * 1995-12-27 2005-04-26 International Business Machines Corporation Method and system for migrating an object between a split status and a merged status
JP2795244B2 (en) * 1996-01-17 1998-09-10 日本電気株式会社 Program debugging system
US5854754A (en) * 1996-02-12 1998-12-29 International Business Machines Corporation Scheduling computerized backup services
US6058393A (en) * 1996-02-23 2000-05-02 International Business Machines Corporation Dynamic connection to a remote tool in a distributed processing system environment used for debugging
JP3380390B2 (en) * 1996-03-27 2003-02-24 富士通株式会社 Debug information display device
US5781897A (en) * 1996-04-18 1998-07-14 International Business Machines Corporation Method and system for performing record searches in a database within a computer peripheral storage device
US5752260A (en) * 1996-04-29 1998-05-12 International Business Machines Corporation High-speed, multiple-port, interleaved cache with arbitration of multiple access addresses
US6381739B1 (en) * 1996-05-15 2002-04-30 Motorola Inc. Method and apparatus for hierarchical restructuring of computer code
US5812855A (en) * 1996-06-03 1998-09-22 Silicon Graphics, Inc. System and method for constaint propagation cloning for unknown edges in IPA
CA2178898C (en) * 1996-06-12 2000-02-01 David Joseph Streeter Sequencing and error detection of template instantiations during compilation of c++ programs
US5799143A (en) * 1996-08-26 1998-08-25 Motorola, Inc. Multiple context software analysis
DE19635233A1 (en) * 1996-08-30 1998-03-12 Siemens Ag Microcomputer-controlled medical diagnostic or therapeutic apparatus
JP2970553B2 (en) * 1996-08-30 1999-11-02 日本電気株式会社 Multi-thread execution method
US5887166A (en) * 1996-12-16 1999-03-23 International Business Machines Corporation Method and system for constructing a program including a navigation instruction
US5799182A (en) * 1997-01-21 1998-08-25 Ford Motor Company Multiple thread micro-sequencer apparatus and method with a single processor
US6065078A (en) * 1997-03-07 2000-05-16 National Semiconductor Corporation Multi-processor element provided with hardware for software debugging
US5835705A (en) * 1997-03-11 1998-11-10 International Business Machines Corporation Method and system for performance per-thread monitoring in a multithreaded processor
US6910211B1 (en) * 1997-03-14 2005-06-21 International Business Machines Corporation System and method for queue-less enforcement of queue-like behavior on multiple threads accessing a scarce source
US5907702A (en) * 1997-03-28 1999-05-25 International Business Machines Corporation Method and apparatus for decreasing thread switch latency in a multithread processor
US6167423A (en) * 1997-04-03 2000-12-26 Microsoft Corporation Concurrency control of state machines in a computer system using cliques
US6574720B1 (en) 1997-05-30 2003-06-03 Oracle International Corporation System for maintaining a buffer pool
US6002870A (en) * 1997-05-30 1999-12-14 Sun Microsystems, Inc. Method and apparatus for non-damaging process debugging via an agent thread
US6324623B1 (en) 1997-05-30 2001-11-27 Oracle Corporation Computing system for implementing a shared cache
US6078994A (en) * 1997-05-30 2000-06-20 Oracle Corporation System for maintaining a shared cache in a multi-threaded computer environment
US5946711A (en) * 1997-05-30 1999-08-31 Oracle Corporation System for locking data in a shared cache
US6240440B1 (en) * 1997-06-30 2001-05-29 Sun Microsystems Incorporated Method and apparatus for implementing virtual threads
JP3220055B2 (en) * 1997-07-17 2001-10-22 松下電器産業株式会社 An optimizing device for optimizing a machine language instruction sequence or an assembly language instruction sequence, and a compiler device for converting a source program described in a high-level language into a machine language or an assembly language instruction sequence.
JPH1139190A (en) * 1997-07-22 1999-02-12 Fujitsu Ltd Debugging system of parallel processing program and its debugging method
DE69803575T2 (en) * 1997-07-25 2002-08-29 British Telecomm VISUALIZATION IN A MODULAR SOFTWARE SYSTEM
US6078744A (en) * 1997-08-01 2000-06-20 Sun Microsystems Method and apparatus for improving compiler performance during subsequent compilations of a source program
US6032245A (en) * 1997-08-18 2000-02-29 International Business Machines Corporation Method and system for interrupt handling in a multi-processor computer system executing speculative instruction threads
US6016557A (en) * 1997-09-25 2000-01-18 Lucent Technologies, Inc. Method and apparatus for nonintrusive passive processor monitor
US6101524A (en) * 1997-10-23 2000-08-08 International Business Machines Corporation Deterministic replay of multithreaded applications
US6076157A (en) * 1997-10-23 2000-06-13 International Business Machines Corporation Method and apparatus to force a thread switch in a multithreaded processor
US6697935B1 (en) 1997-10-23 2004-02-24 International Business Machines Corporation Method and apparatus for selecting thread switch events in a multithreaded processor
US6567839B1 (en) 1997-10-23 2003-05-20 International Business Machines Corporation Thread switch control in a multithreaded processor system
US6105051A (en) * 1997-10-23 2000-08-15 International Business Machines Corporation Apparatus and method to guarantee forward progress in execution of threads in a multithreaded processor
US6212544B1 (en) 1997-10-23 2001-04-03 International Business Machines Corporation Altering thread priorities in a multithreaded processor
KR100248376B1 (en) * 1997-10-28 2000-03-15 정선종 Integrated dynamic-visual parallel debugger and its debugging method
US7076784B1 (en) 1997-10-28 2006-07-11 Microsoft Corporation Software component execution management using context objects for tracking externally-defined intrinsic properties of executing software components within an execution environment
US6134594A (en) 1997-10-28 2000-10-17 Microsoft Corporation Multi-user, multiple tier distributed application architecture with single-user access control of middle tier objects
US6061710A (en) * 1997-10-29 2000-05-09 International Business Machines Corporation Multithreaded processor incorporating a thread latch register for interrupt service new pending threads
US6256775B1 (en) 1997-12-11 2001-07-03 International Business Machines Corporation Facilities for detailed software performance analysis in a multithreaded processor
US6047390A (en) * 1997-12-22 2000-04-04 Motorola, Inc. Multiple context software analysis
US6018759A (en) * 1997-12-22 2000-01-25 International Business Machines Corporation Thread switch tuning tool for optimal performance in a computer processor
US7437725B1 (en) * 1999-01-04 2008-10-14 General Electric Company Processing techniques for servers handling client/server traffic and communications
US6457064B1 (en) * 1998-04-27 2002-09-24 Sun Microsystems, Inc. Method and apparatus for detecting input directed to a thread in a multi-threaded process
EP0953898A3 (en) * 1998-04-28 2003-03-26 Matsushita Electric Industrial Co., Ltd. A processor for executing Instructions from memory according to a program counter, and a compiler, an assembler, a linker and a debugger for such a processor
EP0964333A1 (en) 1998-06-10 1999-12-15 Sun Microsystems, Inc. Resource management
US6487577B1 (en) * 1998-06-29 2002-11-26 Intel Corporation Distributed compiling
SE521433C2 (en) * 1998-07-22 2003-11-04 Ericsson Telefon Ab L M A method for managing the risk of a total lock between simultaneous transactions in a database
US6442620B1 (en) 1998-08-17 2002-08-27 Microsoft Corporation Environment extensibility and automatic services for component applications using contexts, policies and activators
US6473791B1 (en) 1998-08-17 2002-10-29 Microsoft Corporation Object load balancing
US7013305B2 (en) 2001-10-01 2006-03-14 International Business Machines Corporation Managing the state of coupling facility structures, detecting by one or more systems coupled to the coupling facility, the suspended state of the duplexed command, detecting being independent of message exchange
US6952827B1 (en) * 1998-11-13 2005-10-04 Cray Inc. User program and operating system interface in a multithreaded environment
US6480818B1 (en) * 1998-11-13 2002-11-12 Cray Inc. Debugging techniques in a multithreaded environment
US6487665B1 (en) 1998-11-30 2002-11-26 Microsoft Corporation Object security boundaries
US6574736B1 (en) 1998-11-30 2003-06-03 Microsoft Corporation Composable roles
US6385724B1 (en) 1998-11-30 2002-05-07 Microsoft Corporation Automatic object caller chain with declarative impersonation and transitive trust
US7117342B2 (en) * 1998-12-03 2006-10-03 Sun Microsystems, Inc. Implicitly derived register specifiers in a processor
US7114056B2 (en) * 1998-12-03 2006-09-26 Sun Microsystems, Inc. Local and global register partitioning in a VLIW processor
US6718457B2 (en) * 1998-12-03 2004-04-06 Sun Microsystems, Inc. Multiple-thread processor for threaded software applications
US6178519B1 (en) * 1998-12-10 2001-01-23 Mci Worldcom, Inc. Cluster-wide database system
US6314557B1 (en) * 1998-12-14 2001-11-06 Infineon Technologies Development Center Tel Aviv Ltd Hybrid computer programming environment
WO2000036491A2 (en) * 1998-12-17 2000-06-22 California Institute Of Technology Programming system and thread synchronization mechanisms for the development of selectively sequential and multithreaded computer programs
US6826752B1 (en) 1998-12-17 2004-11-30 California Institute Of Technology Programming system and thread synchronization mechanisms for the development of selectively sequential and multithreaded computer programs
US6266763B1 (en) * 1999-01-05 2001-07-24 Advanced Micro Devices, Inc. Physical rename register for efficiently storing floating point, integer, condition code, and multimedia values
US7529654B2 (en) * 1999-02-01 2009-05-05 International Business Machines Corporation System and procedure for controlling and monitoring programs in a computer network
US6341338B1 (en) 1999-02-04 2002-01-22 Sun Microsystems, Inc. Protocol for coordinating the distribution of shared memory
US6341371B1 (en) * 1999-02-23 2002-01-22 International Business Machines Corporation System and method for optimizing program execution in a computer system
US6553369B1 (en) * 1999-03-11 2003-04-22 Oracle Corporation Approach for performing administrative functions in information systems
US7644439B2 (en) * 1999-05-03 2010-01-05 Cisco Technology, Inc. Timing attacks against user logon and network I/O
US9268748B2 (en) 1999-05-21 2016-02-23 E-Numerate Solutions, Inc. System, method, and computer program product for outputting markup language documents
US9262383B2 (en) 1999-05-21 2016-02-16 E-Numerate Solutions, Inc. System, method, and computer program product for processing a markup document
US7249328B1 (en) * 1999-05-21 2007-07-24 E-Numerate Solutions, Inc. Tree view for reusable data markup language
US6920608B1 (en) * 1999-05-21 2005-07-19 E Numerate Solutions, Inc. Chart view for reusable data markup language
US9262384B2 (en) 1999-05-21 2016-02-16 E-Numerate Solutions, Inc. Markup language system, method, and computer program product
US7650355B1 (en) 1999-05-21 2010-01-19 E-Numerate Solutions, Inc. Reusable macro markup language
JP3557947B2 (en) * 1999-05-24 2004-08-25 日本電気株式会社 Method and apparatus for simultaneously starting thread execution by a plurality of processors and computer-readable recording medium
US6769122B1 (en) * 1999-07-02 2004-07-27 Silicon Graphics, Inc. Multithreaded layered-code processor
JP2001034504A (en) * 1999-07-19 2001-02-09 Mitsubishi Electric Corp Source level debugger
JP3807588B2 (en) * 1999-08-12 2006-08-09 富士通株式会社 Multi-thread processing apparatus, processing method, and computer-readable recording medium storing multi-thread program
US7996843B2 (en) * 1999-08-25 2011-08-09 Qnx Software Systems Gmbh & Co. Kg Symmetric multi-processor system
US6427196B1 (en) * 1999-08-31 2002-07-30 Intel Corporation SRAM controller for parallel processor architecture including address and command queue and arbiter
US6983350B1 (en) 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US6668317B1 (en) * 1999-08-31 2003-12-23 Intel Corporation Microengine for parallel processor architecture
US6606704B1 (en) * 1999-08-31 2003-08-12 Intel Corporation Parallel multithreaded processor with plural microengines executing multiple threads each microengine having loadable microcode
US6347318B1 (en) * 1999-09-01 2002-02-12 Hewlett-Packard Company Method, system, and apparatus to improve performance of tree-based data structures in computer programs
US7546444B1 (en) 1999-09-01 2009-06-09 Intel Corporation Register set used in multithreaded parallel processor architecture
US6748555B1 (en) * 1999-09-09 2004-06-08 Microsoft Corporation Object-based software management
EP1188294B1 (en) 1999-10-14 2008-03-26 Bluearc UK Limited Apparatus and method for hardware implementation or acceleration of operating system functions
WO2001035209A2 (en) * 1999-11-09 2001-05-17 University Of Victoria Innovation And Development Corporation Modified move to rear list system and methods for thread scheduling
US7512724B1 (en) * 1999-11-19 2009-03-31 The United States Of America As Represented By The Secretary Of The Navy Multi-thread peripheral processing using dedicated peripheral bus
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6633865B1 (en) * 1999-12-23 2003-10-14 Pmc-Sierra Limited Multithreaded address resolution system
US6694380B1 (en) 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
US6625654B1 (en) * 1999-12-28 2003-09-23 Intel Corporation Thread signaling in multi-threaded network processor
US6307789B1 (en) * 1999-12-28 2001-10-23 Intel Corporation Scratchpad memory
US6631430B1 (en) * 1999-12-28 2003-10-07 Intel Corporation Optimizations to receive packet status from fifo bus
US6661794B1 (en) * 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
US6976095B1 (en) * 1999-12-30 2005-12-13 Intel Corporation Port blocking technique for maintaining receive packet ordering for a multiple ethernet port switch
US6584522B1 (en) * 1999-12-30 2003-06-24 Intel Corporation Communication between processors
US6631462B1 (en) * 2000-01-05 2003-10-07 Intel Corporation Memory shared between processing threads
US6715060B1 (en) * 2000-01-28 2004-03-30 Hewlett-Packard Development Company, L.P. Utilizing a scoreboard with multi-bit registers to indicate a progression status of an instruction that retrieves data
US7035989B1 (en) 2000-02-16 2006-04-25 Sun Microsystems, Inc. Adaptive memory allocation
US6886005B2 (en) * 2000-02-17 2005-04-26 E-Numerate Solutions, Inc. RDL search engine
US8271321B1 (en) * 2000-06-05 2012-09-18 Buildinglink.com, LLC Apparatus and method for providing building management information
JP2001356934A (en) * 2000-03-02 2001-12-26 Texas Instr Inc <Ti> Constituting method of software system to interact with hardware system and digital system
JP3477137B2 (en) * 2000-03-03 2003-12-10 松下電器産業株式会社 Computer-readable recording medium recording optimization device and optimization program
AU2001255808A1 (en) * 2000-03-15 2001-09-24 Arc Cores, Inc. Method and apparatus for debugging programs in a distributed environment
US7143410B1 (en) * 2000-03-31 2006-11-28 Intel Corporation Synchronization mechanism and method for synchronizing multiple threads with a single thread
JP4693074B2 (en) * 2000-04-28 2011-06-01 ルネサスエレクトロニクス株式会社 Appearance inspection apparatus and appearance inspection method
US6647546B1 (en) 2000-05-03 2003-11-11 Sun Microsystems, Inc. Avoiding gather and scatter when calling Fortran 77 code from Fortran 90 code
US6802057B1 (en) 2000-05-03 2004-10-05 Sun Microsystems, Inc. Automatic generation of fortran 90 interfaces to fortran 77 code
US6725457B1 (en) * 2000-05-17 2004-04-20 Nvidia Corporation Semaphore enhancement to improve system performance
US6658652B1 (en) 2000-06-08 2003-12-02 International Business Machines Corporation Method and system for shadow heap memory leak detection and other heap analysis in an object-oriented environment during real-time trace processing
US6799317B1 (en) 2000-06-27 2004-09-28 International Business Machines Corporation Interrupt mechanism for shared memory message passing
US6721948B1 (en) * 2000-06-30 2004-04-13 Equator Technologies, Inc. Method for managing shared tasks in a multi-tasking data processing system
US6735758B1 (en) 2000-07-06 2004-05-11 International Business Machines Corporation Method and system for SMP profiling using synchronized or nonsynchronized metric variables with support across multiple systems
US7389497B1 (en) * 2000-07-06 2008-06-17 International Business Machines Corporation Method and system for tracing profiling information using per thread metric variables with reused kernel threads
US6904594B1 (en) 2000-07-06 2005-06-07 International Business Machines Corporation Method and system for apportioning changes in metric variables in an symmetric multiprocessor (SMP) environment
US6742178B1 (en) 2000-07-20 2004-05-25 International Business Machines Corporation System and method for instrumenting application class files with correlation information to the instrumentation
US6662359B1 (en) 2000-07-20 2003-12-09 International Business Machines Corporation System and method for injecting hooks into Java classes to handle exception and finalization processing
US20060168586A1 (en) * 2000-08-09 2006-07-27 Stone Glen D System and method for interactively utilizing a user interface to manage device resources
US6687905B1 (en) * 2000-08-11 2004-02-03 International Business Machines Corporation Multiple port input/output job scheduling
US6910107B1 (en) 2000-08-23 2005-06-21 Sun Microsystems, Inc. Method and apparatus for invalidation of data in computer systems
US7681018B2 (en) 2000-08-31 2010-03-16 Intel Corporation Method and apparatus for providing large register address space while maximizing cycletime performance for a multi-threaded register file set
US6665734B1 (en) 2000-09-21 2003-12-16 International Business Machines Corporation Blending object-oriented objects with traditional programming languages
US6668370B1 (en) 2000-09-21 2003-12-23 International Business Machines Corporation Synchronous execution of object-oriented scripts and procedural code from within an interactive test facility
CA2321016A1 (en) * 2000-09-27 2002-03-27 Ibm Canada Limited-Ibm Canada Limitee Interprocedural dead store elimination
JP3933380B2 (en) * 2000-10-05 2007-06-20 富士通株式会社 compiler
US7406681B1 (en) 2000-10-12 2008-07-29 Sun Microsystems, Inc. Automatic conversion of source code from 32-bit to 64-bit
US7305360B1 (en) 2000-10-25 2007-12-04 Thomson Financial Inc. Electronic sales system
EP2056248A1 (en) * 2000-10-25 2009-05-06 Thomson Financial Inc. Electronic commerce system
US7287089B1 (en) 2000-10-25 2007-10-23 Thomson Financial Inc. Electronic commerce infrastructure system
US7330830B1 (en) 2000-10-25 2008-02-12 Thomson Financial Inc. Distributed commerce system
US6957208B1 (en) 2000-10-31 2005-10-18 Sun Microsystems, Inc. Method, apparatus, and article of manufacture for performance analysis using semantic knowledge
JP2004513428A (en) * 2000-11-06 2004-04-30 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method and system for allocating resource allocation to tasks
US20020095434A1 (en) * 2001-01-12 2002-07-18 Lane Robert M. Performance modeling based upon empirical measurements of synchronization points
US6925634B2 (en) * 2001-01-24 2005-08-02 Texas Instruments Incorporated Method for maintaining cache coherency in software in a shared memory system
US9600842B2 (en) * 2001-01-24 2017-03-21 E-Numerate Solutions, Inc. RDX enhancement of system and method for implementing reusable data markup language (RDL)
US7325232B2 (en) * 2001-01-25 2008-01-29 Improv Systems, Inc. Compiler for multiple processor and distributed memory architectures
US6848103B2 (en) * 2001-02-16 2005-01-25 Telefonaktiebolaget Lm Ericsson Method and apparatus for processing data in a multi-processor environment
US7233998B2 (en) * 2001-03-22 2007-06-19 Sony Computer Entertainment Inc. Computer architecture and software cells for broadband networks
US20020144004A1 (en) * 2001-03-29 2002-10-03 Gaur Daniel R. Driver having multiple deferred procedure calls for interrupt processing and method for interrupt processing
CA2343437A1 (en) * 2001-04-06 2002-10-06 Ibm Canada Limited-Ibm Canada Limitee Method and system for cross platform, parallel processing
US6966000B2 (en) * 2001-04-13 2005-11-15 Ge Medical Technology Services, Inc. Method and system to remotely grant limited access to software options resident on a device
US7478394B1 (en) * 2001-06-04 2009-01-13 Hewlett-Packard Development Company, L.P. Context-corrupting context switching
US20030018957A1 (en) * 2001-06-22 2003-01-23 International Business Machines Corporation Debugger monitor with anticipatory highlights
US8234156B2 (en) * 2001-06-28 2012-07-31 Jpmorgan Chase Bank, N.A. System and method for characterizing and selecting technology transition options
US7055144B2 (en) * 2001-07-12 2006-05-30 International Business Machines Corporation Method and system for optimizing the use of processors when compiling a program
US20030028583A1 (en) * 2001-07-31 2003-02-06 International Business Machines Corporation Method and apparatus for providing dynamic workload transition during workload simulation on e-business application server
US7428485B2 (en) * 2001-08-24 2008-09-23 International Business Machines Corporation System for yielding to a processor
US7251814B2 (en) * 2001-08-24 2007-07-31 International Business Machines Corporation Yield on multithreaded processors
US6868476B2 (en) * 2001-08-27 2005-03-15 Intel Corporation Software controlled content addressable memory in a general purpose execution datapath
US7126952B2 (en) * 2001-09-28 2006-10-24 Intel Corporation Multiprotocol decapsulation/encapsulation control structure and packet protocol conversion method
CA2359862A1 (en) * 2001-10-24 2003-04-24 Ibm Canada Limited - Ibm Canada Limitee Using identifiers and counters for controlled optimization compilation
JP4272371B2 (en) * 2001-11-05 2009-06-03 パナソニック株式会社 A debugging support device, a compiler device, a debugging support program, a compiler program, and a computer-readable recording medium.
WO2003042823A1 (en) * 2001-11-14 2003-05-22 Exegesys, Inc. Method and system for software application development and customizable runtime environment
US20030110474A1 (en) * 2001-12-06 2003-06-12 International Business Machines Corporation System for coverability analysis
US6978466B2 (en) * 2002-01-02 2005-12-20 Intel Corporation Method and system to reduce thrashing in a multi-threaded programming environment
US7895239B2 (en) 2002-01-04 2011-02-22 Intel Corporation Queue arrays in network devices
US20030131146A1 (en) * 2002-01-09 2003-07-10 International Business Machines Corporation Interactive monitoring and control of computer script commands in a network environment
US6934951B2 (en) * 2002-01-17 2005-08-23 Intel Corporation Parallel processor with functional pipeline providing programming engines by supporting multiple contexts and critical section
JP2003256222A (en) * 2002-03-04 2003-09-10 Matsushita Electric Ind Co Ltd Distribution processing system, job distribution processing method and its program
US7243354B1 (en) * 2002-04-08 2007-07-10 3Com Corporation System and method for efficiently processing information in a multithread environment
US8032891B2 (en) * 2002-05-20 2011-10-04 Texas Instruments Incorporated Energy-aware scheduling of application execution
US20050188177A1 (en) * 2002-05-31 2005-08-25 The University Of Delaware Method and apparatus for real-time multithreading
US7571439B1 (en) * 2002-05-31 2009-08-04 Teradata Us, Inc. Synchronizing access to global resources
US20030225890A1 (en) * 2002-06-03 2003-12-04 Dunstan Robert A. State token for thin client devices
US7471688B2 (en) * 2002-06-18 2008-12-30 Intel Corporation Scheduling system for transmission of cells to ATM virtual circuits and DSL ports
US7185086B2 (en) * 2002-06-26 2007-02-27 Hewlett-Packard Development Company, L.P. Method for electronic tracking of an electronic device
CA2496664C (en) 2002-08-23 2015-02-17 Exit-Cube, Inc. Encrypting operating system
US9043194B2 (en) * 2002-09-17 2015-05-26 International Business Machines Corporation Method and system for efficient emulation of multiprocessor memory consistency
US7953588B2 (en) * 2002-09-17 2011-05-31 International Business Machines Corporation Method and system for efficient emulation of multiprocessor address translation on a multiprocessor host
US8108843B2 (en) * 2002-09-17 2012-01-31 International Business Machines Corporation Hybrid mechanism for more efficient emulation and method therefor
US7496494B2 (en) * 2002-09-17 2009-02-24 International Business Machines Corporation Method and system for multiprocessor emulation on a multiprocessor host system
US20040083158A1 (en) * 2002-10-09 2004-04-29 Mark Addison Systems and methods for distributing pricing data for complex derivative securities
US7603664B2 (en) * 2002-10-22 2009-10-13 Sun Microsystems, Inc. System and method for marking software code
US7222218B2 (en) * 2002-10-22 2007-05-22 Sun Microsystems, Inc. System and method for goal-based scheduling of blocks of code for concurrent execution
US7346902B2 (en) * 2002-10-22 2008-03-18 Sun Microsystems, Inc. System and method for block-based concurrentization of software code
US7340650B2 (en) 2002-10-30 2008-03-04 Jp Morgan Chase & Co. Method to measure stored procedure execution statistics
US7457822B1 (en) * 2002-11-01 2008-11-25 Bluearc Uk Limited Apparatus and method for hardware-based file system
US8041735B1 (en) 2002-11-01 2011-10-18 Bluearc Uk Limited Distributed file system and method
US7433307B2 (en) * 2002-11-05 2008-10-07 Intel Corporation Flow control in a network environment
US20040095348A1 (en) * 2002-11-19 2004-05-20 Bleiweiss Avi I. Shading language interface and method
US7149752B2 (en) * 2002-12-03 2006-12-12 Jp Morgan Chase Bank Method for simplifying databinding in application programs
US7085759B2 (en) 2002-12-06 2006-08-01 Jpmorgan Chase Bank System and method for communicating data to a process
US8032439B2 (en) 2003-01-07 2011-10-04 Jpmorgan Chase Bank, N.A. System and method for process scheduling
US7401156B2 (en) * 2003-02-03 2008-07-15 Jp Morgan Chase Bank Method using control interface to suspend software network environment running on network devices for loading and executing another software network environment
KR100498486B1 (en) 2003-02-06 2005-07-01 삼성전자주식회사 Computer system providing for recompiling a program and extracting threads dynamically by a thread binary compiler and Simultaneous Multithreading method thereof
US20040162888A1 (en) * 2003-02-17 2004-08-19 Reasor Jason W. Remote access to a firmware developer user interface
US7484087B2 (en) * 2003-02-24 2009-01-27 Jp Morgan Chase Bank Systems, methods, and software for preventing redundant processing of transmissions sent to a remote host computer
US7225437B2 (en) * 2003-03-26 2007-05-29 Sun Microsystems, Inc. Dynamic distributed make
US8180943B1 (en) * 2003-03-27 2012-05-15 Nvidia Corporation Method and apparatus for latency based thread scheduling
US7000051B2 (en) * 2003-03-31 2006-02-14 International Business Machines Corporation Apparatus and method for virtualizing interrupts in a logically partitioned computer system
US7379998B2 (en) * 2003-03-31 2008-05-27 Jp Morgan Chase Bank System and method for multi-platform queue queries
US7281075B2 (en) * 2003-04-24 2007-10-09 International Business Machines Corporation Virtualization of a global interrupt queue
US20040230602A1 (en) * 2003-05-14 2004-11-18 Andrew Doddington System and method for decoupling data presentation layer and data gathering and storage layer in a distributed data processing system
US7366722B2 (en) * 2003-05-15 2008-04-29 Jp Morgan Chase Bank System and method for specifying application services and distributing them across multiple processors using XML
US7509641B2 (en) * 2003-05-16 2009-03-24 Jp Morgan Chase Bank Job processing framework
CA2432866A1 (en) * 2003-06-20 2004-12-20 Ibm Canada Limited - Ibm Canada Limitee Debugging optimized flows
US7249246B1 (en) * 2003-06-20 2007-07-24 Transmeta Corporation Methods and systems for maintaining information for locating non-native processor instructions when executing native processor instructions
US7228528B2 (en) * 2003-06-26 2007-06-05 Intel Corporation Building inter-block streams from a dynamic execution trace for a program
CA2433527A1 (en) * 2003-06-26 2004-12-26 Ibm Canada Limited - Ibm Canada Limitee System and method for object-oriented graphically integrated command shell
US7421658B1 (en) * 2003-07-30 2008-09-02 Oracle International Corporation Method and system for providing a graphical user interface for a script session
US20050044067A1 (en) * 2003-08-22 2005-02-24 Jameson Kevin Wade Collection processing system
US7664860B2 (en) * 2003-09-02 2010-02-16 Sap Ag Session handling
US7383539B2 (en) * 2003-09-18 2008-06-03 International Business Machines Corporation Managing breakpoints in a multi-threaded environment
JP4837247B2 (en) * 2003-09-24 2011-12-14 パナソニック株式会社 Processor
US7475257B2 (en) * 2003-09-25 2009-01-06 International Business Machines Corporation System and method for selecting and using a signal processor in a multiprocessor system to operate as a security for encryption/decryption of data
US7478390B2 (en) * 2003-09-25 2009-01-13 International Business Machines Corporation Task queue management of virtual devices using a plurality of processors
US20050071828A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for compiling source code for multi-processor environments
US7389508B2 (en) * 2003-09-25 2008-06-17 International Business Machines Corporation System and method for grouping processors and assigning shared memory space to a group in heterogeneous computer environment
US7523157B2 (en) * 2003-09-25 2009-04-21 International Business Machines Corporation Managing a plurality of processors as devices
US7415703B2 (en) * 2003-09-25 2008-08-19 International Business Machines Corporation Loading software on a plurality of processors
US7496917B2 (en) * 2003-09-25 2009-02-24 International Business Machines Corporation Virtual devices using a plurality of processors
US7444632B2 (en) * 2003-09-25 2008-10-28 International Business Machines Corporation Balancing computational load across a plurality of processors
US7549145B2 (en) * 2003-09-25 2009-06-16 International Business Machines Corporation Processor dedicated code handling in a multi-processor environment
US7516456B2 (en) * 2003-09-25 2009-04-07 International Business Machines Corporation Asymmetric heterogeneous multi-threaded operating system
US7383368B2 (en) * 2003-09-25 2008-06-03 Dell Products L.P. Method and system for autonomically adaptive mutexes by considering acquisition cost value
US7043626B1 (en) 2003-10-01 2006-05-09 Advanced Micro Devices, Inc. Retaining flag value associated with dead result data in freed rename physical register with an indicator to select set-aside register instead for renaming
EP1522923A3 (en) * 2003-10-08 2011-06-22 STMicroelectronics SA Simultaneous multi-threaded (SMT) processor architecture
US7631307B2 (en) * 2003-12-05 2009-12-08 Intel Corporation User-programmable low-overhead multithreading
WO2005059750A1 (en) * 2003-12-16 2005-06-30 Real Enterprise Solutions Development B.V. Memory management in a computer system using different swapping criteria
JP4085389B2 (en) * 2003-12-24 2008-05-14 日本電気株式会社 Multiprocessor system, consistency control device and consistency control method in multiprocessor system
US7213099B2 (en) * 2003-12-30 2007-05-01 Intel Corporation Method and apparatus utilizing non-uniformly distributed DRAM configurations and to detect in-range memory address matches
US20050144174A1 (en) * 2003-12-31 2005-06-30 Leonid Pesenson Framework for providing remote processing of a graphical user interface
US7747659B2 (en) * 2004-01-05 2010-06-29 International Business Machines Corporation Garbage collector with eager read barrier
US20050154910A1 (en) * 2004-01-12 2005-07-14 Shaw Mark E. Security measures in a partitionable computing system
JP4367167B2 (en) * 2004-02-18 2009-11-18 日本電気株式会社 Real-time system, QoS adaptive control device, QoS adaptive control method used therefor, and program thereof
US7702767B2 (en) * 2004-03-09 2010-04-20 Jp Morgan Chase Bank User connectivity process management system
US20050222990A1 (en) * 2004-04-06 2005-10-06 Milne Kenneth T Methods and systems for using script files to obtain, format and disseminate database information
US9734222B1 (en) 2004-04-06 2017-08-15 Jpmorgan Chase Bank, N.A. Methods and systems for using script files to obtain, format and transport data
US20050240928A1 (en) * 2004-04-09 2005-10-27 Brown Theresa M Resource reservation
CA2563354C (en) * 2004-04-26 2010-08-17 Jp Morgan Chase Bank System and method for routing messages
US7383500B2 (en) 2004-04-30 2008-06-03 Microsoft Corporation Methods and systems for building packages that contain pre-paginated documents
US7359902B2 (en) * 2004-04-30 2008-04-15 Microsoft Corporation Method and apparatus for maintaining relationships between parts in a package
US8661332B2 (en) * 2004-04-30 2014-02-25 Microsoft Corporation Method and apparatus for document processing
US7519899B2 (en) 2004-05-03 2009-04-14 Microsoft Corporation Planar mapping of graphical elements
US8363232B2 (en) 2004-05-03 2013-01-29 Microsoft Corporation Strategies for simultaneous peripheral operations on-line using hierarchically structured job information
US20050246384A1 (en) * 2004-05-03 2005-11-03 Microsoft Corporation Systems and methods for passing data between filters
US7580948B2 (en) * 2004-05-03 2009-08-25 Microsoft Corporation Spooling strategies using structured job information
US7755786B2 (en) 2004-05-03 2010-07-13 Microsoft Corporation Systems and methods for support of various processing capabilities
US7607141B2 (en) * 2004-05-03 2009-10-20 Microsoft Corporation Systems and methods for support of various processing capabilities
US8243317B2 (en) * 2004-05-03 2012-08-14 Microsoft Corporation Hierarchical arrangement for spooling job data
US7634775B2 (en) * 2004-05-03 2009-12-15 Microsoft Corporation Sharing of downloaded resources
US7318125B2 (en) * 2004-05-20 2008-01-08 International Business Machines Corporation Runtime selective control of hardware prefetch mechanism
US20050267892A1 (en) * 2004-05-21 2005-12-01 Patrick Paul B Service proxy definition
US20050273521A1 (en) * 2004-05-21 2005-12-08 Bea Systems, Inc. Dynamically configurable service oriented architecture
US7310684B2 (en) * 2004-05-21 2007-12-18 Bea Systems, Inc. Message processing in a service oriented architecture
US20060031432A1 (en) * 2004-05-21 2006-02-09 Bea Systens, Inc. Service oriented architecture with message processing pipelines
US20060069791A1 (en) * 2004-05-21 2006-03-30 Bea Systems, Inc. Service oriented architecture with interchangeable transport protocols
US20060031354A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Service oriented architecture
US20060080419A1 (en) * 2004-05-21 2006-04-13 Bea Systems, Inc. Reliable updating for a service oriented architecture
US20050273502A1 (en) * 2004-05-21 2005-12-08 Patrick Paul B Service oriented architecture with message processing stages
US20050273516A1 (en) * 2004-05-21 2005-12-08 Bea Systems, Inc. Dynamic routing in a service oriented architecture
US20060031481A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Service oriented architecture with monitoring
US20050273517A1 (en) * 2004-05-21 2005-12-08 Bea Systems, Inc. Service oriented architecture with credential management
US7774485B2 (en) * 2004-05-21 2010-08-10 Bea Systems, Inc. Dynamic service composition and orchestration
US20050278335A1 (en) * 2004-05-21 2005-12-15 Bea Systems, Inc. Service oriented architecture with alerts
US20060031355A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Programmable service oriented architecture
US20050273520A1 (en) * 2004-05-21 2005-12-08 Bea Systems, Inc. Service oriented architecture with file transport protocol
US7653008B2 (en) * 2004-05-21 2010-01-26 Bea Systems, Inc. Dynamically configurable service oriented architecture
US20050278374A1 (en) * 2004-05-21 2005-12-15 Bea Systems, Inc. Dynamic program modification
US20060005063A1 (en) * 2004-05-21 2006-01-05 Bea Systems, Inc. Error handling for a service oriented architecture
US20050273497A1 (en) * 2004-05-21 2005-12-08 Bea Systems, Inc. Service oriented architecture with electronic mail transport protocol
US20060007918A1 (en) * 2004-05-21 2006-01-12 Bea Systems, Inc. Scaleable service oriented architecture
US20050270970A1 (en) * 2004-05-21 2005-12-08 Bea Systems, Inc. Failsafe service oriented architecture
US20050264581A1 (en) * 2004-05-21 2005-12-01 Bea Systems, Inc. Dynamic program modification
US20060031431A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Reliable updating for a service oriented architecture
US20060031930A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Dynamically configurable service oriented architecture
US20060031353A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Dynamic publishing in a service oriented architecture
US20060031433A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Batch updating for a service oriented architecture
US20050267947A1 (en) * 2004-05-21 2005-12-01 Bea Systems, Inc. Service oriented architecture with message processing pipelines
US7340654B2 (en) * 2004-06-17 2008-03-04 Platform Computing Corporation Autonomic monitoring in a grid environment
US7861246B2 (en) * 2004-06-17 2010-12-28 Platform Computing Corporation Job-centric scheduling in a grid environment
US7844969B2 (en) * 2004-06-17 2010-11-30 Platform Computing Corporation Goal-oriented predictive scheduling in a grid environment
US7665127B1 (en) 2004-06-30 2010-02-16 Jp Morgan Chase Bank System and method for providing access to protected services
US7565659B2 (en) * 2004-07-15 2009-07-21 International Business Machines Corporation Light weight context switching
US20060020701A1 (en) * 2004-07-21 2006-01-26 Parekh Harshadrai G Thread transfer between processors
US7392471B1 (en) 2004-07-28 2008-06-24 Jp Morgan Chase Bank System and method for comparing extensible markup language (XML) documents
US7681199B2 (en) * 2004-08-31 2010-03-16 Hewlett-Packard Development Company, L.P. Time measurement using a context switch count, an offset, and a scale factor, received from the operating system
US7536427B2 (en) * 2004-09-20 2009-05-19 Sap Ag Comparing process sizes
US20060070069A1 (en) * 2004-09-30 2006-03-30 International Business Machines Corporation System and method for sharing resources between real-time and virtualizing operating systems
US8185776B1 (en) * 2004-09-30 2012-05-22 Symantec Operating Corporation System and method for monitoring an application or service group within a cluster as a resource of another cluster
CA2577221C (en) 2004-10-04 2014-08-19 Research In Motion Limited System and method for adaptive allocation of threads to user objects in a computer system
CA2577230C (en) * 2004-10-04 2014-12-30 Research In Motion Limited Allocation of threads to user objects in a computer system
US20060085492A1 (en) * 2004-10-14 2006-04-20 Singh Arun K System and method for modifying process navigation
US7584111B2 (en) * 2004-11-19 2009-09-01 Microsoft Corporation Time polynomial Arrow-Debreu market equilibrium
US7496735B2 (en) * 2004-11-22 2009-02-24 Strandera Corporation Method and apparatus for incremental commitment to architectural state in a microprocessor
US7634773B2 (en) * 2004-11-24 2009-12-15 Hewlett-Packard Development Company, L.P. Method and apparatus for thread scheduling on multiple processors
KR100649713B1 (en) * 2004-12-06 2006-11-28 한국전자통신연구원 Method for hierarchical system configuration and integrated scheduling to provide multimedia streaming service on a two-level double cluster system
US8020141B2 (en) * 2004-12-06 2011-09-13 Microsoft Corporation Operating-system process construction
JP2006243838A (en) * 2005-02-28 2006-09-14 Toshiba Corp Program development device
US8219823B2 (en) 2005-03-04 2012-07-10 Carter Ernst B System for and method of managing access to a system using combinations of user information
US7698430B2 (en) * 2005-03-16 2010-04-13 Adaptive Computing Enterprises, Inc. On-demand compute environment
US9075657B2 (en) * 2005-04-07 2015-07-07 Adaptive Computing Enterprises, Inc. On-demand access to compute resources
JP4318664B2 (en) * 2005-05-12 2009-08-26 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and task execution method
US7383396B2 (en) * 2005-05-12 2008-06-03 International Business Machines Corporation Method and apparatus for monitoring processes in a non-uniform memory access (NUMA) computer system
US8407206B2 (en) * 2005-05-16 2013-03-26 Microsoft Corporation Storing results related to requests for software development services
WO2006129578A1 (en) * 2005-05-30 2006-12-07 Semiconductor Energy Laboratory Co., Ltd. Semiconductor device and driving method thereof
US7308565B2 (en) * 2005-06-15 2007-12-11 Seiko Epson Corporation Saving/restoring task state data from/to device controller host interface upon command from host processor to handle task interruptions
US8849968B2 (en) * 2005-06-20 2014-09-30 Microsoft Corporation Secure and stable hosting of third-party extensions to web services
US7908163B2 (en) * 2005-07-15 2011-03-15 The Board Of Trustees Of The University Of Alabama Method and system for parallel scheduling of complex dags under uncertainty
US20070043916A1 (en) * 2005-08-16 2007-02-22 Aguilar Maximino Jr System and method for light weight task switching when a shared memory condition is signaled
US8572516B1 (en) 2005-08-24 2013-10-29 Jpmorgan Chase Bank, N.A. System and method for controlling a screen saver
US20070067488A1 (en) * 2005-09-16 2007-03-22 Ebay Inc. System and method for transferring data
US20070101325A1 (en) * 2005-10-19 2007-05-03 Juraj Bystricky System and method for utilizing a remote memory to perform an interface save/restore procedure
US8074231B2 (en) * 2005-10-26 2011-12-06 Microsoft Corporation Configuration of isolated extensions and device drivers
US20070094495A1 (en) * 2005-10-26 2007-04-26 Microsoft Corporation Statically Verifiable Inter-Process-Communicative Isolated Processes
KR100728899B1 (en) * 2005-10-27 2007-06-15 한국과학기술원 High Performance Embedded Processor with Multiple Register Sets and Hardware Context Manager
US7499933B1 (en) 2005-11-12 2009-03-03 Jpmorgan Chase Bank, N.A. System and method for managing enterprise application configuration
US8181016B1 (en) 2005-12-01 2012-05-15 Jpmorgan Chase Bank, N.A. Applications access re-certification system
US9081609B2 (en) * 2005-12-21 2015-07-14 Xerox Corporation Image processing system and method employing a threaded scheduler
US8104030B2 (en) * 2005-12-21 2012-01-24 International Business Machines Corporation Mechanism to restrict parallelization of loops
JP2007220086A (en) * 2006-01-17 2007-08-30 Ntt Docomo Inc Input/output controller, input/output control system, and input/output control method
US20070174695A1 (en) * 2006-01-18 2007-07-26 Srinidhi Varadarajan Log-based rollback-recovery
US20070226696A1 (en) * 2006-02-03 2007-09-27 Dell Products L.P. System and method for the execution of multithreaded software applications
US8510596B1 (en) * 2006-02-09 2013-08-13 Virsec Systems, Inc. System and methods for run time detection and correction of memory corruption
US7913249B1 (en) 2006-03-07 2011-03-22 Jpmorgan Chase Bank, N.A. Software installation checker
US7895565B1 (en) 2006-03-15 2011-02-22 Jp Morgan Chase Bank, N.A. Integrated system and method for validating the functionality and performance of software applications
US8402451B1 (en) * 2006-03-17 2013-03-19 Epic Games, Inc. Dual mode evaluation for programs containing recursive computations
JP4963854B2 (en) * 2006-03-28 2012-06-27 株式会社ソニー・コンピュータエンタテインメント Multiprocessor computer and network computing system
US7500073B1 (en) * 2006-03-29 2009-03-03 Sun Microsystems, Inc. Relocation of virtual-to-physical mappings
US8087026B2 (en) 2006-04-27 2011-12-27 International Business Machines Corporation Fair share scheduling based on an individual user's resource usage and the tracking of that usage
US9703285B2 (en) * 2006-04-27 2017-07-11 International Business Machines Corporation Fair share scheduling for mixed clusters with multiple resources
US7716336B2 (en) * 2006-04-28 2010-05-11 International Business Machines Corporation Resource reservation for massively parallel processing systems
KR20070109432A (en) * 2006-05-11 2007-11-15 삼성전자주식회사 Apparatus and method for kernel aware debugging
US8572616B2 (en) * 2006-05-25 2013-10-29 International Business Machines Corporation Apparatus, system, and method for managing z/OS batch jobs with prerequisites
US7610172B2 (en) * 2006-06-16 2009-10-27 Jpmorgan Chase Bank, N.A. Method and system for monitoring non-occurring events
US7814486B2 (en) * 2006-06-20 2010-10-12 Google Inc. Multi-thread runtime system
US8024708B2 (en) * 2006-06-20 2011-09-20 Google Inc. Systems and methods for debugging an application running on a parallel-processing computer system
US8261270B2 (en) 2006-06-20 2012-09-04 Google Inc. Systems and methods for generating reference results using a parallel-processing computer system
US8381202B2 (en) * 2006-06-20 2013-02-19 Google Inc. Runtime system for executing an application in a parallel-processing computer system
WO2007149884A2 (en) * 2006-06-20 2007-12-27 Google, Inc. A multi-thread high performance computing system
US8136104B2 (en) * 2006-06-20 2012-03-13 Google Inc. Systems and methods for determining compute kernels for an application in a parallel-processing computer system
US8375368B2 (en) * 2006-06-20 2013-02-12 Google Inc. Systems and methods for profiling an application running on a parallel-processing computer system
US8146066B2 (en) 2006-06-20 2012-03-27 Google Inc. Systems and methods for caching compute kernels for an application running on a parallel-processing computer system
US8136102B2 (en) 2006-06-20 2012-03-13 Google Inc. Systems and methods for compiling an application for a parallel-processing computer system
US8108844B2 (en) * 2006-06-20 2012-01-31 Google Inc. Systems and methods for dynamically choosing a processing element for a compute kernel
US8443348B2 (en) * 2006-06-20 2013-05-14 Google Inc. Application program interface of a parallel-processing computer system that supports multiple programming languages
US7512590B2 (en) * 2006-06-21 2009-03-31 International Business Machines Corporation Discovery directives
US8032898B2 (en) * 2006-06-30 2011-10-04 Microsoft Corporation Kernel interface with categorized kernel objects
US7549035B1 (en) * 2006-09-22 2009-06-16 Sun Microsystems, Inc. System and method for reference and modification tracking
US7472253B1 (en) * 2006-09-22 2008-12-30 Sun Microsystems, Inc. System and method for managing table lookaside buffer performance
US7546439B1 (en) * 2006-09-22 2009-06-09 Sun Microsystems, Inc. System and method for managing copy-on-write faults and change-protection
US8522240B1 (en) * 2006-10-19 2013-08-27 United Services Automobile Association (Usaa) Systems and methods for collaborative task management
US8108879B1 (en) * 2006-10-27 2012-01-31 Nvidia Corporation Method and apparatus for context switching of multiple engines
US20080163183A1 (en) * 2006-12-29 2008-07-03 Zhiyuan Li Methods and apparatus to provide parameterized offloading on multiprocessor architectures
US8108845B2 (en) * 2007-02-14 2012-01-31 The Mathworks, Inc. Parallel programming computing system to dynamically allocate program portions
US8789063B2 (en) * 2007-03-30 2014-07-22 Microsoft Corporation Master and subordinate operating system kernels for heterogeneous multiprocessor systems
US20080244507A1 (en) * 2007-03-30 2008-10-02 Microsoft Corporation Homogeneous Programming For Heterogeneous Multiprocessor Systems
US7926035B2 (en) * 2007-04-24 2011-04-12 Microsoft Corporation Testing multi-thread software using prioritized context switch limits
US8046766B2 (en) * 2007-04-26 2011-10-25 Hewlett-Packard Development Company, L.P. Process assignment to physical processors using minimum and maximum processor shares
US8996394B2 (en) * 2007-05-18 2015-03-31 Oracle International Corporation System and method for enabling decision activities in a process management and design environment
US8185916B2 (en) 2007-06-28 2012-05-22 Oracle International Corporation System and method for integrating a business process management system with an enterprise service bus
US9015399B2 (en) 2007-08-20 2015-04-21 Convey Computer Multiple data channel memory module architecture
US8122229B2 (en) * 2007-09-12 2012-02-21 Convey Computer Dispatch mechanism for dispatching instructions from a host processor to a co-processor
US8561037B2 (en) * 2007-08-29 2013-10-15 Convey Computer Compiler for generating an executable comprising instructions for a plurality of different instruction sets
US8095735B2 (en) * 2008-08-05 2012-01-10 Convey Computer Memory interleave for heterogeneous computing
US8156307B2 (en) * 2007-08-20 2012-04-10 Convey Computer Multi-processor system having at least one processor that comprises a dynamically reconfigurable instruction set
US9710384B2 (en) 2008-01-04 2017-07-18 Micron Technology, Inc. Microprocessor architecture having alternative memory access paths
AU2008298585A1 (en) * 2007-09-14 2009-03-19 Softkvm, Llc Software method and system for controlling and observing computer networking devices
US8756402B2 (en) * 2007-09-14 2014-06-17 Intel Mobile Communications GmbH Processing module, processor circuit, instruction set for processing data, and method for synchronizing the processing of codes
US20090158299A1 (en) * 2007-10-31 2009-06-18 Carter Ernst B System for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
US8060878B2 (en) * 2007-11-27 2011-11-15 International Business Machines Corporation Prevention of deadlock in a distributed computing environment
US8087022B2 (en) * 2007-11-27 2011-12-27 International Business Machines Corporation Prevention of deadlock in a distributed computing environment
US8510756B1 (en) * 2007-12-06 2013-08-13 Parallels IP Holdings GmbH Guest operating system code optimization for virtual machine
US8516484B2 (en) 2008-02-01 2013-08-20 International Business Machines Corporation Wake-and-go mechanism for a data processing system
US8612977B2 (en) * 2008-02-01 2013-12-17 International Business Machines Corporation Wake-and-go mechanism with software save of thread state
US8316218B2 (en) * 2008-02-01 2012-11-20 International Business Machines Corporation Look-ahead wake-and-go engine with speculative execution
US8250396B2 (en) * 2008-02-01 2012-08-21 International Business Machines Corporation Hardware wake-and-go mechanism for a data processing system
US8341635B2 (en) 2008-02-01 2012-12-25 International Business Machines Corporation Hardware wake-and-go mechanism with look-ahead polling
US8732683B2 (en) 2008-02-01 2014-05-20 International Business Machines Corporation Compiler providing idiom to idiom accelerator
US8127080B2 (en) 2008-02-01 2012-02-28 International Business Machines Corporation Wake-and-go mechanism with system address bus transaction master
US8725992B2 (en) 2008-02-01 2014-05-13 International Business Machines Corporation Programming language exposing idiom calls to a programming idiom accelerator
US8312458B2 (en) 2008-02-01 2012-11-13 International Business Machines Corporation Central repository for wake-and-go mechanism
US8145849B2 (en) * 2008-02-01 2012-03-27 International Business Machines Corporation Wake-and-go mechanism with system bus response
US8386822B2 (en) * 2008-02-01 2013-02-26 International Business Machines Corporation Wake-and-go mechanism with data monitoring
US8452947B2 (en) * 2008-02-01 2013-05-28 International Business Machines Corporation Hardware wake-and-go mechanism and content addressable memory with instruction pre-fetch look-ahead to detect programming idioms
US8880853B2 (en) 2008-02-01 2014-11-04 International Business Machines Corporation CAM-based wake-and-go snooping engine for waking a thread put to sleep for spinning on a target address lock
US8788795B2 (en) * 2008-02-01 2014-07-22 International Business Machines Corporation Programming idiom accelerator to examine pre-fetched instruction streams for multiple processors
US8015379B2 (en) * 2008-02-01 2011-09-06 International Business Machines Corporation Wake-and-go mechanism with exclusive system bus response
US8171476B2 (en) 2008-02-01 2012-05-01 International Business Machines Corporation Wake-and-go mechanism with prioritization of threads
US8225120B2 (en) 2008-02-01 2012-07-17 International Business Machines Corporation Wake-and-go mechanism with data exclusivity
US8640141B2 (en) 2008-02-01 2014-01-28 International Business Machines Corporation Wake-and-go mechanism with hardware private array
US8245212B2 (en) * 2008-02-22 2012-08-14 Microsoft Corporation Building call tree branches and utilizing break points
US8621154B1 (en) 2008-04-18 2013-12-31 Netapp, Inc. Flow based reply cache
US8161236B1 (en) 2008-04-23 2012-04-17 Netapp, Inc. Persistent reply cache integrated with file system
US20090276448A1 (en) * 2008-04-30 2009-11-05 International Business Machines Corporation Parallel transformation of files
US9720729B2 (en) * 2008-06-02 2017-08-01 Microsoft Technology Licensing, Llc Scheduler finalization
JPWO2009150815A1 (en) * 2008-06-11 2011-11-10 パナソニック株式会社 Multiprocessor system
US8024504B2 (en) * 2008-06-26 2011-09-20 Microsoft Corporation Processor interrupt determination
US8276145B2 (en) * 2008-06-27 2012-09-25 Microsoft Corporation Protected mode scheduling of operations
US8151008B2 (en) * 2008-07-02 2012-04-03 Cradle Ip, Llc Method and system for performing DMA in a multi-core system-on-chip using deadline-based scheduling
US20100023924A1 (en) * 2008-07-23 2010-01-28 Microsoft Corporation Non-constant data encoding for table-driven systems
US9021490B2 (en) * 2008-08-18 2015-04-28 Benoît Marchand Optimizing allocation of computer resources by tracking job status and resource availability profiles
US20120297395A1 (en) * 2008-08-18 2012-11-22 Exludus Inc. Scalable work load management on multi-core computer systems
US8380549B2 (en) * 2008-09-18 2013-02-19 Sap Ag Architectural design for embedded support application software
US20100077384A1 (en) * 2008-09-23 2010-03-25 Microsoft Corporation Parallel processing of an expression
US8549093B2 (en) 2008-09-23 2013-10-01 Strategic Technology Partners, LLC Updating a user session in a Mach-derived system environment
US8745360B2 (en) * 2008-09-24 2014-06-03 Apple Inc. Generating predicate values based on conditional data dependency in vector processors
US8346995B2 (en) 2008-09-30 2013-01-01 Microsoft Corporation Balancing usage of hardware devices among clients
US8245229B2 (en) * 2008-09-30 2012-08-14 Microsoft Corporation Temporal batching of I/O jobs
JP5093509B2 (en) * 2008-10-28 2012-12-12 日本電気株式会社 CPU emulation system, CPU emulation method, and CPU emulation program
US8205066B2 (en) * 2008-10-31 2012-06-19 Convey Computer Dynamically configured coprocessor for different extended instruction set personality specific to application program with shared memory storing instructions invisibly dispatched from host processor
US20100115233A1 (en) * 2008-10-31 2010-05-06 Convey Computer Dynamically-selectable vector register partitioning
US8892789B2 (en) * 2008-12-19 2014-11-18 Netapp, Inc. Accelerating internet small computer system interface (iSCSI) proxy input/output (I/O)
US8549496B2 (en) 2009-02-27 2013-10-01 Texas Tech University System Method, apparatus and computer program product for automatically generating a computer program using consume, simplify and produce semantics with normalize, transpose and distribute operations
US8171227B1 (en) 2009-03-11 2012-05-01 Netapp, Inc. System and method for managing a flow based reply cache
US8886919B2 (en) 2009-04-16 2014-11-11 International Business Machines Corporation Remote update programming idiom accelerator with allocated processor resources
US8145723B2 (en) * 2009-04-16 2012-03-27 International Business Machines Corporation Complex remote update programming idiom accelerator
US8230201B2 (en) * 2009-04-16 2012-07-24 International Business Machines Corporation Migrating sleeping and waking threads between wake-and-go mechanisms in a multiple processor data processing system
US8082315B2 (en) * 2009-04-16 2011-12-20 International Business Machines Corporation Programming idiom accelerator for remote update
US8266289B2 (en) * 2009-04-23 2012-09-11 Microsoft Corporation Concurrent data processing in a distributed system
US8539397B2 (en) * 2009-06-11 2013-09-17 Advanced Micro Devices, Inc. Superscalar register-renaming for a stack-addressed architecture
US8452475B1 (en) 2009-10-02 2013-05-28 Rockwell Collins, Inc. Systems and methods for dynamic aircraft maintenance scheduling
US8423745B1 (en) 2009-11-16 2013-04-16 Convey Computer Systems and methods for mapping a neighborhood of data to general registers of a processing element
US9979589B2 (en) 2009-12-10 2018-05-22 Royal Bank Of Canada Coordinated processing of data by networked computing resources
US9940670B2 (en) 2009-12-10 2018-04-10 Royal Bank Of Canada Synchronized processing of data by networked computing resources
US10057333B2 (en) 2009-12-10 2018-08-21 Royal Bank Of Canada Coordinated processing of data by networked computing resources
BR112012013891B1 (en) * 2009-12-10 2020-12-08 Royal Bank Of Canada System for performing synchronized data processing through multiple networked computing resources, method, device, and computer-readable medium
US9959572B2 (en) 2009-12-10 2018-05-01 Royal Bank Of Canada Coordinated processing of data by networked computing resources
CN101848549B (en) * 2010-04-29 2012-06-20 中国人民解放军国防科学技术大学 Task scheduling method for wireless sensor network node
US8656377B2 (en) * 2010-06-10 2014-02-18 Microsoft Corporation Tracking variable information in optimized code
US8514232B2 (en) * 2010-06-28 2013-08-20 International Business Machines Corporation Propagating shared state changes to multiple threads within a multithreaded processing environment
US8782434B1 (en) 2010-07-15 2014-07-15 The Research Foundation For The State University Of New York System and method for validating program execution at run-time
US8527970B1 (en) * 2010-09-09 2013-09-03 The Boeing Company Methods and systems for mapping threads to processor cores
US9489183B2 (en) 2010-10-12 2016-11-08 Microsoft Technology Licensing, Llc Tile communication operator
GB2484729A (en) * 2010-10-22 2012-04-25 Advanced Risc Mach Ltd Exception control in a multiprocessor system
US8874457B2 (en) * 2010-11-17 2014-10-28 International Business Machines Corporation Concurrent scheduling of plan operations in a virtualized computing environment
US8402450B2 (en) * 2010-11-17 2013-03-19 Microsoft Corporation Map transformation in data parallel code
US9430204B2 (en) 2010-11-19 2016-08-30 Microsoft Technology Licensing, Llc Read-only communication operator
US9507568B2 (en) 2010-12-09 2016-11-29 Microsoft Technology Licensing, Llc Nested communication operator
KR101738641B1 (en) 2010-12-17 2017-05-23 삼성전자주식회사 Apparatus and method for compilation of program on multi core system
US9395957B2 (en) 2010-12-22 2016-07-19 Microsoft Technology Licensing, Llc Agile communication operator
US9405547B2 (en) * 2011-04-07 2016-08-02 Intel Corporation Register allocation for rotation based alias protection register
US8713361B2 (en) * 2011-05-04 2014-04-29 Advanced Micro Devices, Inc. Error protection for pipeline resources
EP2541348B1 (en) * 2011-06-28 2017-03-22 Siemens Aktiengesellschaft Method and programming system for programming an automation component
WO2013048379A1 (en) 2011-09-27 2013-04-04 Intel Corporation Expediting execution time memory aliasing checking
US8850557B2 (en) 2012-02-29 2014-09-30 International Business Machines Corporation Processor and data processing method with non-hierarchical computer security enhancements for context states
US9122522B2 (en) * 2011-12-14 2015-09-01 Advanced Micro Devices, Inc. Software mechanisms for managing task scheduling on an accelerated processing device (APD)
KR101867960B1 (en) * 2012-01-05 2018-06-18 삼성전자주식회사 Dynamically reconfigurable apparatus for operating system in manycore system and method of the same
JP6021342B2 (en) * 2012-02-09 2016-11-09 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Parallelization method, system, and program
US10228976B2 (en) * 2012-05-01 2019-03-12 Keysight Technologies Singapore (Holdings) Pte. Ltd. Methods, systems, and computer readable media for balancing incoming connections across multiple cores
US10430190B2 (en) 2012-06-07 2019-10-01 Micron Technology, Inc. Systems and methods for selectively controlling multithreaded execution of executable code segments
US9710357B2 (en) * 2012-08-04 2017-07-18 Microsoft Technology Licensing, Llc Function evaluation using lightweight process snapshots
US9122873B2 (en) 2012-09-14 2015-09-01 The Research Foundation For The State University Of New York Continuous run-time validation of program execution: a practical approach
US9069782B2 (en) 2012-10-01 2015-06-30 The Research Foundation For The State University Of New York System and method for security and privacy aware virtual machine checkpointing
US10558437B1 (en) * 2013-01-22 2020-02-11 Altera Corporation Method and apparatus for performing profile guided optimization for high-level synthesis
US8954546B2 (en) 2013-01-25 2015-02-10 Concurix Corporation Tracing with a workload distributor
US9720655B1 (en) 2013-02-01 2017-08-01 Jpmorgan Chase Bank, N.A. User interface event orchestration
US10002041B1 (en) 2013-02-01 2018-06-19 Jpmorgan Chase Bank, N.A. System and method for maintaining the health of a machine
US8997063B2 (en) 2013-02-12 2015-03-31 Concurix Corporation Periodicity optimization in an automated tracing system
US8924941B2 (en) 2013-02-12 2014-12-30 Concurix Corporation Optimization analysis using similar frequencies
US20130283281A1 (en) 2013-02-12 2013-10-24 Concurix Corporation Deploying Trace Objectives using Cost Analyses
US9088459B1 (en) 2013-02-22 2015-07-21 Jpmorgan Chase Bank, N.A. Breadth-first resource allocation system and methods
US9665474B2 (en) 2013-03-15 2017-05-30 Microsoft Technology Licensing, Llc Relationships derived from trace data
JP6070355B2 (en) * 2013-03-28 2017-02-01 富士通株式会社 Virtual machine control program, virtual machine control method, virtual machine control device, and cloud system
US9575874B2 (en) 2013-04-20 2017-02-21 Microsoft Technology Licensing, Llc Error list and bug report analysis for configuring an application tracer
US9772827B2 (en) 2013-04-22 2017-09-26 Nvidia Corporation Techniques for determining instruction dependencies
US9766866B2 (en) * 2013-04-22 2017-09-19 Nvidia Corporation Techniques for determining instruction dependencies
US9348596B2 (en) 2013-06-28 2016-05-24 International Business Machines Corporation Forming instruction groups based on decode time instruction optimization
US9372695B2 (en) 2013-06-28 2016-06-21 Globalfoundries Inc. Optimization of instruction groups across group boundaries
US9292415B2 (en) 2013-09-04 2016-03-22 Microsoft Technology Licensing, Llc Module specific tracing in a shared module environment
US9619410B1 (en) 2013-10-03 2017-04-11 Jpmorgan Chase Bank, N.A. Systems and methods for packet switching
WO2015071778A1 (en) 2013-11-13 2015-05-21 Concurix Corporation Application execution path tracing with configurable origin definition
GB2520571B (en) * 2013-11-26 2020-12-16 Advanced Risc Mach Ltd A data processing apparatus and method for performing vector processing
US9742869B2 (en) * 2013-12-09 2017-08-22 Nvidia Corporation Approach to adaptive allocation of shared resources in computer systems
US9262192B2 (en) * 2013-12-16 2016-02-16 Vmware, Inc. Virtual machine data store queue allocation
US9542259B1 (en) 2013-12-23 2017-01-10 Jpmorgan Chase Bank, N.A. Automated incident resolution system and method
US9868054B1 (en) 2014-02-10 2018-01-16 Jpmorgan Chase Bank, N.A. Dynamic game deployment
US9740464B2 (en) 2014-05-30 2017-08-22 Apple Inc. Unified intermediate representation
US10430169B2 (en) 2014-05-30 2019-10-01 Apple Inc. Language, function library, and compiler for graphical and non-graphical computation on a graphical processor unit
US10346941B2 (en) 2014-05-30 2019-07-09 Apple Inc. System and method for unified application programming interface and model
US9740644B2 (en) * 2014-09-26 2017-08-22 Intel Corporation Avoiding premature enabling of nonmaskable interrupts when returning from exceptions
US9684546B2 (en) 2014-12-16 2017-06-20 Microsoft Technology Licensing, Llc Job scheduling and monitoring in a distributed computing environment
US10048960B2 (en) * 2014-12-17 2018-08-14 Semmle Limited Identifying source code used to build executable files
WO2016165065A1 (en) * 2015-04-14 2016-10-20 华为技术有限公司 Process management method, apparatus and device
US20160335198A1 (en) * 2015-05-12 2016-11-17 Apple Inc. Methods and system for maintaining an indirection system for a mass storage device
US10204135B2 (en) 2015-07-29 2019-02-12 Oracle International Corporation Materializing expressions within in-memory virtual column units to accelerate analytic queries
US10366083B2 (en) 2015-07-29 2019-07-30 Oracle International Corporation Materializing internal computations in-memory to improve query performance
US9851957B2 (en) * 2015-12-03 2017-12-26 International Business Machines Corporation Improving application code execution performance by consolidating accesses to shared resources
WO2017218872A1 (en) 2016-06-16 2017-12-21 Virsec Systems, Inc. Systems and methods for remediating memory corruption in a computer application
US10203940B2 (en) * 2016-12-15 2019-02-12 Microsoft Technology Licensing, Llc Compiler with type inference and target code generation
US10055335B2 (en) 2017-01-23 2018-08-21 International Business Machines Corporation Programming assistance to identify suboptimal performing code and suggesting alternatives
US10628313B2 (en) 2017-05-26 2020-04-21 International Business Machines Corporation Dual clusters of fully connected integrated circuit multiprocessors with shared high-level cache
US10437641B2 (en) * 2017-06-19 2019-10-08 Cisco Technology, Inc. On-demand processing pipeline interleaved with temporal processing pipeline
TWI779069B (en) 2017-07-30 2022-10-01 埃拉德 希提 Memory chip with a memory-based distributed processor architecture
US11114138B2 (en) 2017-09-15 2021-09-07 Groq, Inc. Data structures with multiple read ports
US11243880B1 (en) 2017-09-15 2022-02-08 Groq, Inc. Processor architecture
US11360934B1 (en) 2017-09-15 2022-06-14 Groq, Inc. Tensor streaming processor architecture
US11868804B1 (en) 2019-11-18 2024-01-09 Groq, Inc. Processor instruction dispatch configuration
US11170307B1 (en) 2017-09-21 2021-11-09 Groq, Inc. Predictive model compiler for generating a statically scheduled binary with known resource constraints
US10552128B1 (en) 2017-12-26 2020-02-04 Cerner Innovation, Inc. Generating asynchronous runtime compatibility in JavaScript applications
US10606595B2 (en) 2018-03-23 2020-03-31 Arm Limited Data processing systems
US10754706B1 (en) 2018-04-16 2020-08-25 Microstrategy Incorporated Task scheduling for multiprocessor systems
US11226955B2 (en) 2018-06-28 2022-01-18 Oracle International Corporation Techniques for enabling and integrating in-memory semi-structured data and text document searches with in-memory columnar query processing
CN110795318B (en) * 2018-08-01 2023-05-02 阿里云计算有限公司 Data processing method and device and electronic equipment
US11455370B2 (en) 2018-11-19 2022-09-27 Groq, Inc. Flattened input stream generation for convolution with expanded kernel
CN109634570A (en) * 2018-12-15 2019-04-16 中国平安人寿保险股份有限公司 Front and back end integrated development method, device, equipment and computer readable storage medium
US10678677B1 (en) * 2019-01-10 2020-06-09 Red Hat Israel, Ltd. Continuous debugging
CN110675115A (en) * 2019-08-14 2020-01-10 中国平安财产保险股份有限公司 Quotation information checking method and device, electronic equipment and non-transient storage medium
WO2021068102A1 (en) * 2019-10-08 2021-04-15 Intel Corporation Reducing compiler type check costs through thread speculation and hardware transactional memory
US11429310B2 (en) 2020-03-06 2022-08-30 Samsung Electronics Co., Ltd. Adjustable function-in-memory computation system
CN111735176B (en) * 2020-05-26 2021-07-23 珠海格力电器股份有限公司 Debugging terminal based on image recognition and debugging method of electrical equipment
GB2596872B (en) * 2020-07-10 2022-12-14 Graphcore Ltd Handling injected instructions in a processor
US11875425B2 (en) * 2020-12-28 2024-01-16 Advanced Micro Devices, Inc. Implementing heterogeneous wavefronts on a graphics processing unit (GPU)
TWI776338B (en) * 2020-12-30 2022-09-01 國立成功大學 Compiler adapted in graph processing unit and non-transitory computer-readable medium
CN113177723B (en) * 2021-05-12 2023-02-17 广东电网有限责任公司 Distributed scheduling sequential control method and system
CN114840268A (en) * 2022-04-25 2022-08-02 平安普惠企业管理有限公司 Service code execution method and device, electronic equipment and storage medium
US11811681B1 (en) 2022-07-12 2023-11-07 T-Mobile Usa, Inc. Generating and deploying software architectures using telecommunication resources

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3593300A (en) * 1967-11-13 1971-07-13 Ibm Arrangement for automatically selecting units for task executions in data processing systems
US3614715A (en) * 1969-07-25 1971-10-19 Gen Cable Corp Cordset blade design
US3648253A (en) * 1969-12-10 1972-03-07 Ibm Program scheduler for processing systems
US4099235A (en) * 1972-02-08 1978-07-04 Siemens Aktiengesellschaft Method of operating a data processing system
US4183083A (en) * 1972-04-14 1980-01-08 Duquesne Systems, Inc. Method of operating a multiprogrammed computing system
US4073005A (en) * 1974-01-21 1978-02-07 Control Data Corporation Multi-processor computer system
JPS57164340A (en) * 1981-04-03 1982-10-08 Hitachi Ltd Information processing method
US4394727A (en) * 1981-05-04 1983-07-19 International Business Machines Corporation Multi-processor task dispatching apparatus
US4484274A (en) * 1982-09-07 1984-11-20 At&T Bell Laboratories Computer system with improved process switch routine
US4800521A (en) * 1982-09-21 1989-01-24 Xerox Corporation Task control manager
US4633387A (en) * 1983-02-25 1986-12-30 International Business Machines Corporation Load balancing in a multiunit system
US4837676A (en) * 1984-11-05 1989-06-06 Hughes Aircraft Company MIMD instruction flow computer architecture
US4845665A (en) * 1985-08-26 1989-07-04 International Business Machines Corp. Simulation of computer program external interfaces
US4747130A (en) * 1985-12-17 1988-05-24 American Telephone And Telegraph Company, At&T Bell Laboratories Resource allocation in distributed control systems
US4939507A (en) * 1986-04-28 1990-07-03 Xerox Corporation Virtual and emulated objects for use in the user interface of a display screen of a display processor
GB2191917A (en) * 1986-06-16 1987-12-23 Ibm A multiple window display system
JPH0540686Y2 (en) * 1987-02-26 1993-10-15
US4809170A (en) * 1987-04-22 1989-02-28 Apollo Computer, Inc. Computer device for aiding in the development of software system
US4951192A (en) * 1987-06-04 1990-08-21 Apollo Computer, Inc. Device for managing software configurations in parallel in a network
JPH01105308A (en) * 1987-10-16 1989-04-21 Sony Corp Rotary head drum device
US5050070A (en) * 1988-02-29 1991-09-17 Convex Computer Corporation Multi-processor computer system having self-allocating processors

Also Published As

Publication number Publication date
JPH06505350A (en) 1994-06-16
US5179702A (en) 1993-01-12
AU8099891A (en) 1992-01-07
US6195676B1 (en) 2001-02-27
KR930701785A (en) 1993-06-12
WO1991020033A1 (en) 1991-12-26

Similar Documents

Publication Publication Date Title
CA2084298A1 (en) Integrated software architecture for a highly parallel multiprocessor system
Halstead Jr New ideas in parallel Lisp: Language design, implementation, and programming tools
US6438573B1 (en) Real-time programming method
US20020046230A1 (en) Method for scheduling thread execution on a limited number of operating system threads
Ramsey et al. A single intermediate language that supports multiple implementations of exceptions
Dimitrov et al. Arachne: A portable threads system supporting migrant threads on heterogeneous network farms
Theobald et al. Overview of the Threaded-C language
US20100257516A1 (en) Leveraging multicore systems when compiling procedures
Jesshope μTC: an intermediate language for programming chip multiprocessors
Lin et al. Supporting lock-based multiprocessor resource sharing protocols in real-time programming languages
Holt Structure of computer programs: a survey
Beck et al. A parallel-programming process model
Vella Seamless parallel computing on heterogeneous networks of multiprocessor workstations
Bernard et al. On the compilation of a language for general concurrent target architectures
Gentleman et al. Using the Harmony operating system: Release 3.0
Mattson et al. OpenMP: An API for writing portable SMP application software
Lopes On the Design and Implementation of a Virtual Machine for Process Calculi
Royuela Alcázar High-level compiler analysis for OpenMP
Covington Validation of Rice Parallel Processing Testbed Applications
Berrington et al. Guaranteeing unpredictability
Sang et al. The Xthreads library: Design, implementation, and applications
Guarna VPC, a Proposal for a Vector Parallel C Programming Language
van der Sluys Operating systems (OPS) lecture notes
Probert Efficient cross-domain mechanisms for building kernel-less operating systems
Arthur et al. A comparison using APPL and PVM for a parallel implementation of an unstructured grid generation program

Legal Events

Date Code Title Description
FZDE Discontinued