WO2001086514A1 - Resource consumer structure - Google Patents

Resource consumer structure Download PDF

Info

Publication number
WO2001086514A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource consumer
resource
buffer
resources
consumer
Prior art date
Application number
PCT/AU2001/000513
Other languages
French (fr)
Inventor
Raymond John Huetter
Michael Cahill
Colin Pickup
Original Assignee
Bullant Technology Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bullant Technology Pty Ltd filed Critical Bullant Technology Pty Ltd
Priority to AU55982/01A priority Critical patent/AU5598201A/en
Publication of WO2001086514A1 publication Critical patent/WO2001086514A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the present invention relates to a resource consumer structure and, more particularly, to structures and methods which permit the efficient operation of a resource consumer arrangement and, more particularly but not exclusively, a task processor, within a computing environment.
  • a problem generally with communications links and more particularly with the remote links and even more particularly with the shared communications links is that transmission of
  • a resource consumer structure for a computer system, said structure including a first resource consumer adapted to consume resources allocated to it by a resource governor; at least a second resource consumer adapted to receive predetermined resources from said first resource consumer for specialised processing.
  • said at least second resource consumer is adapted to return said predetermined resources to said first resource consumer.
  • Preferably said predetermined resources comprise tasks which perform asynchronous operations.
  • Preferably said first resource consumer receives resources for consumption from a non-locking input buffer.
  • said non-locking input buffer comprises a ring buffer in which resources for consumption are input via a first in first out queue.
  • a thread inserting resources for consumption into said ring buffer manages said queue.
  • said resource consumer comprises a thread which executes a task; said task comprising a flow of operations; said thread executing said task by executing said operations.
  • each said operation results in a driver function being executed by said computer system.
  • driver functions include bounce flag communication means wherein said driver function can cause said thread to communicate said task to a different thread.
  • said different thread comprises said second resource consumer.
  • each operation of said flow of operations causes execution of a driver function which is specific to the operation.
  • said driver includes bounce flag communication means for communication of a change designated thread flag which causes said first thread to pass said first task to said second resource consumer; said second resource consumer comprising a second thread.
  • driver function is specific to said computer system.
  • said computer system is adapted to execute instruction code; said instruction code arranged as a plurality of intercommunicating code layers within said computer system.
  • said code layers include a platform independent layer.
  • Preferably said resource consumer structure is implemented in said platform independent layer.
  • said resource consumer structure is arranged so that said driver function is implemented in a separate one of said layers from said resource consumer structure.
  • said resources include tasks, unused objects, a unit of network band width or a designated unit of an input/output operation.
  • each said resource consumer structure includes an input buffer.
  • each said resource consumer includes an output buffer.
  • said input buffer is a non- locking buffer.
  • said output buffer is a non-locking buffer.
  • said input buffer is a ring buffer associated with an overflow queue; said queue adapted to handle overflow of resources for consumption placed in said ring buffer.
  • a thread inserting a resource for consumption into said ring buffer manages said queue.
  • said output buffer is a ring buffer.
  • said predetermined resources are so defined by a predetermined operation of the flow of operations comprising a task.
  • Preferably said predetermined operation is an asynchronous operation.
  • a method of allocation of usable resources to resource consumers in a computer system, said computer system having a plurality of usable resources available for work or to be worked on and one or more resource consumer structures capable of consuming said usable resources; said method comprising: (a) instituting a special purpose task as a resource governor; (b) said resource governor distributing said usable resources to said one or more resource consumer structures; (c) said resource governor receiving used resources from said one or more resource consumer structures; (d) each resource consumer structure of said one or more resource consumer structures comprising a first resource consumer and an at least second resource consumer; said first resource consumer adapted for communication with said second resource consumer.
  • said usable resource is a first task comprising a flow of operations which are executed by said first resource consumer comprising a first thread.
  • said flow of operations causes execution of a driver function which is specific to the operation.
  • said driver includes bounce flag communication means for communication of a change designated thread flag which causes said first thread to pass said first task to said second resource consumer; said second resource consumer comprising a second thread.
  • driver function is specific to said computer system.
  • said computer system is adapted to execute instruction code; said instruction code arranged as a plurality of intercommunicating code layers within said computer system.
  • code layers include a platform independent layer.
  • said resource consumer structure is arranged so that said driver function is implemented in a separate one of said layers from said resource consumer structure.
  • said usable resources include tasks, unused objects, a unit of network band width or a designated unit of an input/output operation.
  • said input buffer is a ring buffer associated with an overflow queue; said queue adapted to handle overflow of resources for consumption placed in said ring buffer.
  • a thread inserting a resource for consumption into said ring buffer manages said queue.
  • a resource consumer structure suitable for use in conjunction with a resource governor; said structure comprising a first resource consumer having a first input buffer and from which said resource consumer takes usable resources for use; said structure further including at least a second resource consumer which receives resources for consumption from a second input buffer; said second input buffer receiving resources for consumption from said first resource consumer.
  • said second resource consumer accepts resources for use of a predefined type from said first resource consumer.
  • Preferably said first resource consumer obtains said usable resources for use only from said at least one input buffer.
  • Preferably said first input buffer receives resources of only said predefined type.
  • said structure further includes an output buffer into which said resource consumer structure places said usable resources following execution.
  • a driver function for use by a resource consumer structure of a computer system; said resource consumer structure including a first resource consumer and a second resource consumer; said first resource consumer adapted to transfer predetermined tasks to said second resource consumer; said driver function incorporating bounce flag communication means whereby said driver function when invoked by said first resource consumer can communicate to said first resource consumer a bounce flag whereby said first resource consumer is caused to communicate said predetermined task to said second resource consumer for further execution of said task.
  • said predetermined resources are so defined by a predetermined operation of the flow of operations comprising a task.
  • said predetermined operation is an asynchronous operation.
  • a resource consumer structure for a computer system; said structure comprising a first resource consumer and an at
  • said queue is a first in first out queue.
  • said queue is adapted to handle overflow of resources for consumption placed in said ring buffer.
  • a thread inserting a resource for consumption into said ring buffer manages said queue.
  • a multi-function resource consumer structure comprising a first special purpose resource consumer and an at least second special purpose resource consumer; said first special purpose resource consumer adapted to pass resources to said second resource consumer for specialised processing.
  • Fig. 1A is a block diagram of a resource governor arrangement for use in conjunction with a resource consumer according to various embodiments of the present invention
  • Fig. 1B illustrates one embodiment of a resource consumer according to the present invention for use in conjunction with the governor arrangement of Fig. 1A;
  • Fig. 1C is a flow diagram for the resource consumer arrangement of Fig. 1B.
  • Fig. 2A illustrates a resource consumer structure according to a further embodiment of the present invention
  • Fig. 3A is a block diagram of a non-locking buffer structure useable with embodiments of the present invention
  • Fig. 3B is a block diagram of a non-locking buffer structure useable with embodiments of the present invention
  • Fig. 3C illustrates a particular preferred form of implementation of a non-locking buffer
  • Fig. 5 is a block diagram of a task governor useable with embodiments of the invention.
  • Fig. 6 steps A-H illustrate detailed operation of the governor of Fig. 5;
  • Fig. 7 illustrates detailed operation of a particular embodiment of a resource consumer structure in accordance with an embodiment of the present invention
  • the resource consumer structure 10 receives resources 11 for consumption from a resource pool 12 via a resource governor 13.
  • the resources for consumption comprise tasks which, as illustrated in Fig. 1B, can be executed by main thread 17A and secondary thread 17B.
  • the resource consumer structure 10 is sub-structured so as to include at least a first resource consumer or main resource consumer 21 and an at least second resource consumer or secondary resource consumer 22.
  • the first resource consumer 21 is adapted to consume/process the majority of resources input to it via non-locking input buffer 14 and to output the consumed/processed resources to non-locking output buffer 23 for return to governor 13.
  • the operation of governor 13 is described in more detail in the applicant's co-pending application Australian Patent Application No. PQ4181, the description and drawings of which are incorporated herein by cross-reference.
  • the second resource consumer 22 is adapted to consume predetermined resources typically comprising a clearly defined subset of resources available for consumption by virtual machine 15.
  • the resources can comprise tasks and, more specifically, tasks which require asynchronous execution, which is to say tasks which rely on external events for their completion and which external events themselves occur according to a non-timed (asynchronous) schedule.
  • a characteristic of first resource consumer 21 which enables this arrangement is that first resource consumer 21 can recognise the predetermined resources as such, either immediately or in the course of consumption. On recognition, it passes that resource to second resource consumer 22 for further consumption/execution by it. In this particular instance the predetermined resource is passed via secondary non-locking input buffer 24 to second resource consumer 22.
  • the way the first resource consumer 21 recognises whether a resource is a predetermined resource is
  • driver program or driver function 120 called by the first resource consumer 21.
  • the driver function 120 is called for the purpose of executing an operation required by the resource. As part of its function the driver function 120 determines whether the operation it is required to execute qualifies the resource as a predetermined resource. If it does so qualify the resource, then the driver program/function flags this by means of bounce flag 121 to the first resource consumer 21 which then passes the resource to the second resource consumer 22.
  • the nature of the predetermined resource is such that the operation which caused the predetermined resource to be so categorised will cause the second resource consumer 22 to call the same driver function 120 for the purpose of completing the operation.
  • the driver program flags this to the second resource consumer 22 which then places the predetermined resource in its output ring buffer 25. After consumption/execution the predetermined resource placed in secondary output buffer 25 is then either returned to first resource consumer 21 or to governor 13. (Refer to the implementations of Figs. 2A and 2B below.)
  • Fig. 1C is a flowchart illustrating the decision making process for a generalised resource by which a determination is arrived at as to whether the resource is to be entirely consumed by the first resource consumer 21 or passed to the second resource consumer 22 for completion of consumption.
  • the first resource consumer in the form of thread 122 executes task 123 which comprises a flow or series of operations 124 in the form of operation 1, operation 2, operation 3 ... operation N.
  • Each operation is executed by way of a respective driver function. That is operation 1 calls driver function 1, operation 2 calls driver function 2 and so on.
  • where a called driver function is a predetermined function, it sets bounce flag 121 which is recognised by thread 122 and causes thread 122 to pass task 123 to the second resource consumer in the form of second thread 125 for execution of the balance of the operations comprising task 123.
  • driver function 3 is a platform specific driver located in service layer 126 of computer platform 16. In this instance all other driver functions are located at
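  • the flow of Fig. 1C (one driver function per operation, with a bounce flag that causes the task to be handed to a second thread) can be sketched as a minimal simulation. The driver signatures, the BOUNCE sentinel and the task dictionary below are illustrative assumptions, not the patent's actual structures:

```python
BOUNCE = "bounce"  # stands in for bounce flag 121

def sync_op(task, log):
    # An ordinary (synchronous) driver function: just does its work.
    log.append(("sync", task["name"]))

def async_op(task, log):
    # A driver for an asynchronous operation: on its first call (from
    # the main thread) it requests a bounce; when re-invoked by the
    # secondary consumer it completes the operation.
    if not task.get("bounced"):
        task["bounced"] = True
        return BOUNCE
    log.append(("async", task["name"]))

def consume(task, log, secondary_input):
    """Main consumer (thread 122): execute the task's flow of
    operations, one driver call per operation; on BOUNCE, pass the
    task (with its remaining operations) to the secondary consumer."""
    while task["ops"]:
        driver = task["ops"][0]
        if driver(task, log) == BOUNCE:
            secondary_input.append(task)
            return "bounced"
        task["ops"].pop(0)  # operation complete, move to the next
    return "done"

def secondary_consume(secondary_input, log):
    """Secondary consumer (thread 125): re-invoke the same driver to
    finish the bounced operation, then run the balance of the task."""
    task = secondary_input.pop(0)
    return consume(task, log, secondary_input)
```

A task whose middle operation is asynchronous is then executed partly by the main consumer and completed by the secondary one, mirroring the hand-off of task 123 from thread 122 to thread 125.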
  • In Fig. 2A a particular embodiment of the arrangement of Fig. 1 is illustrated comprising resource consumer structure 100 which comprises non-locking input buffer 101 in communication with main task processor 102 which is in communication with and outputs executed tasks to non-locking output buffer 103.
  • Tasks which are identified to main task processor 102 as predetermined tasks are passed by main task processor 102 to non-locking secondary input buffer 104 of secondary task processor 105.
  • Secondary task processor 105 receives the predetermined tasks from secondary input buffer 104, processes them and outputs them to non-locking output buffer 106.
  • Non-locking secondary output buffer 106 then returns these predetermined tasks to main task processor 102.
  • a second preferred embodiment based on the generalised arrangement of Fig. 1 comprising, in this instance, resource consumer structure 110 comprising non-locking input buffer 111 which inputs tasks to first task processor 112.
  • Task processor 112 outputs executed tasks to output buffer 113.
  • Predetermined tasks identified to first task processor 112 are passed to secondary input buffer 114 of secondary
  • Secondary task processor 115 processes the predetermined tasks 116 and outputs them to secondary output buffer 117.
  • the output buffers 103, 113, 117 pass executed tasks to a resource governor (not shown) for further scheduling and/or return to the task/resource pool
  • a resource consumer 210 according to a first preferred embodiment of the invention comprising a resource consumer device 211 which accepts resources for consumption via input buffer 212.
  • the arrangement is such that resource consumer device 211 can take only one defined resource at a time from input buffer 212 thereby providing an inherently contention-free structure for consumption of resources by resource consumer device 211.
  • a resource consumer 215 according to a second embodiment of the invention is illustrated comprising, in this instance, a resource consumer device 216 which receives resources for consumption from input buffer 217 and then, after consumption of resources, outputs the consumed resources to output buffer 218.
  • resources for consumption are fed to the input buffers 212, 217 from whence, in order of delivery into buffers 212, 217, the resources are taken by the respective consumer devices 211, 216.
  • no contentions occur within the consumer devices 211, 216 because the consumer devices can operate simply on the basis of working on (consuming) one resource at a time and, in the case of the second embodiment of Fig. 3B, simply placing an executed or consumed resource into output buffer 218 before looking to, and only to, input buffer 217 for the next resource for consumption by resource consumer 216.
  • Fig. 3C illustrates a non-locking buffer arrangement 130 particularly suited for queues of indeterminate length.
  • the buffer comprises a ring buffer 131 of finite length which receives input from a queue 132 arranged as a first in first out (FIFO) queue.
  • FIFO first in first out
  • the control over insertion of resources for consumption into ring buffer 131 is managed by one thread. This same thread also is responsible for the placement of resources into the FIFO queue 132 when the ring buffer 131 is full. A separate thread is dedicated to the removal of resources from the ring buffer 131 for consumption by a resource consumer structure.
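  • under the single-inserter / single-remover discipline just described, the ring itself needs no locks. A sketch in Python follows; the convention of keeping one ring slot empty to distinguish full from empty, and the policy of draining the overflow queue only on insertion, are implementation assumptions:

```python
from collections import deque

class OverflowRingBuffer:
    """Sketch of the Fig. 3C arrangement: a fixed-length ring buffer 131
    fed from a FIFO overflow queue 132. One thread inserts (and alone
    owns the overflow queue); a separate thread removes. With a single
    producer and single consumer the ring indices need no locking."""

    def __init__(self, capacity):
        self.ring = [None] * capacity
        self.capacity = capacity
        self.head = 0            # next slot the removing thread reads
        self.tail = 0            # next slot the inserting thread writes
        self.overflow = deque()  # FIFO queue, touched only by the inserter

    def _ring_full(self):
        # One slot is kept empty so head == tail always means "empty".
        return (self.tail + 1) % self.capacity == self.head

    def put(self, resource):
        """Called only by the inserting thread."""
        # Queue first, then drain, so FIFO order is always preserved.
        self.overflow.append(resource)
        while self.overflow and not self._ring_full():
            self.ring[self.tail] = self.overflow.popleft()
            self.tail = (self.tail + 1) % self.capacity

    def get(self):
        """Called only by the removing (consumer) thread."""
        if self.head == self.tail:
            return None          # ring empty (items may still be queued)
        resource = self.ring[self.head]
        self.head = (self.head + 1) % self.capacity
        return resource
```

Note that in this sketch a queued resource waits in the overflow queue until the next put drains it into the ring, since only the inserting thread may touch the ring's tail.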
  • resources can be any definable unit of work which can be worked upon by a resource consumer. So, for example, the resource can be a task for execution by the resource consumer which, itself, can be a CPU or thread of a computer system.
  • a resource can also be a data block or other definable unit of data which requires work to be performed on it. More broadly, a resource can be a unit of network bandwidth or a definable input/output operation.
  • Fig. 3A or 3B or 3C can be applied via the intermediary of a resource governor or resource allocator to a system of resource allocation which allows allocation and consumption of resources without the consuming devices becoming involved in the allocation process. Stated another way, in this embodiment, the resource consumers do not need to have their
  • a plurality of resources 219 await allocation in resource pool 220.
  • the resources 219 are allocated for consumption by consumers 215a, 215b exclusively by a special purpose thread or task known as the resource governor 221.
  • the resource governor 221 allocates resources to buffers 217a, b ... providing there is room in the buffers to take the resources for consumption.
  • the resource governor 221 also receives from output buffers 218a, b ... consumed resources which are either re-allocated to an input buffer or are returned to the resource pool 220.
  • the resource governor 221 is the only entity which supplies resources to input buffers 217a, b ... and receives consumed resources from output buffers 218a, b ... there is no locking required and no resource contention.
  • the resource consumers 215a, b ... do not become involved in resource contention issues and can devote themselves exclusively to resource consumption in the form of what might generally be termed useful work.
  • the resources 219 are tasks for execution by a multi-tasking computer system.
  • the resource pool 220 comprises tasks which are dormant in the system.
  • the governor 221 removes them from the pool 220 and puts them into the next available input buffer 217a, b ....
  • the resource consumers 215a, b ... remove tasks from their respective input buffers 217a, b ..., execute them and then place them into output buffers 218a, b ... following which the governor 221 removes these tasks from the output buffers and reschedules them.
  • This particular embodiment exhibits the property that the resource scheduling overhead varies linearly with the number of consumers 215a, b ... available to do work and not with the total amount of work to be done.
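  • one scheduling cycle of the governor arrangement just described can be sketched as follows; the dictionary-based consumer records and the fixed input-buffer capacities are illustrative assumptions:

```python
from collections import deque

def governor_cycle(pool, consumers):
    """One pass of the resource governor 221 (Figs. 4 and 5 sketch): the
    governor alone moves resources from the pool into input buffers and
    drains output buffers, so consumers never touch the pool and no
    locking or contention arises."""
    # Distribute: fill each input buffer while it has room.
    for c in consumers:
        while pool and len(c["in"]) < c["in_capacity"]:
            c["in"].append(pool.popleft())
    # Collect: consumed resources are returned to the pool for rescheduling.
    for c in consumers:
        while c["out"]:
            pool.append(c["out"].popleft())

def consumer_step(c, work):
    """A consumer takes exactly one resource from its input buffer,
    works on it, and places it in its output buffer; it looks only to
    its own buffers, so it can devote itself to useful work."""
    if c["in"]:
        r = c["in"].popleft()
        c["out"].append(work(r))
```

Because only the governor fills input buffers and empties output buffers, the per-cycle overhead scales with the number of consumers rather than with the amount of outstanding work, as the embodiment above observes.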
  • Figs. 6A to 6H provide a particular example of the arrangement of Fig. 4 and, more particularly, the arrangement of Fig. 5 where the resources comprise tasks in the form of
  • the resource pool 220 itself comprises a structure of the type described with reference to Fig. 3B where the consumer takes the form of an addition requestor 222 having an input ring buffer 223 into which completed additions are placed and an output ring buffer 224 into which addition operations 225 are placed for working on by consumers, in this instance in the form of adders 215a, 215b.
  • governor 221 accepts addition operation 225 (the addition of integers 1 and 2) (Fig. 6B) .
  • the governor 221 then puts the addition operation 225 into the input ring buffer 217a of first adder 215a (Fig. 6C).
  • First adder 215a then takes the addition operation 225 from input ring buffer 217a and performs the addition (to produce the result integer 3) as shown in Fig. 6D.
  • first adder 215a then inserts the (completed) addition operation 225 into output ring buffer 218a as shown in Fig. 6E.
  • FIGs. 6G and 6H show the application of the basic sequence just described with reference to Figs. 6A through to 6F where there are multiple addition operations 225a, b, c, d
  • first adder 215a or second adder 215b for execution by one or other of first adder 215a or second adder 215b.
  • arrows in Fig. 6H indicate the direction of insertion into respective input buffers and removal from the respective output buffers.
  • this basic arrangement can be scaled to handle a significantly larger number of addition operations by adjusting the size (length) of the various buffers with reference to the rate at which the adders can perform addition operations and the governor can put and get addition operations.
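  • the adder walk-through of Figs. 6A to 6H can be condensed into a runnable sketch. Round-robin placement into the input rings is an assumption; the text only requires the governor to use the next available buffer:

```python
from collections import deque

def run_additions(operations, n_adders, ring_size):
    """End-to-end sketch of Figs. 6A to 6H: a governor feeds addition
    operations into per-adder input rings, the adders compute one
    operation at a time, and the governor drains the output rings."""
    pool = deque(operations)               # e.g. [(1, 2), (5, 7)]
    in_rings = [deque() for _ in range(n_adders)]
    out_rings = [deque() for _ in range(n_adders)]
    results = []
    i = 0
    while pool or any(in_rings) or any(out_rings):
        # Governor: put operations into the rings while there is room.
        while pool and len(in_rings[i % n_adders]) < ring_size:
            in_rings[i % n_adders].append(pool.popleft())
            i += 1
        # Adders: each takes one operation and performs the addition.
        for a in range(n_adders):
            if in_rings[a]:
                x, y = in_rings[a].popleft()
                out_rings[a].append(x + y)
        # Governor: get completed additions from the output rings.
        for a in range(n_adders):
            while out_rings[a]:
                results.append(out_rings[a].popleft())
    return results
```

Scaling the arrangement then amounts to tuning n_adders and ring_size against the rate at which the adders compute and the governor puts and gets, as the passage above notes.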
  • the first resource consumer 302 performs the function of an adder as was the case for the description with reference to figures 6.
  • the adder includes the additional functionality that it can pass tasks either to output buffer 306 for return to governor
  • FIG. 8 a further embodiment of the resource consumer structure of the present invention will be described.
  • the overall structure follows more closely the arrangement of Fig. 2B when orchestrated, in this instance, by a governor of the type described with reference to Figs. 4 to 6.
  • the resource consumer structure 401 comprises a first resource consumer 402 and a second resource consumer 403.
  • the first resource consumer 402 in the form of an execution thread receives task 404 from governor 405 via non-locking first input buffer 406.
  • the task 404 comprises a calculation part 407 and a send part 408 where the intention is that the send part 408 will send the result of calculation 407 via a communication link to a remote computer system 409.
  • the send operation will be asynchronous in nature.
  • Task 404 is initially executed by execution thread 402 by the calling of calculation driver functions 407a also from within virtual machine layer 410. Once the calculations are complete the execution thread of first resource consumer 402 calls send function driver 408a. The send driver 408a returns a bounce flag 411 to execution thread 402 which causes execution thread 402 to pass task 404 via second input buffer 412 to the second resource consumer 403 in the form of asynchronous thread 403. Asynchronous thread 403 then recalls the send driver 408a which attends to the send operation to remote computer system 409. On completion of the send operation the asynchronous thread 403 passes the fully executed task 404 via second output buffer 413 to governor 405.
  • the send driver function 408a is located at service layer 126 of system 16, this driver being platform dependent.
  • the ability of at least some resource consumers to make use of driver functions, at least some of which can be device dependent, allows optimum use to be made by the resource consumers of all features of any given hardware platform and operating system platform, including those features of any given platform or operating system which may be more efficient than others at particular tasks.
  • the ability of the resource consumer structure to take on tasks which require execution of a broader range of operations can act to reduce the load placed on the governor structure.
  • the use of a plurality of intercommunicating and multifunction resource consumers in a resource consumer structure allows efficiency of operation of at least the first resource consumer of such resource consumer structures to be maintained.
  • Annexure A provides a very specific example of the processing of a predetermined task in the context of a socket for an internet application implemented on a platform which facilitates the processing of asynchronous tasks.
  • Section 10 10.1, 10.2, 10.3 describes a possible solution to the perceived problems described in the introduction to this specification which involves the use of a dedicated asynchronous layer within a computer system.
  • the problems with this approach to be contrasted with embodiments of the present invention, are stated and then a more preferred arrangement in accordance with embodiments of the present invention is then summarised at Section 10.4 wherein, in a particular preferred embodiment at least some driver functions callable by at least the first resource consumer of the resource consumer structure of the present invention include one or more of the following features:
  • Section 11 (the APU model) of Appendix A then provides a particularly detailed description of an embodiment of the present invention.
  • Section 12 provides a description of additionally particularly preferred features of particular embodiments whilst Section 13 (Analysing Sockets III) seeks to summarise characteristics of the particular embodiment described with reference to Section 11 of Annexure A.
  • the section to the left of the dash-dot line is a plan view of the thread/task theme within the virtual machine.
  • the section to the right of the dash-dot line is a cross-section view of the kernel showing its various layers.
  • a user thread calls a driver which issues a request to the asynchronous layer
  • the calling user thread deposits its request context into an asynchronous request ring (a-request);
  • the calling user thread returns (usually) indicating the associated user task should go into an asynchronous wait state
  • An asynchronous thread inspects the a-request ring and removes the request from the ring;
  • the asynchronous thread processes the asynchronous request (potentially iteratively); 6. After servicing the request the asynchronous thread deposits the user task into its asynchronous response ring; 7. Eventually the governor removes the user task out of the a-response ring;
  • thread management would be shared/duplicated between the virtual machine and the asynchronous layer.
  • the asynchronous threads look like user threads - just deeper down in the kernel layers. The difference is that user threads handle synchronous user activity while asynchronous threads handle asynchronous activity.
  • the section above the dash-dot line is a plan view of the thread theme layer in the virtual machine.
  • the section below the dash-dot line is a plan view of the thread theme layer in the asynchronous layer.
  • the entire diagram is a plan view.
  • An APU would consist of a synchronous thread (executing a task's synchronous activity) and an asynchronous thread (executing a task's asynchronous activity).
  • With the appropriate set of ring buffers an APU has the capacity to handle synchronous and asynchronous activities with excellent throughput characteristics.
  • the difficulty with this model is how to permit services to be synchronous on one platform and asynchronous on another without complicating driver definitions.
  • the solution to platform portability is to allow at least some drivers to:
  • Some of the drivers are then deemed synchronous drivers while others are asynchronous drivers.
  • An APU consists of an AUT providing asynchronous driver services to an SUT.
  • a SUT executing a u-task invokes an a-driver via the message bus;
  • the a-driver detects it has been called by a SUT;
  • the a-driver returns to the message bus with a bus request of SBK_BUS_BOUNCE to indicate it needs to be running on an AUT;
  • the message bus detects the thread bounce request
  • the u-task member m_bAsynchronous is set to true
  • the SUT takes user tasks from the head of the overflow queue and appends them into the associated AUT a-ring while there are tasks in the overflow queue and space in the in-ring; 9. If there are no user tasks in the SUT's a-request overflow queue and there is room in the associated AUT's a-ring the SUT appends the user task onto the a-ring of that AUT;
  • AUT's a-ring the SUT appends the user task onto its a-request overflow queue
  • the SUT goes back to its thread loop and inspects its in-ring; 12. Eventually the AUT inspects its a-ring and removes the u-task from the ring;
  • the AUT message bus appends the u-task to the AUT's asynchronous request queue
  • the bus request is then set to SBK_BUS_REINVOKE;
  • a SBK_BUS_REINVOKE request causes the message bus to recall the a-driver
  • the a-driver detects it has been invoked by an AUT and begins executing the asynchronous portion of its code
  • the a-thread loop then tests its in-ring again (there may need to be a cycle limit on this);
  • the AUT goes into an alertable sleep state; 25. Eventually the AUT is interrupted by a callback from the operating system;
  • the AUT determines the task from the context supplied by the callback
  • the AUT calls the message bus on behalf of that user task which in turn calls the a-driver;
  • the a-driver services the callback
  • on SBK_BUS_COMPLETED the message bus performs the logical equivalent of an SBK_BUS_RETURN (which pops the task's stack frame), resets the bus request to SBK_BUS_ATTACH, sets m_bAsynchronous to false and puts the task onto the AUT's r-ring;
  • the governor takes the task out of the AUT's r-ring and processing proceeds as per normal.
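  • the SUT/AUT handshake enumerated above can be sketched as follows. The SBK_BUS_* names and the m_bAsynchronous member come from the text; the Python data structures, the doubling stand-in work, and the in-process aut_loop are illustrative assumptions (a real AUT would sleep and be woken by operating-system callbacks):

```python
# Bus request codes named in the text; values are arbitrary stand-ins.
SBK_BUS_BOUNCE = "bounce"
SBK_BUS_REINVOKE = "reinvoke"
SBK_BUS_COMPLETED = "completed"
SBK_BUS_ATTACH = "attach"

def a_driver(task, invoked_by):
    """An a-driver: bounces when called on a SUT; performs its
    asynchronous portion when re-invoked on an AUT."""
    if invoked_by == "SUT":
        return SBK_BUS_BOUNCE
    task["result"] = task["payload"] * 2   # stand-in asynchronous work
    return SBK_BUS_COMPLETED

def message_bus(task, thread_kind, aut_a_ring, r_ring):
    request = a_driver(task, thread_kind)
    if request == SBK_BUS_BOUNCE:
        # Bounce path: mark the u-task asynchronous and append it to the
        # AUT's a-ring (overflow-queue handling omitted for brevity).
        task["m_bAsynchronous"] = True
        aut_a_ring.append(task)
        return SBK_BUS_BOUNCE
    if request == SBK_BUS_COMPLETED:
        # Completion path: reset the request, clear the flag and put
        # the task onto the AUT's r-ring for the governor to collect.
        task["m_bAsynchronous"] = False
        task["bus_request"] = SBK_BUS_ATTACH
        r_ring.append(task)
        return SBK_BUS_COMPLETED

def aut_loop(aut_a_ring, r_ring):
    # The AUT removes each u-task from its a-ring and recalls the
    # driver via the bus; this recall plays the SBK_BUS_REINVOKE role.
    while aut_a_ring:
        task = aut_a_ring.pop(0)
        task["bus_request"] = SBK_BUS_REINVOKE
        message_bus(task, "AUT", aut_a_ring, r_ring)
```

The two invocations of the same a-driver, first bouncing and then completing, mirror steps 1 through 21 above in miniature.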
  • the a-thread dispatches the appropriate overlapped I/O request (such as ReadFileEx, WriteFileEx);
  • the a-thread keeps inspecting its a-ring while it finds a-requests in it [[check this - we may need to set a loop limit]];
  • upon detecting its a-ring being empty the a-thread goes to an alertable sleep state by calling SleepEx;
  • the a-thread should probably sleep for the same duration as the u-thread does when it detects no work to do (SETTING_THREAD_WAIT_DURATION - let's rename this to just SETTING_THREAD_WAIT); 8. (*) At some point the a-thread is alerted/interrupted via an I/O completion callback;
  • the a-thread processes the completion callback
  • the governor eventually picks the u-task up;
  • the particular driver requires a select() to be performed on a socket;
  • the driver records its state in its DCB and returns with a SBK_BUS_BOUNCE request;
  • the calling SUT appends the u-task onto its AUT's a-ring;
  • the a-thread inspects its a-request list and builds its FD_SETs from it;
  • the a-thread examines which sockets, if any, may proceed;
  • the associated u-task has its bus request set to SBK_BUS_ATTACH and is then appended to the a-thread's o-ring;
  • the governor eventually picks the u-task out of the a-thread's o-ring and processing continues as per normal.
  • Thread Affinity: there may be times when a user task genuinely requires a dedicated thread.
  • An example of this in sockets is the task that is accepting new connection requests.
  • the task spawning model will be overloaded to include an additional parameter indicating if the task requires a dedicated thread or not (a.k.a. strong or weak thread affinity).
  • a buffer will be associated with each socket to assist in implementing those calls (such as reading varying length strings) which can only be implemented as byte fetch loops.
  • Sockets III should exhibit a marked improvement in performance and stability over Sockets II.
  • Embodiments of the present invention are applicable to use in an operating system environment and can also be applied in a virtual machine environment.
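The a-thread loop sketched in the notes above (drain the a-ring, process completion callbacks, then fall into an alertable sleep until interrupted) can be illustrated as follows. This is a hedged Python sketch, not the Win32 implementation the notes assume (SleepEx, ReadFileEx); the class name AsyncWorker, the run_once structure and the SETTING_THREAD_WAIT value are illustrative assumptions, not part of the specification.

```python
import queue
import threading

# Hypothetical stand-in for SETTING_THREAD_WAIT_DURATION (seconds)
SETTING_THREAD_WAIT = 0.05

class AsyncWorker:
    """Illustrative stand-in for the a-thread: it drains its a-ring,
    processes completion callbacks, then falls into an interruptible
    wait (the analogue of an alertable SleepEx)."""

    def __init__(self):
        self.a_ring = queue.Queue()       # stands in for the a-ring
        self.completions = queue.Queue()  # completion callbacks from the OS
        self.wakeup = threading.Event()   # models the alertable sleep state

    def submit(self, request):
        """Called from the SUT side: append an a-request and wake the worker."""
        self.a_ring.put(request)
        self.wakeup.set()

    def complete(self, result):
        """Called on I/O completion: queue the callback and wake the worker."""
        self.completions.put(result)
        self.wakeup.set()

    def run_once(self, log):
        # keep inspecting the a-ring while it holds a-requests
        # (a loop limit guards against starving the rest of the loop)
        drained = 0
        while not self.a_ring.empty() and drained < 100:
            log.append(("dispatched", self.a_ring.get()))
            drained += 1
        # process any completion callbacks delivered while awake
        while not self.completions.empty():
            log.append(("completed", self.completions.get()))
        # a-ring and completion queue empty: go to an alertable sleep
        if self.a_ring.empty() and self.completions.empty():
            self.wakeup.clear()
            self.wakeup.wait(SETTING_THREAD_WAIT)
```

In this sketch a driver returning a bounce request would map to submit(), the operating system's completion callback would map to complete(), and run_once() is one turn of the a-thread loop.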

Abstract

A resource consumer structure for a computer system, the structure including a first resource consumer adapted to consume resources allocated to it by a resource governor; at least a second resource consumer adapted to receive predetermined resources from the first resource consumer for specialised processing.

Description

Received 14 August 2001
RESOURCE CONSUMER STRUCTURE
The present invention relates to a resource consumer structure and, more particularly, to structures and methods which permit the efficient operation of a resource consumer arrangement and, more particularly but not exclusively, a task processor, within a computing environment.
BACKGROUND
It is perceived as commercially desirable that any given hardware platform running computer programs should operate as efficiently as possible with a view to maximising return on the hardware investment.
In recent times there has been an explosion in the networking of computer systems. Nowadays computers are linked for communication typically within an office environment and, further, are interlinked for communication with computers located remotely from the office. The remote links can be dedicated communications links or can be universally accessible or shared links. An example of a communications environment which provides shared links is what is currently known as the Internet.
A problem generally with communications links and more particularly with the remote links and even more particularly with the shared communications links is that transmission of
Substitute Sheet (Rule 26) RO/AU
data across them follows no particular time schedule. That is, one cannot determine in advance how long all or part of a grouping of information will take to be transmitted from one computer to another. It is perceived as important to ensure that where a given computing hardware platform is executing a multitude of tasks and where at least some of those tasks are dependent upon events (such as the reception of data from a remote location) whose timing cannot be anticipated that the overall efficiency of task execution upon that hardware platform is not degraded by tasks which are relying on other events to occur before they can continue execution.
It is an object of at least preferred embodiments of the present invention to address or ameliorate one or more of the abovementioned disadvantages.
BRIEF DESCRIPTION OF INVENTION
Accordingly, in one broad form of the invention there is provided a resource consumer structure for a computer system, said structure including a first resource consumer adapted to consume resources allocated to it by a resource governor; at least a second resource consumer adapted to receive predetermined resources from said first resource consumer for specialised processing.
In one particular preferred form said at least second resource consumer is adapted to return said predetermined resources to said first resource consumer.
In an alternative particular preferred form said at least second resource consumer is adapted to return said predetermined resources to said resource governor.
Preferably said predetermined resources comprise tasks which perform asynchronous operations.
Preferably said first resource consumer receives resources for consumption from a non-locking input buffer.
Preferably said non-locking input buffer comprises a ring buffer in which resources for consumption are input via a first in first out queue.
Preferably a thread inserting resources for consumption into said ring buffer manages said queue.
Preferably said resource consumer comprises a thread which executes a task; said task comprising a flow of operations; said thread executing said task by executing said operations.

Preferably each said operation results in a driver function being executed by said computer system.
Preferably said driver functions include bounce flag communication means wherein said driver function can cause said thread to communicate said task to a different thread.
Preferably said different thread comprises said second resource consumer.
Preferably said resources comprise a first task comprising a flow of operations which are executed by said first resource consumer comprising a first thread.
Preferably each operation of said flow of operations causes execution of a driver function which is specific to the operation.
Preferably said driver includes bounce flag communication means for communication of a change designated thread flag which causes said first thread to pass said first task to said second resource consumer; said second resource consumer comprising a second thread.
Preferably said driver function is specific to said computer system.
Preferably said computer system is adapted to execute instruction code; said instruction code arranged as a plurality of intercommunicating code layers within said computer system.

Preferably said code layers include a platform independent layer.
Preferably said resource consumer structure is implemented in said platform independent layer.
Preferably said resource consumer structure is arranged so that said driver function is implemented in a separate one of said layers from said resource consumer structure.
Preferably said resources include tasks, unused objects, a unit of network bandwidth or a designated unit of an input/output operation.
Preferably each said resource consumer structure includes an input buffer.
Preferably each said resource consumer includes an output buffer.
Preferably said input buffer is a non- locking buffer.
Preferably said output buffer is a non-locking buffer.
Preferably said input buffer is a ring buffer.
Preferably said input buffer is a ring buffer associated with an overflow queue; said queue adapted to handle overflow of resources for consumption placed in said ring buffer.
Preferably a thread inserting a resource for consumption into said ring buffer manages said queue.
Preferably said output buffer is a ring buffer.

Preferably said predetermined resources are so defined by a predetermined operation of the flow of operations comprising a task.
Preferably said predetermined operation is an asynchronous operation.
In yet a further broad form of the invention there is provided a method of allocation of usable resources to resource consumers in a computer system, said computer system having a plurality of usable resources available for work or to be worked on and one or more resource consumer structures capable of consuming said usable resources; said method comprising: (a) instituting a special purpose task as a resource governor; (b) said resource governor distributing said usable resources to said one or more resource consumer structures; (c) said resource governor receiving used resources from said one or more resource consumer structures; (d) each resource consumer structure of said one or more resource consumer structures comprising a first resource consumer and an at least second resource consumer; said first resource consumer adapted for communication with said second resource consumer.

Preferably said usable resource is a first task comprising a flow of operations which are executed by said first resource consumer comprising a first thread.
Preferably said flow of operations causes execution of a driver function which is specific to the operation.
Preferably said driver includes bounce flag communication means for communication of a change designated thread flag which causes said first thread to pass said first task to said second resource consumer; said second resource consumer comprising a second thread.
Preferably said driver function is specific to said computer system.
Preferably said computer system is adapted to execute instruction code; said instruction code arranged as a plurality of intercommunicating code layers within said computer system.
Preferably said code layers include a platform independent layer.
Preferably said resource consumer structure is implemented in said platform independent layer.
Preferably said resource consumer structure is arranged so that said driver function is implemented in a separate one of said layers from said resource consumer structure.
Preferably said usable resources include tasks, unused objects, a unit of network bandwidth or a designated unit of an input/output operation.
Preferably said input buffer is a ring buffer associated with an overflow queue; said queue adapted to handle overflow of resources for consumption placed in said ring buffer.
Preferably a thread inserting a resource for consumption into said ring buffer manages said queue.
In yet a further broad form of the invention there is provided a resource consumer structure suitable for use in conjunction with a resource governor; said structure comprising a first resource consumer having a first input buffer from which said first resource consumer takes usable resources for use; said structure further including at least a second resource consumer which receives resources for consumption from a second input buffer; said second input buffer receiving resources for consumption from said first resource consumer.
Preferably said second resource consumer accepts resources for use of a predefined type from said first resource consumer.
Preferably said first resource consumer obtains said usable resources for use only from said at least one input buffer.
Preferably said first input buffer receives resources of only said predefined type.
Preferably said resource consumer experiences no contention for execution of said usable resources for use.
Preferably said structure further includes an output buffer into which said resource consumer structure places said usable resources following execution.
In yet a further broad form of the invention there is provided a driver function for use by a resource consumer structure of a computer system; said resource consumer structure including a first resource consumer and a second resource consumer; said first resource consumer adapted to transfer predetermined tasks to said second resource consumer; said driver function incorporating bounce flag communication means whereby said driver function when invoked by said first resource consumer can communicate to said first resource consumer a bounce flag whereby said first resource consumer is caused to communicate said predetermined task to said second resource consumer for further execution of said task.
Preferably said predetermined resources are so defined by a predetermined operation of the flow of operations comprising a task.

Preferably said predetermined operation is an asynchronous operation.
In yet a further broad form of the invention there is provided a resource consumer structure for a computer system; said structure comprising a first resource consumer and an at
least second resource consumer; said first resource consumer in communication with said second resource consumer by way of a non-locking input buffer.
In yet a further broad form of the invention there is provided a non-locking buffer structure comprising a ring buffer and a queue.
Preferably said queue is a first in first out queue.
Preferably said queue is adapted to handle overflow of resources for consumption placed in said ring buffer.

Preferably a thread inserting a resource for consumption into said ring buffer manages said queue.
In yet a further broad form of the invention there is provided a multi-function resource consumer structure comprising a first special purpose resource consumer and an at least second special purpose resource consumer; said first special purpose resource consumer adapted to pass resources to said second resource consumer for specialised processing.
BRIEF DESCRIPTION OF DRAWINGS
Embodiments of the invention will now be described with reference to the accompanying drawings wherein:
Fig. 1A is a block diagram of a resource governor arrangement for use in conjunction with a resource consumer according to various embodiments of the present invention;
Fig. 1B illustrates one embodiment of a resource consumer according to the present invention for use in conjunction with the governor arrangement of Fig. 1A;
Fig. 1C is a flow diagram for the resource consumer arrangement of Fig. 1B;
Fig. 2A illustrates a resource consumer structure according to a further embodiment of the present invention;
Fig. 2B illustrates a resource consumer structure according to a further embodiment of the present invention;
Fig. 3A is a block diagram of a non-locking buffer structure useable with embodiments of the present invention;

Fig. 3B is a block diagram of a non-locking buffer structure useable with embodiments of the present invention;
Fig. 3C illustrates a particular preferred form of implementation of a non-locking buffer;
Fig. 4 is a block diagram of a resource governor arrangement useable with embodiments of the invention;
Fig. 5 is a block diagram of a task governor useable with embodiments of the invention;
Fig. 6 steps A-H illustrate detailed operation of the governor of Fig. 5;
Fig. 7 illustrates detailed operation of a particular embodiment of a resource consumer structure in accordance with an embodiment of the present invention;
Fig. 8 illustrates detailed operations of a further example of a resource consumer structure according to a further embodiment of the present invention; and
Fig. 9 illustrates detailed operations of yet a further example of a resource consumer structure of yet a further preferred embodiment as described in Appendix A.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
With reference to Fig. 1A a resource consumer structure 10 according to a generalised embodiment of the present invention is illustrated. The resource consumer structure 10 receives resources 11 for consumption from a resource pool 12 via a resource governor 13.
More particularly the resource consumer structure 10 receives resources 11 for consumption via a non-locking input buffer 14. This arrangement can be utilised for the scheduling and control of consumption of a wide variety of resources for example in virtual machine 15 implemented on computer platform 16 as illustrated in the inset of Fig. 1A.
In this instance the resources for consumption comprise tasks which, as illustrated in Fig. 1B, can be executed by main thread 17A and secondary thread 17B.
Virtual machine 15 runs under operating system 18 via the intermediary of abstraction layer 19 and service layer 20. The abstraction layer 19 and service layer 20 provide a customised interface between virtual machine 15 and the specific operating system 18 running on computer platform 16. In preferred implementations of virtual machine 15 at least the virtual machine code is made platform independent. The abstraction layers 19 and service layers 20 are customised for the specific operating system 18 and platform 16.
With particular reference to Fig. 1B the resource consumer structure 10 according to embodiments of the invention is sub-structured so as to include at least a first resource consumer or main resource consumer 21 and an at least second resource consumer or secondary resource consumer 22.
The first resource consumer 21 is adapted to consume/process the majority of resources input to it via non-locking input buffer 14 and to output the consumed/processed resources to non-locking output buffer 23 for return to governor 13.
The operation of governor 13 is described in more detail in the applicant's co-pending application Australian Patent Application No. PQ4181, the description and drawings of which are incorporated herein by cross-reference.

The second resource consumer 22 is adapted to consume predetermined resources typically comprising a clearly defined subset of resources available for consumption by virtual machine 15. In one particular form, as will be described in further detail, the resources can comprise tasks and, more specifically, tasks which require asynchronous execution, which is to say tasks which rely on external events for their completion and which external events themselves occur according to a non-timed (asynchronous) schedule.

A characteristic of first resource consumer 21 which enables this arrangement is that first resource consumer 21 can recognise the predetermined resources as such, either immediately or in the course of consumption. On recognition, it passes that resource to second resource consumer 22 for further consumption/execution by it. In this particular instance the predetermined resource is passed via secondary non-locking input buffer 24 to second resource consumer 22.
In this instance, the way the first resource consumer 21 recognises whether a resource is a predetermined resource is
via a driver program or driver function 120 called by the first resource consumer 21.
The driver function 120 is called for the purpose of executing an operation required by the resource. As part of its function the driver function 120 determines whether the operation it is required to execute qualifies the resource as a predetermined resource. If it does so qualify the resource, then the driver program/function flags this by means of bounce flag 121 to the first resource consumer 21 which then passes the resource to the second resource consumer 22.
In this instance, the nature of the predetermined resource is such that the operation which caused the predetermined resource to be so categorised will cause the second resource consumer 22 to call the same driver function 120 for the purpose of completing the operation. Once the operation is completed, the driver program flags this to the second resource consumer 22 which then places the predetermined resource in its output ring buffer 25. After consumption/execution the predetermined resource placed in secondary output buffer 25 is then either returned to first resource consumer 21 or to governor 13. (Refer to the implementations of Figs. 2A and 2B below.)
Fig. 1C is a flowchart illustrating the decision making process for a generalised resource by which a determination is arrived at as to whether the resource is to be entirely consumed by the first resource consumer 21 or passed to the second resource consumer 22 for completion of consumption.
Specifically first resource consumer in the form of thread 122 executes task 123 which comprises a flow or series of operations 124 in the form of operation 1, operation 2, operation 3 ... operation N. Each operation is executed by way of a respective driver function. That is, operation 1 calls driver function 1, operation 2 calls driver function 2 and so on.
If a called driver function is a predetermined function it sets bounce flag 121 which is recognised by thread 122 and causes thread 122 to pass task 123 to second resource consumer in the form of second thread 125 for execution of the balance of the operations comprising task 123.
In the flowchart of Fig. 1C it is operation 3 which is a predetermined function which sets bounce flag 121 resulting in operations 4 onwards being executed by second thread 125. In this instance driver function 3 is a platform specific driver located in service layer 126 of computer platform 16. In this instance all other driver functions are located at
the level of and executed within the layer comprising virtual machine 15.
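The bounce-flag hand-off of Fig. 1C can be sketched roughly as follows. This is an illustrative Python sketch, not the actual driver interface of the invention; the names sync_driver, async_driver, run_task and the task dictionary layout are hypothetical, introduced only for the example.

```python
BOUNCE = "bounce"  # hypothetical stand-in for bounce flag 121

def sync_driver(op, task):
    """Driver for an ordinary operation: executes it in place."""
    task["log"].append(op)
    return None

def async_driver(op, task):
    """Driver for a predetermined (e.g. asynchronous) operation:
    instead of completing, it raises the bounce flag so the task
    is handed to the second thread."""
    return BOUNCE

def run_task(task, drivers, handoff):
    """First-consumer loop: execute operations until done or bounced."""
    while task["pc"] < len(task["ops"]):
        op = task["ops"][task["pc"]]
        if drivers[op](op, task) == BOUNCE:
            handoff.append(task)  # pass the task to the second consumer
            return "bounced"
        task["pc"] += 1
    return "done"
```

On a bounce the task is left with its program counter at the bounced operation, so the second thread can call the same driver function again to complete that operation and then resume execution of the balance of the operations.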
With reference to Fig. 2A a particular embodiment of the arrangement of Fig. 1 is illustrated comprising resource consumer structure 100 which comprises non-locking input buffer 101 in communication with main task processor 102 which is in communication with and outputs executed tasks to non-locking output buffer 103.
Tasks which are identified to main task processor 102 as predetermined tasks are passed by main task processor 102 to non-locking secondary input buffer 104 of secondary task processor 105. Secondary task processor 105 receives the predetermined tasks from secondary input buffer 104, processes them and outputs them to non-locking output buffer 106. Non-locking secondary output buffer 106 then returns these predetermined tasks to main task processor 102.
With reference to Fig. 2B a second preferred embodiment based on the generalised arrangement of Fig. 1 is illustrated comprising, in this instance, resource consumer structure 110 comprising non-locking input buffer 111 which inputs tasks to first task processor 112. Task processor 112 outputs executed tasks to output buffer 113.
Predetermined tasks identified to first task processor 112 are passed to secondary input buffer 114 of secondary
task processor 115. Secondary task processor 115 processes the predetermined tasks 116 and outputs them to secondary output buffer 117.
Where the systems are of the type described with reference to Figs. 1A, 1B, 1C the output buffers 103, 113, 117 pass executed tasks to a resource governor (not shown) for further scheduling and/or return to the task/resource pool 12.
With reference to Fig. 3A there is illustrated a resource consumer 210 according to a first preferred embodiment of the invention comprising a resource consumer device 211 which accepts resources for consumption via input buffer 212. The arrangement is such that resource consumer device 211 can take only one defined resource at a time from input buffer 212 thereby providing an inherently contention-free structure for consumption of resources by resource consumer device 211.
With reference to Fig. 3B a resource consumer 215 according to a second embodiment of the invention is illustrated comprising, in this instance, a resource consumer device 216 which receives resources for consumption from input buffer 217 and then, after consumption of resources, outputs the consumed resources to output buffer 218.
In use for the embodiments of both Fig. 3A and Fig. 3B resources for consumption are fed to the input buffers 212, 217 from whence, in order of delivery into buffers 212, 217, the resources are taken by respective consumer devices 211, 216. In this way no contentions occur within the consumer devices 211, 216 because the consumer devices can operate simply on the basis of working on (consuming) one resource at a time and, in the case of the second embodiment of Fig. 3B, simply placing an executed or consumed resource into output buffer 218 before looking to, and only to, input buffer 217 for the next resource for consumption by resource consumer device 216.
Fig. 3C illustrates a non-locking buffer arrangement 130 particularly suited for queues of indeterminate length. The buffer comprises a ring buffer 131 of finite length which receives input from a queue 132 arranged as a first in first out (FIFO) queue.
The control over insertion of resources for consumption into ring buffer 131 is managed by one thread. This same thread also is responsible for the placement of resources into the FIFO queue 132 when the ring buffer 131 is full. A separate thread is dedicated to the removal of resources from the ring buffer 131 for consumption by a resource consumer structure.
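A minimal sketch of this buffer arrangement, assuming a single inserting thread and a single removing thread, might look like the following. The class name OverflowRing and its method names are hypothetical, and a real implementation would additionally need the memory-ordering guarantees of the target platform; this sketch only illustrates the division of responsibility described above.

```python
from collections import deque

class OverflowRing:
    """Single-producer / single-consumer ring buffer with a FIFO overflow
    queue. Only the inserting thread touches the overflow queue, and each
    index is advanced by exactly one thread, which is why no locks are
    needed in this sketch."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.head = 0            # advanced only by the consumer thread
        self.tail = 0            # advanced only by the producer thread
        self.overflow = deque()  # FIFO queue, owned by the producer thread

    def _full(self):
        return self.tail - self.head == self.capacity

    def put(self, item):
        """Producer side: drain overflow into the ring first, then insert
        the new item into the ring, or queue it if the ring is full."""
        while self.overflow and not self._full():
            self.slots[self.tail % self.capacity] = self.overflow.popleft()
            self.tail += 1
        if self._full():
            self.overflow.append(item)
        else:
            self.slots[self.tail % self.capacity] = item
            self.tail += 1

    def get(self):
        """Consumer side: return the next resource, or None if empty."""
        if self.head == self.tail:
            return None
        item = self.slots[self.head % self.capacity]
        self.head += 1
        return item
```

Note that, as in the arrangement of Fig. 3C, the overflow queue is drained only by the inserting thread (on its next put), never by the removing thread.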
In the context of computer systems "resources" can be any definable unit of work which can be worked upon by a resource consumer. So, for example, the resource can be a task for execution by the resource consumer which, itself, can be a CPU or thread of a computer system. A resource can also be a data block or other definable unit of data which requires work to be performed on it. More broadly, a resource can be a unit of network bandwidth or a definable input/output operation.

With reference to Figs. 4 to 6 the detailed operation of a resource governor for use in conjunction with one or more resource consumers is described below. There then follows with reference to Figs. 7 and 8 a more detailed description of preferred embodiments of the resource consumer structure of the present invention when utilised, in this instance, with a resource governor arrangement such as that illustrated and described with reference to Figs. 4 to 6 inclusive.
With reference to Fig. 4 the arrangements of Fig. 3A or 3B or 3C can be applied via the intermediary of a resource governor or resource allocator to a system of resource allocation which allows allocation and consumption of resources without the consuming devices becoming involved in the allocation process. Stated another way, in this embodiment, the resource consumers do not need to have their
execution interrupted for the purpose of resource allocation or resolution of contention.
In the arrangement of Fig. 4 the consumers are numbered as for the second embodiment of Fig. 3B with the first resource consumer and its components incorporating the suffix a, the second resource consumer and its components the suffix b and so on.
In this instance a plurality of resources 219 await allocation in resource pool 220. The resources 219 are allocated for consumption by consumers 215a, 215b exclusively by a special purpose thread or task known as the resource governor 221.
The resource governor 221 allocates resources to buffers 217a, b ... providing there is room in the buffers to take the resources for consumption. The resource governor 221 also receives from output buffers 218a, b ... consumed resources which are either re-allocated to an input buffer or are returned to the resource pool 220.
Because the resource governor 221 is the only entity which supplies resources to input buffers 217a, b ... and receives consumed resources from output buffers 218a, b ... there is no locking required and no resource contention.
The combination of the single purpose resource governor 221 with the buffering arrangement illustrated in Fig. 4
means that the resource consumers 215a, b ... do not become involved in resource contention issues and can devote themselves exclusively to resource consumption in the form of what might generally be termed useful work.

With reference to Fig. 5 a particular example of an application of the arrangement of Fig. 4 is illustrated where the resources 219 are tasks for execution by a multi-tasking computer system. In this instance the resource pool 220 comprises tasks which are dormant in the system. When dormant tasks become active the governor 221 removes them from the pool 220 and puts them into the next available input buffer 217a, b ....
The resource consumers 215a, b ... remove tasks from their respective input buffers 217a, b ..., execute them and then place them into output buffers 218a, b ... following which the governor 221 removes these tasks from the output buffers and reschedules them.
This particular embodiment exhibits the property that the resource scheduling overhead varies linearly with the number of consumers 215a, b ... available to do work and not with the total amount of work to be done.
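The governor cycle described above can be sketched as follows. Because the governor is the only entity that moves tasks between the pool and the consumers' buffers, each pass costs time proportional to the number of consumers rather than the amount of outstanding work. The names Consumer and governor_cycle are illustrative assumptions, not part of the specification, and the "work" performed is trivial.

```python
from collections import deque

class Consumer:
    """Worker with private in/out buffers; it only ever touches these two."""
    def __init__(self, capacity=4):
        self.in_buf = deque(maxlen=capacity)
        self.out_buf = deque()

    def work(self):
        # take one task from the input buffer, "execute" it, and place it
        # in the output buffer for the governor to collect
        if self.in_buf:
            self.out_buf.append(self.in_buf.popleft())

def governor_cycle(pool, consumers):
    """One pass of the governor: the sole mover of tasks between the pool
    and the consumers' buffers, so no locking is required. Cost per pass
    is proportional to the number of consumers, not to pool size."""
    for consumer in consumers:
        while consumer.out_buf:                # collect executed tasks
            pool.append(consumer.out_buf.popleft())
        while pool and len(consumer.in_buf) < consumer.in_buf.maxlen:
            consumer.in_buf.append(pool.popleft())   # allocate new work
```

Each consumer simply alternates between work() on its own buffers while the governor, on its own schedule, recycles completed tasks back through the pool.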
Figs. 6A to 6H provide a particular example of the arrangement of Fig. 4 and, more particularly, the arrangement of Fig. 5 where the resources comprise tasks in the form of
additions of integers which need to be performed by consumers or workers which, in this case, are simple adder devices.
Again, like components are numbered as for Fig. 4 and Fig. 5. In this instance the resource pool 220 itself comprises a structure of the type described with reference to Fig. 3B where the consumer takes the form of an addition requestor 222 having an input ring buffer 223 into which completed additions are placed and an output ring buffer 224 into which addition operations 225 are placed for working on by consumers, in this instance in the form of adders 215a, 215b.
Governor 221 accepts addition operation 225 (the addition of integers 1 and 2) (Fig. 6B). The governor 221 then puts the addition operation 225 into the input ring buffer 217a of first adder 215a (Fig. 6C).

First adder 215a then takes the addition operation 225 from input ring buffer 217a and performs the addition (to produce the result integer 3) as shown in Fig. 6D.

Having completed the addition operation ("the work") first adder 215a then inserts the (completed) addition operation 225 into output ring buffer 218a as shown in Fig. 6E.
Governor 221 then takes the (completed) addition operation 225 from the output ring buffer 218a and puts it
into the input ring buffer 223 of (addition) resource pool 220 as shown in Fig. 6F.
Figs. 6G and 6H show the application of the basic sequence just described with reference to Figs. 6A through to 6F where there are multiple addition operations 225a, b, c, d ... p for execution by one or other of first adder 215a or second adder 215b.
The arrows in Fig. 6H indicate the direction of insertion into respective input buffers and removal from the respective output buffers.
It will be appreciated that this basic arrangement can be scaled to handle a significantly larger number of addition operations by adjusting the size (length) of the various buffers with reference to the rate at which the adders can perform addition operations and the governor can put and get addition operations.
In all instances it will be observed that all additions are allocated by the special purpose governor 221. It will also be observed that, in all instances, each adder is not concerned with resource allocation or resource conflict resolution. By virtue of the buffering arrangement illustrated each adder is devoted to performing sequentially the work which it is defined to do, in this instance
addition, so long as addition operations are available for removal from an input buffer of that adder.
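The put/get cycle of Figs. 6A through 6F can be sketched as a small single-threaded simulation. The class names, buffer size and driving loop below are purely illustrative and not taken from the specification; a real arrangement would run the governor and each adder on separate threads:

```python
from collections import deque

class RingBuffer:
    """Bounded FIFO standing in for the ring buffers of Figs. 6A-6F."""
    def __init__(self, size):
        self.size, self.items = size, deque()

    def put(self, item):
        if len(self.items) >= self.size:
            raise OverflowError("ring buffer full")
        self.items.append(item)

    def get(self):
        return self.items.popleft() if self.items else None

class Adder:
    """A consumer devoted solely to addition; it never allocates work itself."""
    def __init__(self, size=4):
        self.in_ring, self.out_ring = RingBuffer(size), RingBuffer(size)

    def run_once(self):
        op = self.in_ring.get()
        if op is not None:
            self.out_ring.put(op[0] + op[1])

def governor_cycle(pool, adder, results):
    """Only the governor moves work between the pool and the adder's rings."""
    if pool:
        adder.in_ring.put(pool.pop(0))
    adder.run_once()
    done = adder.out_ring.get()
    if done is not None:
        results.append(done)

pool, results, adder = [(1, 2), (3, 4)], [], Adder()
for _ in range(4):
    governor_cycle(pool, adder, results)
```

Because the adder only ever touches its own two rings, it performs its work sequentially with no resource-allocation or conflict-resolution logic of its own, exactly as observed above.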
With reference to Fig. 7 detail of a further preferred embodiment of the resource consumer structure of the present invention is described in conjunction with the use of a resource governor as described above.
In this embodiment, with reference to Fig. 7, the resource consumer structure 301 comprises a first resource consumer 302 and a second resource consumer 303. The resource consumer structure 301 receives resources for consumption from governor 304 via non-locking input buffer 305.
In this instance the first resource consumer 302 performs the function of an adder as was the case for the description with reference to figures 6. In this instance the adder includes the additional functionality that it can pass tasks either to output buffer 306 for return to governor
304 or it can pass tasks to second non-locking input buffer
307 of second resource consumer 303. In this instance second resource consumer 303 functions as a multiplier. Tasks passed to multiplier 303 can be returned, after execution, via second output buffer 308 to first resource consumer 302 as illustrated by the arrowed paths in Fig. 7.
Adder 302 takes the form of a thread executing task 309 which comprises a flow of operations, in this instance an addition operation 310 and a multiplication operation 311 as received from governor 304. Adder 302 will parse task 309 resulting in it making a function call to a multiply driver function 311a. Multiply function 311a returns bounce flag 312 to adder 302 which causes adder 302 to place task 309 in second input buffer 307 from which it is taken by multiplier 303. Multiplier 303 then also calls multiply function 311a which performs the multiplication and returns a result (7*8=56) to multiplier 303 which places the result (56) in second output buffer 308 for return to adder 302. Adder 302 now calls addition function driver 310a which returns the addition result (6+56=62). Adder 302 places the result (62) in output buffer 306 where it is returned as the completed result of task 309 to governor 304.
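The worked example just described (6 + 7*8 = 62) can be reduced to a few lines. The `BOUNCE` sentinel and the function names below are illustrative stand-ins for bounce flag 312 and driver functions 310a/311a, not actual identifiers from the specification:

```python
BOUNCE = object()  # illustrative stand-in for bounce flag 312

def multiply_driver(task, consumer):
    # On the adder the driver declines the work and asks to be bounced.
    if consumer != "multiplier":
        return BOUNCE
    task["product"] = task["m1"] * task["m2"]
    return task["product"]

def add_driver(task):
    return task["a"] + task["product"]

def run_task(task):
    # The adder parses the task and calls the multiply driver first.
    if multiply_driver(task, "adder") is BOUNCE:
        # The task goes to the multiplier's input buffer; the multiplier
        # recalls the same driver, which now performs the multiplication.
        multiply_driver(task, "multiplier")
    # The result returns to the adder, which completes the addition and
    # would place the total in its output buffer for the governor.
    return add_driver(task)

result = run_task({"a": 6, "m1": 7, "m2": 8})   # 6 + 7*8
```

The key design point is that the same driver function is called twice: once on the wrong consumer, where it merely requests the bounce, and once on the right one, where it does the work.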
With reference to Fig. 8 a further embodiment of the resource consumer structure of the present invention will be described. The overall structure follows more closely the arrangement of Fig. 2B when orchestrated, in this instance, by a governor of the type described with reference to Figs. 4 to 6.
In this embodiment the resource consumer structure 401 comprises a first resource consumer 402 and a second resource consumer 403. The first resource consumer 402 in the form of an execution thread receives task 404 from governor 405 via non-locking first input buffer 406.
The task 404 comprises a calculation part 407 and a send part 408 where the intention is that the send part 408 will send the result of calculation 407 via a communication link to a remote computer system 409. Of necessity the send operation will be asynchronous in nature.
Task 404 is initially executed by execution thread 402 by the calling of calculation driver functions 407a also from within virtual machine layer 410. Once the calculations are complete the execution thread of first resource consumer 402 calls send function driver 408a. The send driver 408a returns a bounce flag 411 to execution thread 402 which causes execution thread 402 to pass task 404 via second input buffer 412 to the second resource consumer 403 in the form of asynchronous thread 403. Asynchronous thread 403 then recalls the send driver 408a which attends to the send operation to remote computer system 409. On completion of the send operation the asynchronous thread 403 passes the fully executed task 404 via second output buffer 413 to governor 405.
In this instance the send driver function 408a is located at service layer 126 of system 16, this driver being platform dependent. As will be described in Annexure A, the ability of at least some resource consumers to make use of driver functions, at least some of which can be device dependent, allows the resource consumers to make optimum use of all features of any given hardware platform and operating system platform, including those features of any given platform or operating system which may be more efficient than others at particular tasks.
In addition the ability of the resource consumer structure to take on tasks which require execution of a broader range of operations can act to reduce the load placed on the governor structure. In particular the use of a plurality of intercommunicating and multifunction resource consumers in a resource consumer structure allows efficiency of operation of at least the first resource consumer of such resource consumer structures to be maintained.
Annexure A provides a very specific example of the processing of a predetermined task in the context of a socket for an internet application implemented on a platform which facilitates the processing of asynchronous tasks.
The full text and drawings of Annexure A form part of this specification.
In particular, Sections 10, 10.1, 10.2 and 10.3 describe a possible solution to the perceived problems described in the introduction to this specification which involves the use of a dedicated asynchronous layer within a computer system. The problems with this approach, to be contrasted with embodiments of the present invention, are stated, and a more preferred arrangement in accordance with embodiments of the present invention is then summarised at Section 10.4 wherein, in a particularly preferred embodiment, at least some driver functions callable by at least the first resource consumer of the resource consumer structure of the present invention include one or more of the following features:
1. Have platform specific code;
2. Are recallable;
3. Record a context or state across invocations;
4. Detect whether they are running on a synchronous or asynchronous thread;
5. Request the message bus bounce them to the asynchronous thread and recall the driver.

This particularly preferred form is most directly relevant to the embodiment previously described with reference to Fig. 8 and is illustrated in Fig. 9 of the present application.
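A minimal sketch of a driver exhibiting features 2 to 5 — recallable, context-carrying, thread-aware and bounce-requesting — might look as follows. The return strings and field names are illustrative only, not identifiers from the specification:

```python
def send_driver(task, thread_kind):
    """A recallable driver keeping state in a context block across calls."""
    dcb = task.setdefault("dcb", {"state": "initial"})   # feature 3
    if thread_kind == "synchronous":
        return "BOUNCE"        # features 4 and 5: detect the thread, ask to move
    if dcb["state"] == "initial":
        dcb["state"] = "sent"  # context recorded for the next invocation
        return "AWAIT"         # the asynchronous request has been dispatched
    return "COMPLETED"         # a later re-call finds the send finished

task = {}
replies = [send_driver(task, "synchronous"),   # first call, on the user thread
           send_driver(task, "asynchronous"),  # recalled on the async thread
           send_driver(task, "asynchronous")]  # recalled after the callback
```

The same function body serves every invocation; only the recorded context and the kind of calling thread determine which branch runs.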
Section 11 (the APU model) of Annexure A then provides a particularly detailed description of an embodiment of the present invention. Section 12 (two further refinements) provides a description of additional particularly preferred features of particular embodiments, whilst Section 13 (Analysing Sockets III) seeks to summarise characteristics of the particular embodiment described with reference to Section 11 of Annexure A.

The above describes only some embodiments of the present invention and modifications, obvious to those skilled in the art, can be made thereto without departing from the scope and spirit of the present invention.
ANNEXURE A
An example of the processing of a predetermined task in the context of a socket for an internet application implemented on platforms which facilitate the processing of asynchronous events.
10 Using The Asynchronous Layer
In the following diagram the section to the left of the dash-dot line is a plan view of the thread/task theme within the virtual machine. The section to the right of the dash-dot line is a cross-section view of the kernel showing its various layers. We would use ring buffers in this model to conform to the established governor-scheduling model.
The Asynchronous Layer Approach —
Generally speaking the asynchronous layer would work in the following manner:
1. A user thread calls a driver which issues a request to the asynchronous layer;
2. The calling user thread deposits its request context into an asynchronous request ring (a-request);
3. The calling user thread returns (usually) indicating the associated user task should go into an asynchronous wait state;
4. An asynchronous thread inspects the a-request ring and removes the request from the ring;
5. The asynchronous thread processes the asynchronous request (potentially iteratively);
6. After servicing the request the asynchronous thread deposits the user task into its asynchronous response ring;
7. Eventually the governor removes the user task out of the a-response ring;
8. Recognizing the asynchronous response the governor moves the user task to the sliced queue;
9. Processing proceeds as per normal.
Note that a callback is required from the asynchronous layer to the virtual machine layer so as to deposit the user task into a ring that is visible to the governor.
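The nine steps above can be simulated single-threadedly as three cooperating routines. The ring names and the `state` field are illustrative; real rings would be bounded and driven from separate threads, with the callback just noted bridging the asynchronous layer back to the governor-visible ring:

```python
from collections import deque

a_request, a_response, sliced = deque(), deque(), deque()

def user_thread_call(task):
    # Steps 1-3: the driver deposits the request context and the task waits.
    task["state"] = "async-wait"
    a_request.append(task)

def async_thread_cycle():
    # Steps 4-6: remove the request, service it, deposit the response.
    if a_request:
        task = a_request.popleft()
        task["state"] = "serviced"
        a_response.append(task)

def governor_cycle():
    # Steps 7-8: recognise the response and move the task to the sliced queue.
    if a_response:
        sliced.append(a_response.popleft())

task = {"op": "read"}
user_thread_call(task)
async_thread_cycle()
governor_cycle()
```

At the end of one pass the task has crossed from the user thread, through the asynchronous layer, and back into a queue the governor schedules from.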
10.1 Black Box Issues

The most obvious problem with this particular approach was that it would duplicate a lot of structures already defined higher up in the kernel layers. We could alleviate this by shifting some of the structures (such as ring buffers) down into a new layer below the asynchronous layer. However, since these are objects, we would also have to move the objects layer down below it as well. Therefore the revised architecture could be:
Possible Architecture 2
Shell
Virtual Machine
Compiler
Knowledge
Services
Asynchronous
Structures
Objects
Platform
Primitives
The most significant concern, however, is that thread management would be shared/duplicated between the virtual machine and the asynchronous layer. There is a coordination issue between the virtual machine and the asynchronous layer as to how many a-response rings there are. Following our v5 ring model, there would be one a-response ring per asynchronous thread. Since the asynchronous layer has the right to create its own threads, it has to communicate this information upward to the governor. This black box was clearly getting messy.
10.2 A Haunting Similarity
Because we followed the same pattern of having in and out ring buffers for the asynchronous threads, the asynchronous threads look like user threads - just deeper down in the kernel layers. The difference is that user threads handle synchronous user activity while asynchronous threads handle asynchronous activity.

At which point we literally compared this to conscious versus unconscious aspects of the human mind and redrew the diagram. In the following diagram the section above the dash-dot line is a plan view of the thread theme layer in the virtual machine. The section below the dash-dot line is a plan view of the thread theme layer in the asynchronous layer. The entire diagram is a plan view.
[Diagram: the governor with a row of u-threads above the dash-dot line and a row of a-threads below it]
It may help to imagine looking down into a fish tank. The "user thread fishes" are higher up in the tank, the "asynchronous thread fishes" are deeper down. Either way they are both fishes.
10.3 A Shift in Perspective
Armed with this perspective we saw there is an abstraction which potentially permits a solution in the virtual machine without platform specific protrusions.
The combination of the two kinds of threads forms a processing unit - the Ant Processing Unit perhaps? An APU would consist of a synchronous thread (executing a task's synchronous activity) and an asynchronous thread (executing a task's asynchronous activity).
With the appropriate set of ring buffers an APU has the capacity to handle synchronous and asynchronous activities with excellent throughput characteristics.
The difficulty with this model is how to permit services to be synchronous on one platform and asynchronous on another without complicating driver definitions.
10.4 The Dolce Twist
The solution to platform portability is to allow at least some drivers to:
1. Have platform specific code;
2. Be recallable;
3. Record a context or state across invocations (held in an asynchronous context block);
4. Detect whether they are running on a synchronous or asynchronous thread;
5. Request the message bus bounce them to the asynchronous thread and recall it.
Some of the drivers are then deemed synchronous drivers while others are asynchronous drivers.

We then end up with a cross-sectional model that looks like:
[Cross-sectional model diagram]
With this arrangement we have a general purpose solution which attains our goals.
11 The APU Model
11.1 Some Terms
Term        Definition
a-driver    Asynchronous driver
[Remainder of the terms table rendered as an image in the original]
11.2 APUs In Practice
An APU consists of an AUT providing asynchronous driver services to an SUT.
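The hand-off protocol detailed in the numbered steps of this section reduces, at its core, to a small set of bus-request transitions. The happy-path simulation below borrows the SBK_BUS_* names from those steps, but the transition function itself is a deliberate simplification, not the actual message bus:

```python
def message_bus_step(request, task):
    """One simplified message-bus transition for the SUT/AUT hand-off."""
    if request == "SBK_BUS_BOUNCE":        # driver asks to run on an AUT
        task["m_bAsynchronous"] = True
        return "SBK_BUS_APPEND"
    if request == "SBK_BUS_APPEND":        # AUT picks the task up, recalls driver
        return "SBK_BUS_REINVOKE"
    if request == "SBK_BUS_COMPLETED":     # request serviced; return to governor
        task["m_bAsynchronous"] = False
        return "SBK_BUS_ATTACH"
    return request

task = {}
first = message_bus_step("SBK_BUS_BOUNCE", task)     # SUT side of the bounce
second = message_bus_step(first, task)               # AUT side of the bounce
went_async = task["m_bAsynchronous"]
final = message_bus_step("SBK_BUS_COMPLETED", task)  # after the OS callback
```

Tracing the calls shows the task marked asynchronous while it is bounced, then attached back to the governor once the request completes.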
Generally speaking the APUs work in the following manner:

1. A SUT executing a u-task invokes an a-driver via the message bus;
2. The a-driver detects it has been called by a SUT;
3. The a-driver returns to the message bus with a bus request of SBK_BUS_BOUNCE to indicate it needs to be running on an AUT;
4. The message bus detects the thread bounce request;
5. The bus request is changed to SBK_BUS_APPEND (which will be detected by the AUT's message bus);
6. The u-task member m_bAsynchronous is set to true;
7. If there are user tasks in the SUT's a-request overflow queue the current user task is appended to the end of that queue;
8. The SUT takes user tasks from the head of the overflow queue and appends them into the associated AUT a-ring while there are tasks in the overflow queue and space in the in-ring;
9. If there are no user tasks in the SUT's a-request overflow queue and there is room in the associated AUT's a-ring the SUT appends the user task onto the a-ring of that AUT;
10. If there are no user tasks in the SUT's a-request overflow queue but there is no room in the associated AUT's a-ring the SUT appends the user task onto its a-request overflow queue;
11. The SUT goes back to its thread loop and inspects its in-ring;
12. Eventually the AUT inspects its a-ring and removes the u-task from the ring;
13. If the u-task is marked as terminating the a-thread puts the task onto its r-ring;
14. Otherwise the AUT then executes the task starting with the SBK_BUS_APPEND request;
15. The AUT message bus appends the u-task to the AUT's asynchronous request queue;
16. The bus request is then set to SBK_BUS_REINVOKE;
17. A SBK_BUS_REINVOKE request causes the message bus to recall the a-driver;
18. The a-driver detects it has been invoked by an AUT and begins executing the asynchronous portion of its code;
19. The a-driver detects its DCB is uninitialized, so it initializes it;
20. The a-driver then issues an asynchronous request to the operating system;
21. The a-driver updates the state in its DCB;
22. (*) The a-driver returns to the message bus with an SBK_BUS_AWAIT request and then returns back to the a-thread loop, discarding the u-task;
23. The a-thread loop then tests its in-ring again (there may need to be a cycle limit on this);
24. If there are no further tasks in the in-ring, the AUT goes into an alertable sleep state;
25. Eventually the AUT is interrupted by a callback from the operating system;
26. The AUT determines the task from the context supplied by the callback;
27. The AUT calls the message bus on behalf of that user task which in turn calls the a-driver;
28. The a-driver services the callback;
29. If the logical asynchronous request has now been serviced the a-driver returns back to the message bus with a return value of SBK_BUS_COMPLETED;
30. Detecting SBK_BUS_COMPLETED the message bus performs the logical equivalent of an SBK_BUS_RETURN (which pops the task's stack frame), resets the bus request to SBK_BUS_ATTACH, sets m_bAsynchronous to false and puts the task onto the AUT's r-ring;
31. The governor takes the task out of the AUT's r-ring and processing proceeds as per normal.
32. If the logical asynchronous request had not been completed, the a-driver dispatches a further asynchronous request to the OS, updates the state in its DCB and processing repeats from (*).

11.3 Canceling Requests
1. Another user task u-task2 terminates u-task1;
2. The governor puts u-task1 into the c-ring of the a-thread processing it;
3. On each cycle of inspecting the a-ring the a-thread also inspects its c-ring;
4. If the c-ring has a u-task in it and the a-thread is still processing that u-task (it is in its list) the a-thread cancels the overlapped I/O request;
5. The u-task is removed from the a-thread's a-queue and put into the a-thread's r-ring marked with request cancelled;
6. Termination processing proceeds as per normal.
11.4 Using Overlapped I/O on Windows NT

This section describes the specific case of implementing asynchronous sockets using overlapped I/O on Windows NT.

1. The calling SUT appends the u-task onto its AUT's a-ring;
2. Inspecting its a-ring the AUT removes the u-task;
3. The u-task is appended to the list of requests the a-thread is managing;
4. The a-thread dispatches the appropriate overlapped I/O request (such as ReadFileEx, WriteFileEx);
5. The a-thread keeps inspecting its a-ring while it finds a-requests in it [[check this - we may need to set a loop limit]];
6. Upon detecting its a-ring being empty the a-thread goes to an alertable sleep state by calling SleepEx;
7. The a-thread should probably sleep for the same duration as the u-thread does when it detects no work to do (SETTING_THREAD_WAIT_DURATION - let's rename this to just SETTING_THREAD_WAIT);
8. (*) At some point the a-thread is alerted/interrupted via an I/O completion callback;
9. The a-thread processes the completion callback;
10. If the a-request is now logically completed the a-thread removes the u-task out of its a-list;
11. The u-task is then appended to the a-thread's o-ring and the a-thread returns back out of its SleepEx call;
12. The governor eventually picks the u-task up;
13. If the a-request is not logically completed the a-thread issues a subsequent dispatch and returns back out of its SleepEx call;
14. Processing repeats from (*).
11.5 Using the Berkeley Model

Supporting the Berkeley Model under Sockets III basically remains the same as it was done in Sockets II except that the test for data is done in the driver:

1. The particular driver requires a select() to be performed on a socket;
2. The driver records its state in its DCB and returns with a SBK_BUS_BOUNCE request;
3. The calling SUT appends the u-task onto its AUT's a-ring;
4. (*) Inspecting its a-ring the AUT removes the u-task;
5. The u-task is appended to the list of requests the a-thread is managing;
6. The a-thread inspects its a-request list and builds its FD_SETs from it;
7. If there are sockets in the FD_SET the a-thread dispatches the select() call for a maximum of SETTING_THREAD_WAIT, otherwise the a-thread sleeps for SETTING_THREAD_WAIT and repeats from (*);
8. The select() call returns;
9. The a-thread examines which sockets, if any, may proceed;
10. If there are any sockets which may proceed the a-thread searches for the associated task in its a-request list;
11. The associated u-task has its bus request set to SBK_BUS_ATTACH and is then appended to the a-thread's o-ring;
12. The a-thread repeats from (*);
13. The governor eventually picks the u-task out of the a-thread's o-ring and processing continues as per normal.
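The timeout-then-ready behaviour of the select() loop in steps 6 to 11 can be demonstrated with Python's `select` module on an ordinary pipe. The pipe is a POSIX-only stand-in for the socket FD_SETs, and the wait value is illustrative:

```python
import os
import select

SETTING_THREAD_WAIT = 0.1  # stand-in for the a-thread's wait setting, in seconds

r_fd, w_fd = os.pipe()

# With nothing to read, select() times out: no descriptor may proceed,
# so the a-thread would sleep and repeat from (*).
ready, _, _ = select.select([r_fd], [], [], SETTING_THREAD_WAIT)
nothing_ready = (ready == [])

# Once the peer writes, the same select() reports the descriptor readable,
# and the a-thread would append the associated u-task to its o-ring.
os.write(w_fd, b"x")
ready, _, _ = select.select([r_fd], [], [], SETTING_THREAD_WAIT)
became_ready = (r_fd in ready)

os.close(r_fd)
os.close(w_fd)
```

Note that on Windows, select() only accepts sockets, which is precisely why the driver-level test above is platform dependent.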
12 Two Further Refinements
12.1 Thread Affinity

There may be times when a user task genuinely requires a dedicated thread. An example of this in sockets is the task that is accepting new connection requests.

To cater for this requirement the task spawning model will be overloaded to include an additional parameter indicating whether the task requires a dedicated thread or not (a.k.a. strong or weak thread affinity).
We will need to make it clear to application developers that they need to make sparing use of this facility - creating an excessive number of strong affinity tasks could interfere with the zero friction model.
12.2 Socket Buffering
As a performance improvement a buffer will be associated with each socket to assist in implementing those calls (such as reading varying-length strings) which can only be implemented as byte fetch loops.
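One way such a per-socket buffer might work is sketched below. The class name, block size and the NUL-terminated-string convention are illustrative assumptions, not taken from the specification:

```python
class SocketBuffer:
    """Per-socket buffer: one bulk fetch serves many single-byte reads."""
    def __init__(self, recv, block=512):
        self.recv, self.block, self.buf = recv, block, b""

    def get_byte(self):
        if not self.buf:
            self.buf = self.recv(self.block)   # one scheduling round trip
        b, self.buf = self.buf[:1], self.buf[1:]
        return b

    def read_string(self):
        # The byte fetch loop for a varying-length, NUL-terminated string.
        out = b""
        while (b := self.get_byte()) != b"\x00":
            out += b
        return out.decode()

fetches = []
def fake_recv(n):
    # Stand-in for a socket recv; records how often the "socket" is touched.
    fetches.append(n)
    return b"hello\x00"

text = SocketBuffer(fake_recv).read_string()
```

Reading the six bytes of "hello\0" touches the underlying transport only once instead of six times.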
Without this buffer byte fetch loops introduce significant scheduling overheads.

13 Analyzing Sockets III
Let us conclude by reviewing the Sockets III model [= APU Model]:
1. It directly extends the capability of the virtual machine by defining a processing unit which permits both synchronous and asynchronous execution;
2. It is a general purpose model which is not specific to sockets;
3. It is not a black box solution and so removes the unwanted duplications;
4. It supports polling and event models;
5. It is highly portable;
6. It can make explicit use of specific operating system features;
7. It is relatively simple to implement;
8. The design is a very clean separation of concerns;
9. It can probably be patented.
Sockets III should exhibit a marked improvement in performance and stability over Sockets II.
INDUSTRIAL APPLICABILITY
Embodiments of the present invention are applicable to use in an operating system environment and can also be applied in a virtual machine environment .
CLAIMS

1. A resource consumer structure for a computer system, said structure including a first resource consumer adapted to consume resources allocated to it by a resource governor; at least a second resource consumer adapted to receive predetermined resources from said first resource consumer for specialised processing.

2. The structure of Claim 1 wherein said at least second resource consumer is adapted to return said predetermined resources to said first resource consumer.

3. The apparatus of Claim 1 wherein said at least second resource consumer is adapted to return said predetermined resources to said resource governor.

4. The apparatus of Claim 1 wherein said predetermined resources comprise tasks which perform asynchronous operations.

5. The apparatus of Claim 1 wherein said first resource consumer receives resources for consumption from a non-locking input buffer.

6. The apparatus of Claim 5 wherein said non-locking input buffer comprises a ring buffer in which resources for consumption are input via a first in first out queue.
7. The apparatus of Claim 6 wherein a thread inserting resources for consumption into said ring buffer manages said queue.
8. The apparatus of any one of Claims 1 to 7 wherein said resource consumer comprises a thread which executes a task; said task comprising a flow of operations; said thread executing said task by executing said operations.
9. The apparatus of Claim 8 wherein each said operation results in a driver function being executed by said computer system.
10. The apparatus of Claim 9 wherein said driver functions include bounce flag communication means wherein said driver function can cause said thread to communicate said task to a different thread.

11. The apparatus of Claim 5 wherein said different thread comprises said second resource consumer.
12. The structure of Claim 1 wherein said resources comprise a first task comprising a flow of operations which are executed by said first resource consumer comprising a first thread.
13. The structure of Claim 12 wherein each operation of said flow of operations causes execution of a driver function which is specific to the operation.
14. The structure of Claim 13 wherein said driver includes bounce flag communication means for communication of a change designated thread flag which causes said first thread to pass said first task to said second resource consumer; said second resource consumer comprising a second thread.
15. The structure of Claim 14 wherein said driver function is specific to said computer system.
16. The structure of Claim 15 wherein said computer system is adapted to execute instruction code; said instruction code arranged as a plurality of intercommunicating code layers within said computer system.
17. The structure of Claim 16 wherein said code layers include a platform independent layer.

18. The structure of Claim 17 wherein said resource consumer structure is implemented in said platform independent layer.
19. The structure of any one of Claims 13 to 18 wherein said resource consumer structure is arranged so that said driver function is implemented in a separate one of said layers from said resource consumer structure.
20. The structure of Claim 1 wherein said resources include tasks, unused objects, a unit of network bandwidth or a designated unit of an input/output operation.
21. The structure of Claim 1 or Claim 20 wherein each said resource consumer structure includes an input buffer.
22. The structure of Claim 1 or Claim 20 or Claim 21 wherein each said resource consumer includes an output buffer.

23. The structure of Claim 21 wherein said input buffer is a non-locking buffer.
24. The structure of Claim 22 wherein said output buffer is a non-locking buffer.
25. The structure of Claim 23 wherein said input buffer is a ring buffer.
26. The structure of Claim 23 wherein said input buffer is a ring buffer associated with an overflow queue; said queue adapted to handle overflow of resources for consumption placed in said ring buffer.

27. The structure of Claim 26 wherein a thread inserting a resource for consumption into said ring buffer manages said queue.

28. The structure of Claim 24 wherein said output buffer is a ring buffer.

29. The structure of Claim 1 wherein said predetermined resources are so defined by a predetermined operation of the flow of operations comprising a task.

30. The structure of Claim 29 wherein said predetermined operation is an asynchronous operation.
31. A method of allocation of usable resources to resource consumers in a computer system, said computer system having a plurality of usable resources available for work or to be worked on and one or more resource consumer structures capable of consuming said usable resources; said method comprising:
(a) instituting a special purpose task as a resource governor;
(b) said resource governor distributing said usable resources to said one or more resource consumer structures;
(c) said resource governor receiving used resources from said one or more resource consumer structures;
(d) each resource consumer structure of said one or more resource consumer structures comprising a first resource consumer and an at least second resource consumer; said first resource consumer adapted for communication with said second resource consumer.

32. The method of Claim 31 wherein said usable resource is a first task comprising a flow of operations which are executed by said first resource consumer comprising a first thread.
33. The method of Claim 32 wherein each operation of said flow of operations causes execution of a driver function which is specific to the operation.
34. The method of Claim 33 wherein said driver includes bounce flag communication means for communication of a change designated thread flag which causes said first thread to pass said first task to said second resource consumer; said second resource consumer comprising a second thread.

35. The system of Claim 34 wherein said driver function is specific to said computer system.
36. The method of Claim 35 wherein said computer system is adapted to execute instruction code; said instruction code arranged as a plurality of intercommunicating code layers within said computer system.
37. The method of Claim 36 wherein said code layers include a platform independent layer.

38. The method of Claim 37 wherein said resource consumer structure is implemented in said platform independent layer.
39. The method of any one of Claims 33 to 38 wherein said resource consumer structure is arranged so that said driver function is implemented in a separate one of said layers from said resource consumer structure.
40. The method of Claim 31 wherein said usable resources include tasks, unused objects, a unit of network bandwidth or a designated unit of an input/output operation.

41. The method of Claim 31 or Claim 32 wherein each said resource consumer structure includes an input buffer.
42. The method of Claim 31 or Claim 32 or Claim 33 wherein each said resource consumer includes an output buffer.
43. The method of Claim 33 wherein said input buffer is a non-locking buffer.
44. The method of Claim 34 wherein said output buffer is a non-locking buffer.
45. The method of Claim 35 wherein said input buffer is a ring buffer.

46. The method of Claim 43 wherein said input buffer is a ring buffer associated with an overflow queue; said queue adapted to handle overflow of resources for consumption placed in said ring buffer.
47. The method of Claim 46 wherein a thread inserting a resource for consumption into said ring buffer manages said queue.
48. The method of Claim 44 wherein said output buffer includes a ring buffer.
49. A resource consumer structure suitable for use in conjunction with a resource governor; said structure comprising a first resource consumer having a first input buffer and from which said resource consumer takes usable resources for use; said structure further including at least a second resource consumer which receives resources for consumption from a second input buffer; said second input buffer receiving resources for consumption from said first resource consumer.

50. The structure of Claim 49 wherein said second resource consumer accepts resources for use of a predefined type from said first resource consumer.
51. The structure of Claim 49 wherein said first resource consumer obtains said usable resources for use only from said at least one input buffer.
52. The apparatus of Claim 50 or Claim 51 wherein said first input buffer receives resources of only said predefined type.
53. The structure of any one of Claims 49 to 52 wherein said resource consumer experiences no contention for execution of said usable resources for use.
54. The structure of any one of Claims 49 to 53; said structure further including an output buffer into which
said resource consumer structure places said usable resources following execution.

55. The structure of any one of Claims 49 to 54 wherein said first input buffer is a non-locking buffer.

56. The structure of Claim 54 wherein said output buffer is a non-locking buffer.
57. The structure of Claim 55 wherein said first input buffer includes a ring buffer.
58. The structure of Claim 56 wherein said output buffer includes a ring buffer.
59. A driver function for use by a resource consumer structure of a computer system; said resource consumer structure including a first resource consumer and a second resource consumer; said first resource consumer adapted to transfer predetermined tasks to said second resource consumer; said driver function incorporating bounce flag communication means whereby said driver function when invoked by said first resource consumer can communicate to said first resource consumer a bounce flag whereby said first resource consumer is caused to communicate said predetermined task to said second resource consumer for further execution of said task.
60. The driver function of Claim 59 wherein said predetermined tasks are so defined by a predetermined operation of the flow of operations comprising a task.

61. The driver function of Claim 60 wherein said predetermined operation is an asynchronous operation.

62. A resource consumer structure for a computer system; said structure comprising a first resource consumer and at least a second resource consumer; said first resource consumer in communication with said second resource consumer by way of a non-locking input buffer.

63. A non-locking buffer structure comprising a ring buffer and a queue.
64. The buffer structure of Claim 63 wherein said queue is a first in first out queue.
65. The buffer structure of Claim 63 or 64 wherein said queue is adapted to handle overflow of resources for consumption placed in said ring buffer.
66. The buffer structure of Claim 65 wherein a thread inserting a resource for consumption into said ring buffer manages said queue.

67. A multi-function resource consumer structure comprising a first special purpose resource consumer and at least a second special purpose resource consumer; said first special purpose resource consumer adapted to pass resources to said second resource consumer for specialised processing.
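The two-stage consumer arrangement of Claims 49 to 52 can be illustrated with a minimal sketch. This is not the patent's implementation; the class name `ResourceConsumer`, the use of `queue.Queue` as the input buffer, and the choice of `str` as the "predefined type" are all illustrative assumptions. It shows a first consumer that drains its own input buffer and forwards resources of a predefined type into a second consumer's input buffer, consuming everything else itself.

```python
# Illustrative sketch only; names and structure are assumptions,
# not the patent's implementation.
from queue import Queue

class ResourceConsumer:
    """A consumer with its own input buffer; resources of a predefined
    type are forwarded to a downstream consumer's input buffer."""
    def __init__(self, forward_type=None, downstream=None):
        self.input_buffer = Queue()
        self.forward_type = forward_type
        self.downstream = downstream
        self.consumed = []

    def step(self):
        """Take one resource from the input buffer: forward it if it is
        of the predefined type (cf. Claim 50), otherwise consume it."""
        resource = self.input_buffer.get()
        if self.downstream and isinstance(resource, self.forward_type):
            self.downstream.input_buffer.put(resource)
        else:
            self.consumed.append(resource)

second = ResourceConsumer()
first = ResourceConsumer(forward_type=str, downstream=second)

for r in [1, "io-task", 2]:
    first.input_buffer.put(r)
for _ in range(3):
    first.step()

print(first.consumed)             # [1, 2]
print(second.input_buffer.get())  # io-task
```

Because each consumer takes resources only from its own input buffer (Claim 51), the consumers never compete for the same resource, which is the no-contention property of Claim 53.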
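The bounce-flag mechanism of Claims 59 to 61 can likewise be sketched. The names `driver`, `BOUNCE`, and `first_consumer`, and the use of a dictionary task with an `"async"` key, are hypothetical; the sketch only shows the shape of the idea: the driver, when invoked by the first consumer, can return a bounce flag, causing the first consumer to hand the task to the second consumer's buffer instead of completing it itself.

```python
# Hypothetical sketch of the bounce-flag hand-off; all names here are
# illustrative, not the patent's API.
from queue import Queue

BOUNCE = object()  # sentinel the driver returns to request a hand-off

def driver(task):
    """Execute a task step; return BOUNCE for operations (e.g.
    asynchronous ones, per Claim 61) a second consumer should run."""
    if task.get("async"):
        return BOUNCE
    return task["op"]()

def first_consumer(task, second_buffer):
    result = driver(task)
    if result is BOUNCE:
        # The bounce flag causes the first consumer to communicate the
        # task to the second consumer for further execution.
        second_buffer.put(task)
        return None
    return result

second_buffer = Queue()
print(first_consumer({"async": False, "op": lambda: 42}, second_buffer))  # 42
first_consumer({"async": True, "op": lambda: "slow"}, second_buffer)
print(second_buffer.qsize())  # 1
```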
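The buffer structure of Claims 63 to 66 — a ring buffer backed by a first-in-first-out overflow queue, with the inserting thread managing that queue — can be sketched as follows. This is a single-threaded illustration of the data-structure shape only (the class name and method names are assumptions); it does not demonstrate the non-locking concurrency properties the claims are directed to.

```python
# Minimal sketch of a ring buffer with a FIFO overflow queue
# (cf. Claims 63-66); illustrative only, not a lock-free implementation.
from collections import deque

class RingWithOverflow:
    def __init__(self, capacity):
        self.ring = [None] * capacity
        self.capacity = capacity
        self.head = 0            # next slot to read
        self.tail = 0            # next slot to write
        self.count = 0
        self.overflow = deque()  # FIFO queue for overflow (Claims 64-65)

    def put(self, item):
        """Called by the inserting thread, which also manages the
        overflow queue (Claim 66)."""
        self._drain_overflow()
        if self.count == self.capacity:
            self.overflow.append(item)   # ring full: queue the overflow
        else:
            self.ring[self.tail] = item
            self.tail = (self.tail + 1) % self.capacity
            self.count += 1

    def get(self):
        if self.count == 0:
            return None
        item = self.ring[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item

    def _drain_overflow(self):
        # Move queued items into the ring, oldest first, as space appears.
        while self.overflow and self.count < self.capacity:
            self.ring[self.tail] = self.overflow.popleft()
            self.tail = (self.tail + 1) % self.capacity
            self.count += 1

buf = RingWithOverflow(2)
for i in range(4):
    buf.put(i)                 # 2 and 3 land in the overflow queue
print([buf.get(), buf.get()])  # [0, 1]
buf.put(99)                    # inserting thread drains overflow first
print(buf.get())               # 2
```

Having the inserting thread alone manage the overflow queue keeps the queue single-writer, which is consistent with the claims' aim of avoiding locks on the buffer.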
PCT/AU2001/000513 2000-05-05 2001-05-04 Resource consumer structure WO2001086514A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU55982/01A AU5598201A (en) 2000-05-05 2001-05-04 Resource consumer structure

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPQ7324A AUPQ732400A0 (en) 2000-05-05 2000-05-05 Resource consumer structure
AUPQ7324 2000-05-05

Publications (1)

Publication Number Publication Date
WO2001086514A1 true WO2001086514A1 (en) 2001-11-15

Family

ID=3821405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2001/000513 WO2001086514A1 (en) 2000-05-05 2001-05-04 Resource consumer structure

Country Status (2)

Country Link
AU (1) AUPQ732400A0 (en)
WO (1) WO2001086514A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5003471A (en) * 1988-09-01 1991-03-26 Gibson Glenn A Windowed programmable data transferring apparatus which uses a selective number of address offset registers and synchronizes memory access to buffer
WO1991020040A1 (en) * 1990-06-11 1991-12-26 Unisys Corporation Remote terminal interface as for a unixtm operating system computer
EP0550196A2 (en) * 1991-12-31 1993-07-07 International Business Machines Corporation Personal computer with generalized data streaming apparatus for multimedia devices
JPH07170267A (en) * 1993-12-14 1995-07-04 Toshiba Corp Traffic shaping system in atm communication
US5771383A (en) * 1994-12-27 1998-06-23 International Business Machines Corp. Shared memory support method and apparatus for a microkernel data processing system
JPH10207639A (en) * 1997-01-28 1998-08-07 Sony Corp High speed data recording/reproducing device and method therefor

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5003471A (en) * 1988-09-01 1991-03-26 Gibson Glenn A Windowed programmable data transferring apparatus which uses a selective number of address offset registers and synchronizes memory access to buffer
WO1991020040A1 (en) * 1990-06-11 1991-12-26 Unisys Corporation Remote terminal interface as for a unixtm operating system computer
EP0550196A2 (en) * 1991-12-31 1993-07-07 International Business Machines Corporation Personal computer with generalized data streaming apparatus for multimedia devices
JPH07170267A (en) * 1993-12-14 1995-07-04 Toshiba Corp Traffic shaping system in atm communication
US5771383A (en) * 1994-12-27 1998-06-23 International Business Machines Corp. Shared memory support method and apparatus for a microkernel data processing system
JPH10207639A (en) * 1997-01-28 1998-08-07 Sony Corp High speed data recording/reproducing device and method therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN *

Also Published As

Publication number Publication date
AUPQ732400A0 (en) 2000-06-01

Similar Documents

Publication Publication Date Title
US5390329A (en) Responding to service requests using minimal system-side context in a multiprocessor environment
US8316376B2 (en) Optimizing workflow execution against a heterogeneous grid computing topology
Kalé et al. Converse: An interoperable framework for parallel programming
US7080379B2 (en) Multiprocessor load balancing system for prioritizing threads and assigning threads into one of a plurality of run queues based on a priority band and a current load of the run queue
EP0473444B1 (en) Scheduling method for multiprocessor operating system
JP2829078B2 (en) Process distribution method
US20090049429A1 (en) Method and System for Tracing Individual Transactions at the Granularity Level of Method Calls Throughout Distributed Heterogeneous Applications Without Source Code Modifications
CN108595282A (en) A kind of implementation method of high concurrent message queue
CN101464810A (en) Service program processing method and server
Yang et al. Rumr: Robust scheduling for divisible workloads
CN108228330B (en) Serialized multiprocess task scheduling method and device
EP1131704B1 (en) Processing system scheduling
Nakajima et al. Experiments with Real-Time Servers in Real-Time Mach.
EP1652086B1 (en) Kernel-level method of flagging problems in applications
US7703103B2 (en) Serving concurrent TCP/IP connections of multiple virtual internet users with a single thread
Wendorf Implementation and evaluation of a time-driven scheduling processor
Horowitz A run-time execution model for referential integrity maintenance
CN101349975B (en) Method for implementing interrupt bottom semi-section mechanism in embedded operation system
WO2000077611A3 (en) Method and system of deploying an application between computers
WO2001086514A1 (en) Resource consumer structure
JP3772713B2 (en) Priority dynamic control method, priority dynamic control method, and program for priority dynamic control
JPH09101902A (en) Job scheduling system
Nogueira et al. On the use of work-stealing strategies in real-time systems
WO1992003794A1 (en) Dual level scheduling of processes
WO1992003783A1 (en) Method of implementing kernel functions

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC (COMMUNICATION DATED 16-05-2003, EPO FORM 1205A)

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP