CA1176382A - Method and system for handling sequential data in a hierarchical store - Google Patents

Method and system for handling sequential data in a hierarchical store

Info

Publication number
CA1176382A
CA1176382A
Authority
CA
Canada
Prior art keywords
data
cache
peripheral
commands
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000406369A
Other languages
French (fr)
Inventor
John H. Christian
Michael H. Hartung
Arthur H. Nolta
David G. Reed
Richard E. Rieck
John S. Williams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Application granted
Publication of CA1176382A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch

Abstract

ABSTRACT OF THE DISCLOSURE

The disclosure relates to promotion of data from a backing store (disk storage apparatus termed DASD) to a random access cache in a storage system such as used for swap and paging data transfers. When a sequential access bit is sent to the storage system, all data specified in a read "paging mode" command is fetched to the cache from DASD. If such prefetched data is replaced from cache and the sequential bit is on, a subsequent host access request for such data causes all related data not yet read to be promoted to cache. For a certain implementation, a maximal amount of related data is promoted; such maximal amount is determined by cache addressing characteristics and DASD access delay boundaries. Without the sequential bit on, only the addressed data block is promoted to cache.

Description


METHOD AND SYSTEM FOR HANDLING SEQUENTIAL DATA IN A HIERARCHICAL STORE

Field of the Invention

The invention relates to peripheral memory systems of the hierarchical type, and particularly to the transfer of data signals from a backing store to a front store of such a memory system.

Discussion of the Prior Art

Peripheral memory systems which are attachable to a host (a central processing unit and the like) serve diverse purposes with respect to the host. Some systems are for storing user data while others are for storing so-called paging and swapping data, such as can be used in connection with paging and swapping program data sets.
An example of a paging store is the IBM 2305 Fixed Head Storage Module, which is described in publication GA26-1589-3, "Reference Manual for IBM 2835 Storage Control and IBM 2305 Fixed Head Storage Module," available from International Business Machines Corporation, Armonk, New York. This peripheral memory system consists of a magnetic storage drum which provides rapid access to the stored data. Because of its limited storage capacity, larger hosts requiring larger capacities cannot always use the IBM 2305 paging store to its maximum efficiency; extra capacity is provided by disk-type direct access storage devices (DASD). Such is particularly true when so-called swapping data sets are used, i.e. large sequential sets of data are rapidly transferred between a host and a peripheral memory. To alleviate storage capacity limitations, while not sacrificing performance criteria, a hierarchical store can be substituted for the 2305 storage drum. An example of such a hierarchical store is shown in U.S.
Patent No. 3,569,938, issued March 9, 1971 to Eden.
This patent teaches the concept of an apparent store of high performance and high capacity through the use of a high speed cache memory operatively coupled to a backing store, such as a DASD or a magnetic tape recorder. Eden teaches that it is beneficial to page data from the backing store to the cache or front store upon demand, including paging the data that surrounds the requested data. While this arrangement is highly successful for general application of peripheral memories, when a series of backing stores share a common cache and the host is employing multi-tasking, the placement of large sets of serial data in the cache becomes troublesome. One solution is to try to provide a larger cache. This solution unnecessarily adds to the cost of the peripheral system and hence is undesirable. Accordingly, some better solution is needed.

U.S. Patent No. 3,588,839, issued June 28, 1971, to Belady, shows promoting a next word of data whenever a given word is requested. This arrangement works fine for a cache on a main memory. However, where large sets of data are being transferred, only promoting one additional set of data does not necessarily provide maximal utilization of the peripheral memory system by the host. This lack of maximal use is aggravated by the physical characteristics of the backing store. For example, in DASD backing stores there are several significant delay boundaries caused by the mechanical characteristics of the disk storage apparatus. For example, when selecting one disk storage apparatus or another disk storage apparatus, substantial delays can be incurred.
Additionally, to control costs, most disk storage apparatus has but one or two transducers per recording surface. Access to all of the data areas on the disk storage apparatus is by radially moving the transducers for accessing various ones of the concentric record storage tracks. Such head movements are called cylinder seeks and require substantial delays when measured in terms of electronic speeds. Accordingly, the Belady solution, while eminently satisfactory for many applications, does not solve the problem of handling large sequential data sets in a multi-tasked multi-device environment.

U.S. Patent No. 3,898,624, issued August 5, 1975 to Tobias, shows a cache connected intermediate a host and main memory. According to this patent, an operator control panel is used to select when to promote one line of data from the main memory to the cache. The promotion always includes but one line of data. While this system may be employed to advantage in certain situations, such as in a main memory to host relationship, it does not in any manner address the large sequential data set problem mentioned above.

It is also desired in any of these peripheral memory systems to minimize the amount of host intervention in the system to achieve desired goals. An example of such minimization of host intervention is shown in Bass et al., U.S. Patent 4,262,332, which uses high use characteristics and a define extent mechanism for minimizing host access to DASD peripheral systems. While this certainly provides for a minimization of host intervention in such devices, it does not indicate, with any of the prior art, how to handle large sequential data sets with a shared cache in a multi-tasking multi-device environment.


Summary of the Invention

It is a principal object of the present invention to provide methods and systems for reducing the number of delays in sequential data signal transfers, while minimizing cache front store or buffer size, when transferring such sequential data signals between a backing store through a cache to a host using chains of peripheral commands. The invention contemplates the peripheral storage system receiving an indication from the host that sequential data will be requested by the host in an upcoming set of peripheral operations characterized by a sequence of peripheral commands. Along with the sequential indication is an electrical indication of a number of blocks of said data signals which will be sequentially accessed through to an end address of the first ones of said indicated sequential blocks. The peripheral system stores the electrical indications of said number and address along with said sequential indication such that subsequently received peripheral commands can be efficiently executed in a series of peripheral operations. The operation of the cache is such that each time a cache miss occurs, the peripheral storage system examines the cache contents and assigns a plurality of randomly selected areas of the cache to receive the sequential data. Then a predetermined number of blocks of sequential data are rapidly transferred from the backing store, such as a disk storage apparatus, to the cache. A directory is updated indicating which of the random locations within the cache contain the sequential data. Each time a block of such sequential data is requested, the directory points to the data in the cache as if it were being received sequentially from the backing store.


In another mode of operation of the present invention, all data defined by the host as being sequential is promoted to the cache. For example, if eight blocks of data are indicated as being sequential, initially the eight blocks are promoted to cache. The host can read three blocks of such data. Assume that, because of replacement algorithms, the other five blocks are replaced. The next time the host requests the fourth block there is a miss (the fourth block was replaced); the peripheral system, when sequential data is indicated, automatically promotes the remainder of the sequential data, i.e. blocks 4 through 8, to the cache for use by the host.
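The promotion decision just described can be modeled as follows. This is a minimal illustrative sketch in Python, not the microcode of the disclosed system; the function name and data representation are assumptions made only for exposition.

    # Sketch of the sequential-promotion policy described above (hypothetical
    # Python model; the patent implements this in control-store microcode).

    def blocks_to_promote(requested, seq_mode, total_blocks, in_cache):
        """Return the list of block numbers to stage from DASD into cache.

        requested    - block number the host asked for (1-based)
        seq_mode     - True if the host set the sequential (SEQ) indicator
        total_blocks - block count supplied with the SEQ indication
        in_cache     - set of block numbers already resident in cache
        """
        if requested in in_cache:
            return []                      # hit: nothing to stage
        if not seq_mode:
            return [requested]             # non-sequential: promote only the missed block
        # Sequential mode: promote the requested block and all not-yet-read
        # successors (blocks 4 through 8 in the example above).
        return [b for b in range(requested, total_blocks + 1) if b not in in_cache]

    # Example from the text: 8 sequential blocks, blocks 1-3 read, 4-8 replaced.
    print(blocks_to_promote(4, True, 8, in_cache={1, 2, 3}))   # -> [4, 5, 6, 7, 8]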

Promotion of sequential data may incur a given error condition in a given block of data. The block in error is not promoted, but all preceding blocks are promoted to the cache; all other blocks could also be promoted to cache. The host then reads the promoted blocks of data from the cache, and any attempted access of the non-promoted block results in a miss. A retry may occur on the missed data, but only once. If the retry is unsuccessful, a permanent error is indicated to the host and the other sequential data can be read from the cache. Recovery of the block in error can be provided by overriding action from the host for directly accessing the disk storage apparatus. This recovery may not enable recovery from errors in DASD.

The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawing.


Description of the Drawing

Figure 1 is a logic diagram illustrating a peripheral system connected to a host which incorporates the present invention. Also shown are channel command words and internal command words used in connection with practicing the invention.

Figure 2 is a logic block diagram of a preferred implementation of the Figure 1 illustrated system employing a programmed processor for controlling the peripheral system.

Figure 3 illustrates various data structures used in connection with the operation of the Figures 1 and 2 illustrated peripheral memory system.

Figure 4 diagrammatically illustrates a mode of operation of the Figures 1 and 2 illustrated peripheral memory system employing channel command words and internal command words.

Figure 5 is a logic flow diagram showing execution of a set paging parameters command by the Figures 1 and 2 illustrated peripheral memory system.

Figure 6 is a logic flow diagram for a read command execution together with a cache hit logic flow for the Figures 1 and 2 illustrated memory system.

Figures 7 and 8 jointly show a logic flow diagram related to the promotion of sequential data from a backing store to a front or cache store, including preparatory portions and internal command word portions, in the Figures 1 and 2 illustrated memory system.

Figure 9 is a logic flow chart showing post data-promotion processing for large sequential data sets in connection with practicing the present invention in the Figures 1 and 2 illustrated memory system.

Detailed Description

Referring now more particularly to the drawing, like numerals indicate like parts and structural features in the various diagrams. A hierarchical peripheral storage system 10 is attached to a host 11 for receiving and supplying data signals for host and storage utilization.
In a typical application of storage system 10, host 11 consists of a central processing unit (CPU). In other variations, host 11 can be a virtual machine or a set of virtual machines running on a hardware CPU. Host 11 may also be a multi-processor, a uni-processor with attached processors, and the like. While the invention can be applied to a great variety of storage systems 10, the preferred and illustrated embodiment shows a paging peripheral storage system for handling paging and swapping data sets. Generally, such paging and swapping data sets relate to storage of program data sets for host 11. As such, storage system 10 is attached to a single host, while a general application peripheral storage system can be attached to a plurality of hosts. The invention can be applied to either type of peripheral storage system.

Communications between the paging storage system 10 and host 11 are via a plurality of input/output connections 12-15 which are constructed in accordance with the input/output peripheral channels of the IBM 370 series of computers available from International Business Machines Corporation, Armonk, New York. Such input/output connections, commonly referred to as channels and subchannels, are so well known that their description is not necessary. Storage system 10 has a lower or backing storage portion consisting of a plurality of direct access storage devices (DASD) 16-18, separately enumerated D0, D1, .... All accessing and storage of data by host 11 with respect to peripheral storage system 10 is by addressing the DASDs 16-18. This addressing is achieved by using the architecture of the input/output connections 12-15, which is summarized in a set of logic blocks 19. Logic blocks 19 represent a channel command word (CCW) as used in the channels for the IBM input/output connections.
Typically each channel command word 19 includes an address byte 20. Each address byte 20 includes a plurality of bits for designating the control unit (CU) which is to receive the command, and a second plurality of bits DEV which uniquely identify the device 16-18 to be accessed. In a paging and swapping peripheral storage system 10, each of the devices 16-18 is provided with a plurality of logical device addresses; device D0, for example, can be addressed by any one of four addresses. Such multiple addressing has been practiced in the IBM 2305 paging storage system to a limited degree. The logical addresses for each device are indicated in the bits AC of address byte 20.
Accordingly, AC has two bits for indicating which of the four logical addresses is being used by host 11 to address a device D0. In the presently constructed embodiment, the logical address 00 designates a direct access to devices 16-18. That is, host 11 operates with devices 16-18 as if peripheral storage system 10 were not a hierarchical system; all of the hierarchy is bypassed for direct access. For the AC bits being equal to 01, 10 or 11, the hierarchy, later described, is accessed for obtaining data from devices 16-18 or supplying data to those devices such that the apparent performance of those devices is enhanced on those three logical device addresses. The abbreviation AC is intended to indicate the access path (logical) to the device indicated by bits DEV.

A second byte of CCW 19 is command byte 21, which contains a code permutation signifying to peripheral memory system 10 what function is to be performed. A third byte 22 is a command modifier byte having a plurality of control fields which electrically indicate to peripheral memory system 10 various modes of operation for executing the command indicated in byte 21. Of interest to the present invention is the bit pattern SEQ which host 11 uses to indicate to peripheral memory system 10 that the data to be transferred in an upcoming set of transfers will be sequential data. When the SEQ portion of byte 22 indicates sequential data, then the additional command modifier byte 23 is included in the CCW 19 for indicating the number of blocks or segments of data which will be transferred from devices 16-18 to host 11, or in the reverse direction, as a sequential set of data. Such sequential sets of data in a paging environment are often referred to as swapping data sets. Additionally, byte 22 can indicate read and discard in section RD, which means that once host 11 obtains data from the hierarchy, that data in the hierarchy cache can be discarded; the data in the devices 16-18 is retained. Further controls are provided by the so-called "guest operating system" field GO. In a virtual computer environment for host 11, one of the operating systems can have cognizance of the paging peripheral memory system 10. Access to peripheral memory system 10 can be handed over to another operating system for accessing or storing data. Such other operating system is a guest of the first operating system and hence is not allowed to modify certain control aspects of the peripheral memory system. Other control fields are also used within byte 22 which are beyond the present description of the illustrated embodiment.
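The CCW fields described above (address byte 20, command byte 21, modifier byte 22 with SEQ and RD, and block-count byte 23) can be pictured with a short decoding sketch. The bit positions within each byte are not specified in the text, so the masks below are purely assumed for illustration.

    # Illustrative decode of the CCW fields described above. The exact bit
    # positions within each byte are assumptions made only for this sketch.

    def decode_ccw(b20, b21, b22, b23=None):
        ac  = (b20 >> 6) & 0b11        # assumed: two AC bits select the logical path
        dev = b20 & 0b0111             # assumed: low bits identify the device (D0, D1, ...)
        direct = (ac == 0b00)          # AC = 00 bypasses the cache hierarchy
        seq = bool(b22 & 0x80)         # assumed position of the SEQ modifier bit
        rd  = bool(b22 & 0x40)         # assumed position of the read-and-discard bit
        blocks = b23 if seq else None  # byte 23 (block count) accompanies SEQ
        return {"ac": ac, "dev": dev, "direct": direct,
                "command": b21, "seq": seq, "read_discard": rd, "blocks": blocks}

    print(decode_ccw(0b01000011, 0x06, 0x80, 8))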

The hierarchy includes a system storage 30 of the semiconductor random access type which has a portion 40 designated as a cache for devices 16-18. Caching principles are sufficiently well known that the purposes and intent of cache 40, with respect to devices 16-18, need not be detailed. A control 31 receives the peripheral commands from host 11 for accessing devices 16-18 through one of the logical device addresses AC, as well as providing access to cache 40 based upon the other three logical device addresses of AC. Data is transferred automatically by peripheral memory system 10 between cache 40 and devices 16-18. This transfer is achieved using the same principles of transfer as between host 11 and devices 16-18. That is, when host 11 accesses devices 16-18 in a direct mode (AC = 00), it does so via channel adaptors 32, individually denominated CAA, CAB, CAC and CAD, then over bus 70 through direct access control 56, data circuit 33, device adaptor 34 and device control attachment DCA 35. Received CCWs 19 are interpreted by control 31 for determining the direction of data flow between host 11 and devices 16-18, as well as other functions, as is well known for controlling this type of storage apparatus. The relationship of cache 40 to devices 16-18 is substantially identical to the relationship between host 11 and devices 16-18. That is, while host 11 provides control via a series of CCWs 19, the control 31 provides access between cache 40 and devices 16-18 by using a plurality of internal control words (ICWs) which are structured in a similar manner to the CCWs, as will become apparent. Certain efficiencies and transfer of operation can be provided by altering the ICWs 24 with respect to the CCWs 19. Instead of going through the channel adaptors 32, control 31 has a cache access control CAC 61 which operates system storage 30 and provides access to devices 16-18 through direct access control DAC 56 using the ICWs 24. Instead of channel adaptors 32, a linkage port LKP 25 provides for transfers between CAC 61 and DAC 56. LKP 25 is described later with respect to Figure 3.

Each ICW 24 includes a command byte 25 corresponding to command byte 21. It should be appreciated that the code permutations for identical commands are the same. Some additional commands are provided while some of the commands for byte 21 are dispensed with. A command modifier byte 27 includes a chain control bit "CHAIN" which replaces the chaining indication normally provided by host 11 to control 31 via channel adaptors 32. (The chaining indication by host 11 is the supplying of a SUPPRESS OUT tag signal when final status is due to be reported by peripheral memory system 10 to host 11; SUPPRESS OUT indicates chaining, i.e. a series of closely related peripheral commands, as is fully described and used in connection with the input/output connections 12-15.) Since CAC 61 does not use tag signals, command modifier byte 27 is used to replace that tag control signal. The bytes 28 of each ICW 24 point to the stored location of the address of the devices 16-18. No logical addresses are used in the ICWs. In fact, control 31 converts all of the logical addresses directed to the hierarchy into bits DEV. Address bytes 28 not only point to the stored location of DEV but also point to the cylinder address (C), the head or track address (H) and the record address (R).


The record address corresponds to a sector address used in addressing most disk storage apparatus. In a preferred embodiment, four records were provided on a single track (H address); hence the record address is 1, 2, 3 or 4, corresponding to an effective orientation of 0, 90, 180 and 270 degrees of the disk with respect to a reference rotational point. Design parameters may dictate actual rotational orientations that differ from these orthogonal orientations.

Cache 40 transfers data signals through channel adaptors 32 with host 11 via bus 41. In a similar manner, data signals are transferred between devices 16-18 through data circuits 33 to cache 40 via bus 42. When simultaneous transfers between cache 40 and host 11 or DASDs 16-18 are not desired, buses 41 and 42 are combined into a single bus time-shared by the data transfers.
Accessing cache 40, which can be a relatively large memory (several megabytes), requires CAC 61 to transfer the device address together with the cylinder and record addresses CHR over bus 64 to hash circuit 44. Hash circuit 44, which may be microcode implemented, converts the DASD address into a hash class indicator. Since the storage capacity of cache 40 is much less than that of devices 16-18, the address range of devices 16-18 is concentrated into classes, called hash classes, for ease of access. A scatter index table SIT 45 has one register for each of the classes defined by hash circuit 44. The contents of the registers in SIT 45 are address pointers to a directory DIR 43 which contains the DCHR addresses used to access devices 16-18. When data is stored in cache 40, the DASD 16-18 DCHR address together with the cache 40 address is stored in a so-called entry of DIR 43. Since a plurality of device 16-18 addresses corresponds to one hash class, a singly-linked hash class list is provided in the entries of DIR 43 such that scanning cache 40 using hashing only requires scanning the entries within a given hash class. Based upon the contents of directory 43, cache 40 is accessed using known techniques. If no related entries are found in directory 43, then a miss occurs, requiring CAC 61 either to allocate space in cache 40 for receiving data from host 11 or to transfer data from devices 16-18 using ICWs 24 and linkage port LKP 25.
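The lookup path just described, hash circuit 44 feeding SIT 45 which in turn points into the singly-linked hash-class chains of directory 43, can be modeled roughly as follows; the class count, hash function and entry layout are illustrative assumptions.

    # Rough model of the cache lookup path: the DCHR address is hashed into a
    # class, SIT gives the head of that class's singly-linked directory chain,
    # and the chain is scanned for a matching entry.

    class DirEntry:
        def __init__(self, dchr, slot):
            self.dchr = dchr        # device, cylinder, head, record address
            self.slot = slot        # cache 40 slot holding the block
            self.link = None        # next entry in the same hash class (None = end of class)

    def lookup(dchr, sit, directory, n_classes):
        hash_class = hash(dchr) % n_classes        # stands in for hash circuit 44
        index = sit[hash_class]                    # SIT 45 register for the class
        while index is not None:
            entry = directory[index]
            if entry.dchr == dchr:
                return entry.slot                  # hit: data is in cache 40
            index = entry.link
        return None                                # miss: stage from DASD via ICWs

    # Tiny example: one entry for device 0, cylinder 5, head 1, record 2.
    directory = {0: DirEntry((0, 5, 1, 2), slot=17)}
    sit = {h: None for h in range(8)}
    sit[hash((0, 5, 1, 2)) % 8] = 0
    print(lookup((0, 5, 1, 2), sit, directory, 8))   # -> 17
    print(lookup((0, 5, 1, 3), sit, directory, 8))   # -> None (miss)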

Control 31 includes the usual portions of control units that attach to hosts. For example, address and command evaluator ACE 50 communicates with channel adaptors 32 via buses 51, 52, 53 and 54 for receiving command signals from host 11 and supplying status signals to host 11. ACE 50 evaluates CCWs 19 and instructs the peripheral memory system 10 to perform the commanded function, as well as indicating the chaining conditions and receiving status signals from the other portions of the peripheral system for relaying to host 11. In a direct mode, i.e. AC = 00, ACE 50 supplies command signals over bus 55 to DAC 56 such that data signals can be transferred between data circuits 33 and the appropriate channel adaptor 32 using known DASD peripheral storage device techniques. In executing its functions, DAC 56 exercises control over data circuit 33 in the usual manner.

Of importance to the present description is the operation of the hierarchy such that sequential data sets can be placed in cache 40 using a minimal size cache with minimal allocation controls, while maintaining sequentiality in an efficient manner and maintaining a sufficient number of the data blocks in cache to satisfy the operating requirements of host 11. ACE 50, when receiving a logical device address in byte 20 indicating access to the hierarchy, supplies the received command signals over one of the three buses 60 to CAC 61. The three buses are logical buses indicating the respective cache 40 accesses. CAC 61 stores the received command and modifier data in a channel control block register 63, one register for each of the logical devices. Remember there are three logical device addresses for each of the devices. Therefore, if there are eight devices 16-18 then there will be 24 registers in LDCB 62.

The identification and operational status of each logical device is kept in a respective one of the logical device control block registers in LDCB 62. Access to the logical device, which is represented by allocation of registers in cache 40 to the address indicated in fields AC and DEV of byte 20, is via address bus 64 to hash circuit 44. In certain situations for sequential data, i.e. sequential addresses for devices 16-18 (CHR portion), successive registers in SIT 45 can be accessed. Accordingly, CAC 61 accesses SIT 45 via bus 65 to avoid the delay in hash circuit 44. This operation enhances the response of peripheral system 10 to host 11 when sequential data is being processed. When CAC 61 receives a miss indication from searching the hash class of DIR 43, a request for a data transfer from devices 16-18 to cache 40 is supplied over bus 66 to DAC 56 via LKP 25. The bus 66 signal alerts DAC 56 to the request and indicates that the ICWs are addressable via LKP 25. In the preferred microcode embodiment, LKP 25 is a microcode linkage port, as will become apparent. DAC 56 responds to the ICWs 24 in the same manner that it responds to the CCWs 19. Upon completion of the data transfer, as requested through LKP 25, DAC 56 supplies status signals over bus 67 to CAC 61. At that time, cache 40 has data available to host 11. Further communications between CAC 61 and DAC 56 are via bus 68, all such communications including storing message data in LKP 25. Because devices 16-18 are accessed through a plurality of logical device addresses, a set of queuing registers 69 queues device-related operations requested by CAC 61. In this manner, DAC 56 need not be concerned with the queuing of requests through the logical devices but can operate in a direct-access DASD mode for either host 11 or CAC 61. In this manner, DAC 56 can be used not only in connection with the hierarchy, but also in those peripheral storage systems not employing a hierarchy.

CAC 61 also includes additional controls; for example, register ADEB 76 contains the one entry of directory 43 with which CAC 61 is currently operating. That is, the address of a device 16-18 resulted in a hit of cache 40, or a portion of cache 40 was allocated to data to be supplied by host 11; by placing the entry in register ADEB 76, operation of CAC 61 is enhanced. That is, directory 43 is a part of system storage 30; by placing the active entry in ADEB 76, system storage 30 is free to transfer data over buses 41 and 42 independent of control 31. Device buffer (DEV BUF) registers 77 contain control information relating to a device 16-18 and are used by CAC 61 in setting up accesses through DAC 56. Such registers are found in a writable control store in the microcoded implementation of the invention. Buffer 77 is merely an allocated portion of control store with no designated data structure. BST 78 is a buffer sequence table, described later with respect to Figure 3 and used in connection with practicing the present invention within the illustrated peripheral system 10. It includes pointers to directory 43 for each of the data blocks to be transferred in a sequence of data blocks over bus 42, as well as a scanning control mechanism for determining which directory index is to be used for accessing cache 40 during the sequential transfer. In this manner, a sequential transfer can dispense with addressing setups such that a burst of blocks from a device 16-18 can be made without interruption, as will become apparent.

Figure 2 is a block diagram of a preferred embodiment of the Figure 1 illustrated system which employs a programmed microprocessor 31P corresponding to control 31. Bus 70 extends from channel adaptors 32 to data circuits 33 and operates in an identical manner as shown for Figure 1. Buses 41 and 42 extend respectively from channel adaptors 32 and data circuits 33 to system storage 30. Buses 41 and 42 may be combined into one bus with data transfers time-sharing the single bus.
Processor 31P, in controlling the transfer between data circuits 33 and system storage 30, provides control signals over bus 71 to circuits 33 and address and sequencing control signals over bus 72 to system storage 30. A plurality of system storage address registers SSAR 79 provide addresses to system storage 30. For example, 8 or 16 SSARs 79 may be provided. Therefore, when processor 31P accesses system storage 30, not only does it give the address of system storage 30 to an SSAR 79, but it also indicates which of the SSARs is to be used in accessing the storage. Multiplexed addressing registers to a memory are known and therefore not further described.

In practicing the present invention, for each of the bursts of sequential data blocks, processor 31P primes system storage 30 by loading the addresses of cache 40 (a portion of system storage 30) into the SSARs 79 such that an address need not be loaded into an SSAR 79 intermediate the successive sequential blocks. Therefore, during the sequential transfer, processor 31P merely refers to an SSAR for initiating the transfer of data signals between cache 40 and a device 16-18. It should be noted that cache 40 has a given address space within system storage 30; in a similar manner, directory 43 has a different range of addresses. The SSARs 79 are separate electronic registers outside the memory array of system storage 30. Processor 31P communicates with channel adaptors 32 over a single bus denominated as 51-54.
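A simple model of this SSAR priming, with one register loaded per block so that no address setup occurs between successive blocks of a burst, might look like the following sketch; the register count of eight is one of the two values mentioned in the text, and the function names are invented for illustration.

    # Sketch of SSAR priming: before a burst, one cache-address register is
    # loaded per block so no address is reloaded between successive blocks.

    N_SSARS = 8   # the text mentions 8 or 16 SSARs; 8 is assumed here

    def prime_ssars(cache_addresses):
        """Load one SSAR per block of the upcoming burst (at most N_SSARS)."""
        ssars = [None] * N_SSARS
        for i, addr in enumerate(cache_addresses[:N_SSARS]):
            ssars[i] = addr
        return ssars

    def run_burst(ssars, blocks):
        # During the burst, each block transfer simply selects the next SSAR.
        return [(ssars[i], data) for i, data in enumerate(blocks)]

    ssars = prime_ssars([0x1000, 0x1800, 0x2000])
    print(run_burst(ssars, ["blk1", "blk2", "blk3"]))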

Operation of processor 31P is in accordance with microcode programs stored in a control store 73 which is preferably writable, although a portion can be writable while another portion containing certain programs can be read-only. Bus 74 couples the processor 31P to control store 73. Within control store 73 are programs ACE 50P which implement the function of address and command evaluator 50, DAC 56P which implements the function of direct access control 56, CAC program 61P which implements the functions of cache access control 61, and OP 75 which are other programs necessary for operation of the storage system 10 but which are not necessary to an understanding of the present invention. The registers used by processor 31P to control the system 10 via the programs 50P, 56P and 61P include CCB 63, LDCB 62, queue registers 69, ADEB 76, SIT 45, buffer 77, LKP 25 and BST 78. For an extremely large cache 40, SIT 45 can be stored in system storage 30. To enhance performance, a set of registers for containing a page of SIT 45 can be reserved in control store 73.

Operation of the Figure 2 illustrated preferred embodiment is best understood by reference to Figures 3 through 9, which illustrate the data structures in detail as well as logic flow diagrams for the microcode portions necessary for an understanding of the operation of the present invention. Figure 3 illustrates the data structures used by processor 31P to operate peripheral system 10 in accordance with the invention. LDCB 62 is a series of registers containing data signals in control store 73, consisting of four sections. A first section 80 is a so-called foundation data structure which defines and supports the functions of peripheral system 10 in a general operational sense. Pparms 81 is that portion of LDCB 62 relating to the parameters defining a paging and swapping function established through the later-described set paging parameters command. Cparms 82 contains the command parameters, such as those of the set sector, seek and search ID commands issued by host 11. These commands are those used in connection with known disk storage apparatus peripheral storage systems. Rparms 83 contains the parameters for supporting read activity, i.e. transferring data signals from devices 16-18 to cache 40.

The foundation portion 80 includes a bit ODE 90 which signifies whether or not a device end (DE) is owed by peripheral storage system 10 to host 11. CNL mask 91 contains a bit pattern indicating which channel adaptor 32 received the current command, i.e. which channel the logical device has an affinity to. LDADDR 92 contains a code permutation indicating the logical address received with the command, i.e. the bit patterns of AC and DEV of byte 20 in Figure 1. CMD 93 contains the code permutation from byte 21 of Figure 1. SEQ 94 contains the contents of the SEQ section of byte 22 of Figure 1. CCR 95 indicates whether a channel command retry has been sent to host 11 by system 10. In this regard, when a cache miss is indicated in section 96, a channel command retry was sent to host 11. Therefore LDCB 62 signifies when a miss has occurred for cache 40 and whether or not system 10 has supplied the appropriate CCR signal. Channel command retries merely signify to host 11 that a delay in executing the peripheral command is required. System 10, upon reaching a state in which the command can be executed, will send a device end (DE) signal to the host. The host then sends the peripheral command for the second time such that the command can then be executed by system 10.

Pparms 81 includes a sequential bit 100 corresponding to the sequential bit SEQ in byte 22, as well as the RD indicator 101 from the RD section of byte 22. B COUNT 102 contains the number of blocks from byte 23. As each block of the sequential data is transferred to host 11, B COUNT 102 is decremented by one. Therefore, it indicates the number of blocks yet to be transmitted to host 11 through cache 40. BASE CYL 103 contains the cylinder address C from which the sequential data will be transmitted from devices 16-18, i.e. in a multicylinder request BASE CYL 103 contains the value C of a virtual machine (VM) minidisk.

Cparms 82 contains the DASD seek address in SEEK ADDR 104, the last or current search ID argument in SID 105, and the last or current set sector value in SECT 106.


Rparms 83 includes REQD 110 indicating that a data transfer from a device 16-18 to cache 40 is required. RIP 111 indicates that a read is in progress from a device 16-18 to cache 40. RA 112 indicates that a read has been completed from a device 16-18 and that certain post-processing functions are being performed. DADDR 113 contains the bit pattern of DEV from byte 20 (Fig. 1) indicating the actual device 16-18 being addressed. DIR INDEX 114 contains a directory 43 index value indicating which directory entry register contains the entry corresponding to the logical device identified in the particular LDCB 62 register. SSAR 115 identifies which SSAR 79 will be used in accessing cache 40 in a data transfer between a device 16-18 and cache 40. SAVE 119 indicates an area of the LDCB 62 registers which processor 31P uses to save control data signals during various operations, including interruption operations.
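The LDCB layout just described can be summarized in a compact model; the field names mirror the reference numerals of Figure 3, while the Python dataclass form is only an expository device and not part of the disclosure.

    # Compact model of the LDCB, one instance per logical device (three
    # logical addresses per physical device); a sketch only.

    from dataclasses import dataclass, field

    @dataclass
    class Foundation:                 # section 80
        ode: bool = False             # ODE 90: device end owed to host
        cnl_mask: int = 0             # CNL 91: channel adaptor affinity
        ldaddr: int = 0               # LDADDR 92: logical address (AC + DEV)
        cmd: int = 0                  # CMD 93: command code from byte 21
        seq: bool = False             # SEQ 94: sequential indication
        ccr: bool = False             # CCR 95: channel command retry sent
        miss: bool = False            # section 96: cache miss outstanding

    @dataclass
    class Pparms:                     # section 81
        seq: bool = False             # bit 100
        rd: bool = False              # RD 101: read and discard
        b_count: int = 0              # B COUNT 102: blocks still to transfer
        base_cyl: int = 0             # BASE CYL 103

    @dataclass
    class Cparms:                     # section 82
        seek_addr: int = 0            # SEEK ADDR 104
        sid: int = 0                  # SID 105: search ID argument
        sect: int = 0                 # SECT 106: set sector value

    @dataclass
    class Rparms:                     # section 83
        reqd: bool = False            # REQD 110: stage from DASD required
        rip: bool = False             # RIP 111: read in progress
        ra: bool = False              # RA 112: read complete, post-processing
        daddr: int = 0                # DADDR 113: physical device bits
        dir_index: int = 0            # DIR INDEX 114
        ssar: int = 0                 # SSAR 115: which SSAR to use

    @dataclass
    class LDCB:
        foundation: Foundation = field(default_factory=Foundation)
        pparms: Pparms = field(default_factory=Pparms)
        cparms: Cparms = field(default_factory=Cparms)
        rparms: Rparms = field(default_factory=Rparms)

    ldcb = LDCB()
    ldcb.pparms.seq, ldcb.pparms.b_count = True, 8
    print(ldcb.pparms)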

ADEB 76 is structured in the same way that each entry of directory 43 is structured; accordingly, a description of ADEB 76 amounts to a description of directory 43. In each entry of directory 43, as well as in ADEB 76, INDEX 107 is the logical address of the directory entry. This field contains self-identifying data for each entry. Section 108 contains the address of the device 16-18 corresponding to the data stored in cache or allocated for storage. CCP is the physical cylinder address, i.e. the actual physical address of the cylinder for a device 16-18, H is the head address, R is the record address, P is the device address bit pattern corresponding to the DEV section of byte 20, and sector is the actual sector value, i.e. the rotational position of the disk from which reading will begin. The R value for tracks having four records can vary from one to four, while the sector value is the actual sector address. In addressing the DASD, the R value is translated into a rotational position indicator at the byte level as in usual DASD addressing techniques. The R value in some host operating systems can range from 1-120 or other numbers; in such cases the larger R values are reduced to a value modulo the number of records N in a track. Then the R value, modulo N, is converted to a rotational address of the disk. Such a sector value is suitable for initiating access to a record with a minimal latency delay. CCL is the logical cylinder address, such as provided for logical devices which are defined on physical devices. Link 109 contains the data signal code permutation of the singly-linked list for linking all entries of a hash class together. The last entry of a given hash class has a particular code pattern (zeroes) indicating end of chain or end of class. M bit 269 indicates whether or not the data in cache 40 has been modified since it was received from a device 16-18. Other code permutations can be added to each directory 43 entry which are not pertinent to an understanding of the present invention. For example, an MRU-LRU list may be included in each entry.
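The R-value handling described above, reduction modulo the number of records per track followed by conversion to a nominal rotational position, can be sketched as follows; the even 90 degree spacing is the nominal case from the text and real orientations may differ.

    # Sketch of record-number handling: reduce R modulo the records per track,
    # then map to a nominal rotational position (1 -> 0, 2 -> 90, 3 -> 180, 4 -> 270).

    RECORDS_PER_TRACK = 4   # N in the text

    def record_to_rotation(r):
        """Map a 1-based host record number R to a nominal rotational angle."""
        reduced = ((r - 1) % RECORDS_PER_TRACK) + 1          # R modulo N, kept 1-based
        return (reduced - 1) * (360 // RECORDS_PER_TRACK)    # nominal orientation in degrees

    for r in (1, 2, 3, 4, 5, 120):
        print(r, "->", record_to_rotation(r), "degrees")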

LKP 25 is an area in control store 73, accessible by programs ACE 50P, DAC 56P and CAC 61P, which makes up a linkage port, or message area, for controlling the interaction of the execution of these microcode units. In one embodiment, ACE 50P and DAC 56P were treated as one code segment such that LKP 25 was accessed by those two microcode sections as a single unit. In any event, the structure of the port includes a code point CP 110 which identifies the portion of the code which lodged the control data in the port. That is, when CAC 61P lodges an entry in LKP 25, DAC 56P will fetch the control data and execute the function. Then, when DAC 56P enters new data in LKP 25 responding to the request by CAC 61P, CP 110 indicates to CAC 61P which point in code execution the DAC 56P response relates to, so that CAC 61P can continue processing based upon the DAC 56P response. Priority section 111 contains code permutations indicating whether the request lodged in LKP 25 is high priority, low priority or a continued processing indication. V bit 112 indicates whether or not the LKP 25 entry is valid, i.e. whether it is a recent entry requiring action. DADDR section 113 contains the DEV code permutations from byte 20 for identifying which device 16-18 is associated with the current LKP 25 control data signals. PARMS 114 contains various parameters associated with the message, i.e.
what function is to be performed, status and the like.

BST 78 has a set of registers for each of the devices 16-18. A first register includes section DELEP 120, which contains an index value of 1 to 8 pointing to the directory indices 122-123. These indices identify the directory entries to be deleted. EK 121 contains a count of the number of valid entries in the table. It is also used as an address; for example, the first directory pointer index is always stored at 122 while the 8th one is always stored at 123. For a value of 3 in EK 121, the third directory index is accessed. A directory index, remember, is a logical address of a directory 43 entry, and hence provides rapid access into directory 43.

Figure 4 illustrates a sequence of CCWs and ICWs in a read or write data transfer. A read transfer transfers signals from a device 16-18 to host 11, while a write transfer is a data transfer in the reverse direction. A chain of CCWs 130 begins with a set paging parameters (SPP) CCW 132. Figure 5 illustrates the execution of such a command by storage system 10. Fundamentally, SPP 132 sets whether or not sequential data is to be transferred from the peripheral storage system 10 to host 11, as well as other parameters identified in byte 22 of CCW 19 (Fig. 1). Once SPP has indicated parameters of operation to system 10, a seek CCW 133 results in a seek command being transferred to the peripheral storage system; in one embodiment the seek parameters were embedded in the SPP command. Using normal DASD architecture, seek is followed by set sector CCW 134, which in turn is followed by a search ID equal (SIDE) CCW 135. Now the storage system is ready to read data from an addressed device 16-18 by read CCW 136. Upon receipt of the read command, peripheral storage system 10 provides the action indicated in column 131. First of all, the seek, set sector and search ID commands are stacked as at 140. At 137 a directory 43 search, as explained with respect to Figure 1, is conducted. For a hit, i.e. the requested data is in cache 40, the data is immediately transferred, as indicated by arrow 138, from cache 40 to host 11 via the channel adaptor 32 which received the command. On the other hand, if directory 43 indicates the data is not in the cache, then a miss has occurred, as indicated at arrow 141. A channel command retry (CCR) is supplied by system 10 as indicated by arrow 142. This tells host 11 that when a DEVICE END signal is received from system 10, the read CCW 136 must be reexecuted by the channel by sending the same read command to system 10. While this is occurring, system 10 constructs a chain of ICWs 143-148 beginning with a seek ICW 143 which is derived from the stacked seek commands received from host 11. For a multitrack operation, the ICWs are derived from search ID parameters. The seek ICW 143 is followed by a set sector ICW 144 which has the sector calculated from the record number. At 145, the local input results in a set cache ICW 145. This ICW causes DAC 56P to insert into the appropriate SSAR 79 the address of system storage 30 at which the data to be read will be stored. If a plurality of blocks of data are to be transferred, then a plurality of set cache ICWs occur, as indicated by numeral 146.
Then a search ID equal ICW 147, corresponding to the SIDE CCW 135, occurs. The search ID equal ICW 147 corresponds to the first set cache ICW 145. This means a plurality of blocks of data are read in sequence using but one SIDE ICW 147. Then a number of read ICWs 148, equal to the number of data blocks to be transferred, are given to DAC 56P for reading a predetermined number of blocks of data indicated by the number of set cache ICWs. Upon completion of the read, which transfers data from the addressed device 16-18 to cache 40 at the addresses set in the SSARs 79, system 10 supplies a device end (DE), as indicated by arrow 150, to host 11. Host 11 immediately responds by reissuing a peripheral command at 151 corresponding to the CCW 136. Of course, system 10 searches directory 43 at 152, resulting in a hit because of the just-executed ICW chain. Data is then transferred from cache 40 to host 11 as indicated by arrow 153. In the event that the data was not transferred for the requested data block at 136, another miss will occur and an error status will be reported to host 11. This error status will reflect the fact that system 10 was unable to transfer data from the addressed device 16-18 at the cylinder and head address. Host 11 then can use the direct access mode (AC = 00) for attempting recovery using standard disk storage apparatus recovery techniques beyond the scope of the present description. Ellipsis 154 indicates that the above-described operation is highly repetitive, as well as indicating that various CCW chains for various devices 16-18 can be interleaved. The ICW chains do not necessarily follow the sequence of the chains of CCWs. Depending upon the circumstances, an ICW chain may be constructed and used by a later occurring CCW chain. Such possibility indicates the asynchronous aspect of the ICW chains with respect to the CCW chains. Usually, the first CCW chain will result in a first occurring ICW chain. At any instant, a separate ICW chain can be active for each DASD 16-18.
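The ICW chain built for a sequential read miss, one seek ICW, one set sector ICW, one set cache ICW per block, a single search-ID-equal ICW and one read ICW per block, can be sketched as below; the tuple encoding of the ICWs is an assumption made for clarity.

    # Illustrative construction of an ICW chain for a sequential read miss,
    # per the Figure 4 narrative above.

    def sector_for(record, records_per_track=4):
        return (record - 1) * (360 // records_per_track)    # nominal orientation

    def build_icw_chain(cylinder, head, record, cache_addresses):
        chain = [("SEEK", cylinder, head),
                 ("SET_SECTOR", sector_for(record))]
        for ssar_id, addr in enumerate(cache_addresses):
            chain.append(("SET_CACHE", ssar_id, addr))      # one SSAR primed per block
        chain.append(("SEARCH_ID_EQUAL", cylinder, head, record))
        for _ in cache_addresses:
            chain.append(("READ",))                         # one read ICW per block
        return chain

    for icw in build_icw_chain(cylinder=12, head=3, record=2,
                               cache_addresses=[0x1000, 0x1800, 0x2000]):
        print(icw)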

Figure 5 illustrates the execution by system 10 of the SPP command. ACE 50P receives and decodes the SPP command. As a result of that decoding, processor 31P activates CAC 61P. Upon activation, processor 31P via CAC 61P performs certain nonpertinent logic functions at 155. Then at 156, LDCB 62 (Fig. 3) is accessed for setting SID 105 in Cparms 82 to unity, setting ODE 90 in foundation section 80 to 0, setting CCR bit 95 to 0, and setting SEQ 100 to the value received in byte 22 (indicated by X); RD section 101 is set to the value in the RD section of byte 22 and B COUNT 102 of Pparms 81 is set to the value indicated in byte 23. Following the setting of LDCB 62, processor 31P performs some nonpertinent logic functions at 155. Then processor 31P at 157 examines LDCB 62 section SEQ 100 to see if sequential data is involved. If not, processor 31P returns to ACE 50P via LKP 25. If sequential data is indicated, processor 31P at 158 transfers the number of blocks indicated in B COUNT 102 to an internal register IR (not shown) of processor 31P. In the illustrated embodiment a maximum of eight blocks were to be transferred in a given burst of data blocks. In that embodiment, when B COUNT was greater than or equal to eight, eight blocks were transferred and B COUNT was reduced by eight. For B COUNT less than eight, a number of blocks equal to B COUNT was transferred. Then at 159 the value of the block count is examined. If it is nonzero, then an appropriate SPP command execution has occurred. If the block count is 0, then the sequential indicator or the block count must be in error.
Accordingly, processor 31P leaves step 159 to go to an error status reporting procedure beyond the scope of the present description.

The seek, set sector, and SIDE CCWs 133-135 are not described since they are well known. A change in system 10 operation from the prior art occurs upon the receipt of a read command based on read CCW 136 for sequential data. Figure 6 illustrates the machine operations for transferring sequential data. At 160 the received command is processed by ACE 50P. Then through LKP 25, CAC 61P is activated by processor 31P. The command is again decoded at 161. Since it is a read command, directory 43 is searched at 162 as described with respect to Figure 1. At 163 processor 31P determines whether or not the directory search resulted in a hit or a miss. For a miss, the received command is enqueued at 164 by placing the command and its control information in queue registers 69, and a CCR is sent to host 11. Since queue registers 69 can use any format, they are not further described except to say that the queue is a first-in first-out queue for each of the addressable devices, i.e. for 8 devices 16-18 there are 8 queues. The importance of having a FIFO queue is to ensure that the sequence of responses to the host for a given device corresponds to the sequence of commands sent by the host. From queue 69, CAC 61P will initiate a read from the addressed device 16-18, as explained with respect to Figures 7-9.

A hit condition in the directory search at 163 results in cache 40 automatically transferring data to host 11 via the appropriate channel adaptor 32 at 170. Such automatic cache-to-host transfers are well known and not described for that reason. During the automatic data transfer an error can occur; accordingly, upon an error detection, processor 31P goes to an error reporting and analyzing routine at 171. Generally the data transfers will be error free. At 172, following the successful completion of a data transfer, processor 31P accesses LDCB 62 to examine RD section 101. If discard after read is indicated, processor 31P marks the just-read block of cached data for destage if modified, and free if not modified. Destaging is performed by processor 31P when no commands are being executed. Destaging the data prior to a replacement algorithm being invoked reduces the control required for efficiently managing cache 40, i.e. free spaces are made available before they are needed. Then, through logic path 174 from either step 172 or 173, processor 31P at 175 determines from directory 43, in a field (not shown), whether or not the data is pinned to cache 40. Pinning data to cache 40 means that it cannot be transferred to devices 16-18 until a pinning flag (not shown) of directory 43 has been erased. If the data is not pinned to cache, then the block that was just read is made the most recently used (MRU) block at 176 in the LRU list (not shown) for the replacement algorithm. This is achieved by accessing directory 43 and updating the least recently used list of known design in that directory. At 177, nonpertinent logic steps are performed by processor 31P. Then at 180, LDCB 62 is again accessed for examination of SEQ 100. If sequential data has been indicated, then processor 31P at 182 examines LDCB B COUNT 102 to see if the block count is equal to 0, i.e. whether the just-transferred block is the last block in the sequence of data. If it is not the last block transferred, then at 183 the block count (BK) is decremented by 1. Following steps 180, 182 or 183, logic path 181 leads processor 31P back to ACE 50P for performing final status reporting to host 11 in the usual manner.
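The post-transfer bookkeeping of Figure 6, the discard-after-read disposition, the pinned check, MRU promotion and the sequential block-count decrement, can be summarized in the following sketch; the dictionary representation and helper name are illustrative only.

    # Sketch of the post-read bookkeeping described above (Figure 6).

    def post_read_update(ldcb, entry, lru):
        """ldcb: per-logical-device control block; entry: directory entry for
        the block just read; lru: list ordered most- to least-recently used."""
        if ldcb["read_discard"]:
            # Discard after read: queue for destage if modified, else free the slot.
            entry["state"] = "destage" if entry["modified"] else "free"
        if not entry["pinned"]:
            # Make the just-read block most recently used for the replacement algorithm.
            if entry["slot"] in lru:
                lru.remove(entry["slot"])
            lru.insert(0, entry["slot"])
        if ldcb["seq"] and ldcb["b_count"] > 0:
            ldcb["b_count"] -= 1          # one fewer sequential block still owed
        return ldcb, entry, lru

    ldcb = {"read_discard": False, "seq": True, "b_count": 5}
    entry = {"slot": 17, "modified": False, "pinned": False, "state": "valid"}
    print(post_read_update(ldcb, entry, lru=[3, 17, 9]))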

Figures 7 and 8 illustrate scanning the read queues in queue 69 and generating an ICW chain of internal system commands. After the requested read has been enqueued, processor 31P causes system 10 to perform diverse functions, such as responding to commands received over various ones of the channel adaptors, including additional commands received from the channel adaptor which had transferred the read command (which was CCR'd). When there is a lull in receipt of peripheral commands from the host, seek and set sector device commands are sent to devices 16-18. When there is a lull in control activity, which may occur while cache 40 is transferring data to host 11, receiving data from host 11, or transferring or receiving data from an addressed device 16-18, processor 31P, through its dispatcher microcode, which is a portion of OP 75 (Fig. 2), scans its work tables, including queue registers 69. If the queues are empty, i.e. no reading is to occur, processor 31P follows logic path 192 returning to dispatcher 190. If a read has been enqueued, as detected at 191 by scanning the queue flags, the queue entry is transferred from a queue register 69 at 193 to an internal register (not shown) of processor 31P. If an error occurs in this transfer, an error reporting and recovery technique is instituted at 194. Upon successfully reading the queue entry from queue register 69, LDCB 62 is accessed at 195 to set ODE section 90 to unity to indicate that a device end is owed upon completion of a successful read (such as indicated in Figure 4 by arrow 150). At 196 some nonpertinent functions are performed. Then at 200, in the device buffer area 77 corresponding to the addressed device, a bit is set to indicate that logical chaining will occur, i.e. more than one ICW will be used in the upcoming access to the addressed device 16-18. At 201, LDCB 62 is again accessed to examine the value of SEQ 100. For sequential data being indicated, processor 31P proceeds to 202 to set up the block count for the upcoming ICW chain equal to the paging parameters (PA) in the illustrated embodiment.

The maximum number of blocks which can be transferred through a given ICW chain is equal to the number of SSARs 79. For example, for 8 SSARs the number of blocks transferred will be a maximum of 8. Further, delay boundaries are a consideration; for example, if the 8 blocks to be transferred require accessing 2 cylinders, then only those blocks in the first cylinder will be transferred. For example, if the 8 blocks have 4 blocks in a first cylinder and 4 blocks in a second cylinder, then the number of blocks would be set to 4. This action minimizes the time required to transfer a series of blocks and enables all transfers to proceed to completion at electronic speeds. In the event of a miss on the first block of a given cylinder, up to 8 blocks could be automatically transferred. Also, the maximum number of blocks is never greater than the remaining value in B COUNT 102. The ICW chains are constructed such that cylinder boundaries are never crossed by any given ICW chain. These calculations follow usual computer programming techniques and are not described further for that reason. If sequential data is not indicated at 201, then the number of blocks to be transferred is set to 1 at 203. These numbers are supplied to the device buffer 77 along with the chaining flag, device addresses and other device control data. At 204, the SSAR 79 identification is set to 0. This means that processor 31P will access the SSAR having identification 0.
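The burst sizing rule described above, never more blocks than SSARs, never more than remain in B COUNT and never across a cylinder boundary, reduces to a small calculation such as the following; the alignment of blocks to cylinders is simplified here and the function name is assumed.

    # Sketch of the burst sizing rule: bounded by the SSAR count, the
    # remaining B COUNT, and the cylinder boundary (simplified alignment).

    N_SSARS = 8

    def burst_size(b_count, start_block, blocks_per_cylinder):
        """Number of blocks for the next ICW chain, starting at start_block
        (0-based position within the sequential set)."""
        within_cylinder = blocks_per_cylinder - (start_block % blocks_per_cylinder)
        return min(N_SSARS, b_count, within_cylinder)

    # Example from the text: 8 blocks requested, 4 in the first cylinder and
    # 4 in the next -> only the first 4 are transferred in this chain.
    print(burst_size(b_count=8, start_block=0, blocks_per_cylinder=4))   # -> 4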

At 205, the logical address LDADDR, including AC and DEV from CCW 19 of Figure 1, is converted to a physical device address. In the illustrated embodiment, this action is achieved by merely masking the AC portion from the logical address. Certain nonpertinent functions are performed at 206. Point 207 is reentry point B from a continuation of the logic flow diagram described with respect to Figure 8, i.e. all of the logic steps from 190 through 206 are preparatory steps, with the following described steps being repeatable as a loop for setting up a succession of block transfers.

The first step 210 in the loop allocates a slot or space in cache 40. Usual allocation procedures are followed, i.e. an addressable unit (slot) on a so-called free list (not shown) is identified as the one to receive the first block of signals from the addressed device 16-18. That slot is then removed from the free list and identified within an internal register (not shown) within processor 31P for identifying which directory 43 entry is to be used for identifying the slot in cache 40. Note that there is one entry register in directory 43 for each addressable slot in cache 40. Accordingly, the actual address in cache 40 of the data can be derived directly from which register of directory 43 contains the entry.

Upon the attempted allocation of the number of slots equal to the number of blocks set in steps 202 or 203, processor 31P at 211 determines whether or not any error occurred in the allocation process. If an error has occurred, the total number of blocks may not be successfully transferred from the addressed device 16-18 to cache 40. Accordingly, for an error condition, at 212 processor 31P examines LDCB 62 SEQ 100 to determine if the data transfer is a sequential transfer. If it is not a sequential transfer, processor 31P follows logic path 213, returning to ACE 50P to wait for a replacement algorithm control to make space available for one block. For a sequential transfer, processor 31P at 214 determines whether or not the error occurred on the first block to be transferred. If so, the entire operation will be aborted. Accordingly, at 215 LDCB 62 is accessed, setting a unit check (UC) flag (not shown) to unity, setting a disconnect owed status flag (DS) to unity, and resetting the other flags to 0. Then processor 31P returns via logic path 216 to ACE 50P.
If the allocation error is not for the first block, - then data transfers of the remaining blocks occur.
Processor 31P follows path 217 to 220 for truncating the number of blocks to be transferred in the unallo-cated area from the ICW list.
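The error handling of steps 211 through 220 can be read as a three-way decision. The enum names and arguments in the following sketch are assumed for illustration only and are not the patent's microcode.

enum alloc_outcome { WAIT_FOR_REPLACEMENT, ABORT_WITH_UNIT_CHECK, TRUNCATE_CHAIN };

enum alloc_outcome handle_alloc_error(int sequential,
                                      unsigned failed_block,   /* 0 = first block */
                                      unsigned *blocks_to_move)
{
    if (!sequential)
        return WAIT_FOR_REPLACEMENT;     /* path 213: wait for one slot           */
    if (failed_block == 0)
        return ABORT_WITH_UNIT_CHECK;    /* steps 214-215: set UC and DS, abort   */
    *blocks_to_move = failed_block;      /* step 220: keep the allocated blocks   */
    return TRUNCATE_CHAIN;
}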

Returning to step 211, if there were no allocation errors, then at 218 some nonpertinent functions are performed. These functions include analyzing microcode logic errors not related to allocation. If a slot was not allocated due to such microcode errors, then the truncate step 220 is also performed for reducing the number of blocks transferred from the addressed device 16-18 to cache 40.

Without an error or after truncation, processor 31P performs some nonpertinent logic steps at 221. At 222, LDCB 62 SEQ 100 is examined. If SEQ is 0, i.e. nonsequential data, then at 223 the index of the directory 43 entry corresponding to the slot in cache 40 to receive the data is entered into LDCB 62 section 114 of RPARMS 83. For sequential data or after the index is entered into LDCB 62, at 224 the cache address to be inserted later into an SSAR 79 is generated from the directory index just inserted into LDCB 62. This generation is merely adding an offset to each of the directory indices. Then at 225, LDCB 62 SEQ 100 indicating sequential mode causes processor 31P to examine B COUNT 102 to see if the count is greater than one. If the count is greater than 1, then at 232 processor 31P determines whether the first block in the sequence of blocks being transferred is being handled.
If not, at 233 a new cache address for the second block is provided. Then at 234 in the device buffer area 77, the SSAR 79 corresponding to the second or other blocks is set to the cache address, flags are set, a pointer to directory 43 is set and the SSAR 79 to receive the cache address is identified. Other functions to be performed may also be defined in the device buffer 77.
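Since the cache address of step 224 is described only as the directory index plus an offset, a sketch can be as simple as the following; CACHE_SLOT_OFFSET is an assumed value, as the patent does not give one.

#define CACHE_SLOT_OFFSET 0x100u           /* assumed offset for the example */

/* Step 224: derive the SSAR cache address from a directory 43 index. */
unsigned cache_address_from_index(unsigned dir_index)
{
    /* "merely adding an offset to each of the directory indices" */
    return dir_index + CACHE_SLOT_OFFSET;
}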

Returning to steps 225, 231 and 232, the logic path 226 leads to nonpertinent steps 227 followed by processor 31P accessing LDCB 62 to store the generated cache address in section 118 of RPARMS 83. Then following nonpertinent steps 229, processor 31P proceeds through connector 235 to the logic steps shown in Figure 8.

The connection between Figures 7 and 8 is through connector A denominated by numerals 235 and 240, respectively, in Figures 7 and 8. At 241 processor 31P updates the pointer to SSAR 97 by incrementing EK 121 of Figure 3. At 242 processor 31P determines whether or not all of the blocks to be transferred to cache 40 have received allocations in cache 40. If not, through connector B 243 processor 31P returns to the Figure 7 flow chart at B 207 to allocate cache 40 space for another block.
This loop is repeated until EK 121 contains a count equal to the number of blocks to be transferred (not more than eight).

After completing the loop, some nonpertinent logic steps are performed at 244. At 245, the read code command is set into the ICW representing a read data command for DASD 16. At 250, LDCB 62 is accessed to determine whether or not the sequential flag SEQ 100 in PPARMS 81 is set or reset. When set, processor 31P at 251 determines whether or not the received block count is greater than 1. If it is greater than 1, then a chaining indication is set in command modifier byte 27 of ICW 24 (Fig. 1); otherwise from steps 250 or 251 the end of chain indication EOC is indicated in byte 27 by resetting the chain indicator. At 254 the device buffer 77 in control store 73 receives the ICW, i.e. the code permutation flags and other storage operation (STOROP) indications. At 255, processor 31P again examines SEQ 100 of LDCB 62 for nonsequential, i.e. SEQ = 0. If only one block will be transferred, processor 31P follows logic path 256 to execute logic step 257 for transmitting the just constructed ICW to DAC 56P via LKP 25.

For a sequential data transfer, processor 31P leaves step 255 to execute logic step 260 for adjusting EK 121 to the next entry (set next). Then at 251, if the remaining block count is not greater than 1, the ICWs are transmitted to DAC 56 in step 257. For a number of blocks remaining greater than 1, loop 270 is executed for setting up the remaining ICWs for a chain of such ICWs.
At 271 the command is set for read count, key, data and multi-track commands. At 272 processor 31P determines whether or not the last block in a sequential group of blocks is to be processed. If not, the chaining flag in byte 27 of the ICW being built is set to unity. Otherwise at 274 the end of chaining condition is indicated by resetting the chaining flag. At 275 the ICW is transferred to the device buffer 77. At 276 the cache address is stored in the device buffer such that it can be transferred immediately to SSAR 97 for the burst transfer. At 277 processor 31P determines if the block is the last block; if not, the loop is indexed at 278, adjusting a count in an internal register (not shown) using usual control techniques. Otherwise, step 257 is performed. When the loop is indexed at 278 the steps 271 through 277 are again performed.
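Loop 270 therefore builds one further ICW per remaining block and chains all but the last. The structure, command code and flag values in the following sketch are assumptions made for illustration and do not reproduce the actual ICW encoding.

struct icw {
    unsigned char command;      /* e.g. read count, key, data, multi-track */
    unsigned char modifier;     /* command modifier byte 27                */
    unsigned      cache_addr;   /* destination slot in cache 40            */
};

#define CMD_READ_CKD_MT 0x9Eu   /* assumed command code                    */
#define MOD_CHAIN       0x80u   /* assumed chaining bit of byte 27         */

/* Steps 271-278: ICWs 1..n-1 of the chain (ICW 0 was built at 245-254). */
void build_remaining_icws(struct icw *chain, const unsigned *cache_addr, unsigned n)
{
    for (unsigned i = 1; i < n; i++) {
        chain[i].command    = CMD_READ_CKD_MT;               /* step 271       */
        chain[i].modifier   = (i + 1 < n) ? MOD_CHAIN : 0u;   /* steps 273-274  */
        chain[i].cache_addr = cache_addr[i];                  /* steps 275-276  */
    }
}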

DAC 56P upon receiving the ICW chain executes the chain in the same manner that it executes received commands through channel adaptors 32. Since this latter operation is well known, the execution of the ICW chains is not further described. It should be noted that in transferring signals from DASD 16 to cache 40, DAC 56P not only provides the addressing to DASD 16 but also transfers the cache address contents of device buffer 77 into SSAR 79 such that several blocks of data can be transferred in a single data stream, i.e. can be a multi-track transfer. Upon completion of that transfer, DAC 56P loads the resulting status, including error indications, into LKP 25. Processor 31P operation then switches from DAC 56P to CAC 61P.


Figure 9 illustrates the portion of CAC 61P called post processing, i.e. logic steps performed following the transfer of data signals from DASD 16 to cache 40. First the contents of LKP 25 are transferred to work registers at step 280. This includes the device address, the pointer to LDCB 62 and any flags that may be generated by DAC 56P. At 281, processor 31P accesses LDCB 62 for resetting RIP 111 to 0 for indicating that no read from a DASD is in progress. At 282 some nonpertinent logic steps are performed. At 283 processor 31P examines the DAC 56P return code (RC) for an error-free condition; for a successful transfer of all requested blocks of data to cache 40 a return code of 0 is provided. For such a successful operation, the contents of an internal work register of processor 31P are set to unity at 284. This initializes the count. At 285, LDCB 62 is accessed for examining SEQ 100. If the transfer is not sequential, i.e. only one data block is to be transferred, then the number of blocks being transferred is set to unity at 287. Otherwise at 286 the number of blocks allocated as stored in device buffer 77 is transferred to EK 121 to indicate the number of entries in BST 78.
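Steps 284 through 287 thus reduce to choosing how many blocks Figure 9 must post process. A minimal sketch follows; the argument names are assumed.

/* Steps 284-287: number of blocks to post process after an error-free transfer. */
unsigned blocks_to_post_process(int sequential, unsigned blocks_allocated)
{
    return sequential ? blocks_allocated   /* step 286: count kept in EK 121 */
                      : 1u;                /* step 287: single block         */
}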

For an error condition at step 283, i.e. RC does not equal 0, an error analysis set of logic steps is performed at 290. If a permanent error is indicated by such analysis, processor 31P follows path 291 to a permanent error recovery and reporting procedure beyond the scope of the present description. Otherwise at 292, processor 31P determines whether or not the command was a multi-track command, i.e. more than one block of data was to be automatically transferred. If not, an error condition would affect a single block; therefore a permanent error has to be handled, causing processor 31P
to follow logic path 291. If a plurality of blocks were transferred, then further action can occur, i.e. one block may be in error while all of the preceding blocks were transferred error free. DAC 56P has identified via LKP 25 which block caused the error. CAC 61P from the DAC 56P information can identify which ICW is associated with the error. Processor 31P then fetches the immediately preceding ICW at 293 from device buffer 77. At 294, if the command was for reading data from DASD 16 to cache 40, processor 31P at 295 adjusts the block count by subtracting the number of blocks in error from the slots allocated. At 296, for a command in error other than a read data command, the error occurred before any data transfer; processor 31P goes to a permanent error routine (not described). From steps 286, 287 or 295, some nonpertinent logic steps are performed at 300. Such logic steps pertain to internal addressing, not pertinent to an understanding of the present invention.
At 301, processor 31P accesses LDCB 62 SEQ 100 to determine whether or not a sequential transfer is indicated. For a sequential transfer logic path 302 is followed to step 304 to determine whether or not the block transferred was the first block. For the first block being transferred in a sequence of blocks or in a nonsequential mode, wherein only one block is transferred, processor 31P at 305 transfers the contents of LDCB 62 to a work register (not shown) for the ensuing logic steps. This includes transferring the logical cylinder indication CCL (see 108), record number R, the logical device address D and other control data not pertinent to an understanding of the invention. For blocks which are not the first block, processor 31P at 306 accesses BST 78 (Fig. 3) to obtain the directory index for that block as indicated by DELEP 120. For the first block or nonsequential data, directory 43 index 114 of LDCB provides the same information.
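The choice made at steps 304 through 306 can be sketched as follows; the struct layouts standing in for LDCB 62 and BST 78 are assumed for illustration only.

struct ldcb { unsigned dir_index; };                 /* assumed: section 114 of RPARMS */
struct bst  { unsigned delep; unsigned index[8]; };  /* assumed layout of BST 78       */

/* Steps 304-306: where the directory index for the current block comes from. */
unsigned directory_index_for_block(int first_or_nonsequential,
                                   const struct ldcb *ldcb, const struct bst *bst)
{
    if (first_or_nonsequential)
        return ldcb->dir_index;          /* step 305: taken from LDCB 62   */
    return bst->index[bst->delep];       /* step 306: BST 78 entry at DELEP */
}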

For the directory 43 search, processor 31P at 307 looks for the cache address corresponding to the device address. This search includes a hashing operation to determine whether or not the directory 43 has an entry corresponding to the block of data just transferred from DASD 16 to cache 40. Remember that several parallel accesses to the same DASD 16 are possible in an asynchronous manner; therefore it is important that one and only one replication of DASD 16 data be in cache 40.
This requirement provides data integrity, i.e. if duplicate copies were in cache 40, one copy could be updated while a second copy could be erroneous. Then the updated copy could be stored in DASD 16, that entry being erased; later access by a host to the system 10 could result in the erroneous data residing in cache being sent to the host via channel adaptor 32.

Following a search, error indications are checked at 308. For no errors, at 310 processor 31P determines whether or not a duplicate copy was found. For no duplicate at 311, the BST 78 entry is calculated, i.e. the DELEP 120 value, and BST 78 is accessed for pointing to the directory index (numerals 122-123 of Figure 3) such that the directory 43 entry corresponding to the DASD 16 address is transferred from directory 43 to ADEB 76 for convenient access by processor 31P, all action occurring at 312. At 313 processor 31P resets M 269 in ADEB 76. M indicates modified data in cache 40. Resetting M to 0 indicates that the copy in cache 40 is identical to the copy on DASD 16. At 314, BST 78 is again accessed for incrementing DELEP 120 and decrementing EK 121. At 315 a directory 43 entry is added corresponding to the data just transferred to cache 40; that is, when DAC 56P caused the block of data to be transferred to cache 40, directory 43 had not yet been updated, i.e. the data in cache 40 is not yet addressable. By creating a directory entry in the usual manner at 315, the just-transferred data from DASD 16-18 to cache 40 becomes addressable.

On the other hand, if a duplicate is found at 310, it is assumed that the data already in cache 40 is the correct copy, i.e. it may have been modified. Therefore, it is desired not to make the just-transferred data block addressable and to proceed to the next data block. At 316 BST 78 is accessed for decrementing EK and incrementing DELEP. The block just transferred is freed at 317, leaving only a single copy of the data in cache 40.
Freeing the block makes it nonaddressable.
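The single-copy rule of steps 310 through 317 can be sketched as below. This is an illustrative sketch only; directory_lookup, directory_insert, free_slot and clear_modified are hypothetical helpers standing in for the directory 43 hash search, the directory update, the free-list return and the M-bit reset, and are not part of the patent disclosure.

int  directory_lookup(unsigned dasd_block_addr);   /* hypothetical: >= 0 when an entry already exists */
void directory_insert(unsigned dasd_block_addr, unsigned slot);
void free_slot(unsigned slot);
void clear_modified(unsigned slot);                /* reset M: cache copy identical to DASD copy      */

/* Steps 310-317: keep exactly one addressable copy of a DASD block in cache 40. */
void post_process_one_block(unsigned dasd_block_addr, unsigned staged_slot)
{
    if (directory_lookup(dasd_block_addr) >= 0) {
        /* duplicate found at 310: the resident copy may already be modified,
           so the freshly staged copy is freed (steps 316-317) and never
           becomes addressable */
        free_slot(staged_slot);
    } else {
        /* no duplicate: create the directory entry (step 315) so the staged
           data becomes addressable, and reset M (step 313) */
        directory_insert(dasd_block_addr, staged_slot);
        clear_modified(staged_slot);
    }
}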

The final housekeeping logic functions occur beginning with step 320 determining whether or not all of the data blocks transferred from DASD 16 to cache 40 have been post processed. If not, at 321 the number of blocks to be post processed is decremented by 1. Processor 31P then follows logic path 302 to execute a loop including steps 304 through 315.
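That per-block processing is simply repeated until every staged block has been handled; a trivial sketch reusing the hypothetical routine from the previous sketch:

void post_process_one_block(unsigned dasd_block_addr, unsigned staged_slot);  /* sketched above */

/* Steps 320-321: post process every block staged by the ICW chain. */
void post_process_all(unsigned n_blocks, const unsigned *dasd_addr, const unsigned *slot)
{
    for (unsigned i = 0; i < n_blocks; i++)
        post_process_one_block(dasd_addr[i], slot[i]);
}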

At 323 BST 78 is accessed for resetting EK and DELEP to 0.
Step 323 is also entered from PE logic path 291 via logic path 325. At 324, queue registers 69 are accessed for removing the queue entry from the queue such that a duplicate read will not occur. You will recall that in Figure 5 the read was enqueued at 164. Then at 322 LDCB 62 is accessed for resetting control flags, such as CCR 95, miss 96, RA 112 and the like, and for preparing for a device end, such as by setting ODE 90. Status is then presented to the host via ACE 50P. In all of the above description it should be noted that the transfer of data signals from DASD 16 to cache 40 and in the reverse direction is on an asynchronous basis with respect to the operation of the channel adaptors 32. Under certain circumstances, a request from the host through channel adaptor 32 can take priority over the transfer; therefore some of the just-described operations may be interleaved with higher priority operations. Since multi-processing is well known, this detail is not further discussed.

While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (17)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A storage system having connection means to be connected to a host processor and having a high-speed data cache and a slow-speed backing store connected to said connection means, means connected to said connection means and to said cache and backing store for transferring data between said cache and backing store, said cache and backing store each having a plurality of addressable data storage registers;
the improvement comprising:
first means coupled to said connection means for receiving from said connection means and storing an intent signal identifying sequential data blocks and the numerical extent of such blocks;
read means in said transferring means and connected to said connection means and to said cache for transferring data as a peripheral data storage operation to said connection means from said cache and for electrically indicating which blocks of signals identified by said stored intent signal were sent to the connection means; and
1. (continued) second means connected to said connection means for receiving read peripheral commands from said connection means, connected to said transfer means and connected to said first means for being responsive to said stored intent signal and a received request from the connection means for any of said indicated sequential blocks to activate said transfer means to transfer data, as a peripheral data storage operation, from said backing store to said cache all the sequential data blocks identified by said received intent signal up to a predetermined maximum number of the data blocks indicated by said received intent signal.
2. The system set forth in claim 1 wherein said cache has a predetermined number of address registers, each said address register for storing an address of said addressable storage registers of said cache, and said predetermined number of address registers equalling said predetermined maximum number.
3. The system set forth in claims 1 or 2 wherein said backing store has a plurality of cylinders of storage registers, access means in said transferring means to access said registers in different ones of said cylinders but having substantial access time delays in accessing a cylinder other than a cylinder last accessed, said access means connected to said second means for limiting the number of data blocks in each said sequence of data block transfers of said transfer means to a transfer of data blocks from said backing store to said cache only to those data blocks stored in the same cylinder of said backing store.
4. The system set forth in claim 1 further comprising:
said second means further having command means connected to said connection means for receiving peripheral commands from said connection means which indicate by any one of a plurality of logical device addresses addressed ones of said backing stored addressable data storage registers; and logical device means coupled to said command means and read means for storing said received peripheral commands for assigning such received peripheral commands to said addressed ones of said data storage registers and to said logical device addresses such that said data storage operations for respective ones of said logical device addresses are simultaneously and separately identifiable.
5. The storage system set forth in claim 1 wherein said command means includes means for receiving a chain of said peripheral commands for activating the storage system to perform a sequence of operations with respect to said slow-speed backing store; and control means connected to said cache for receiving predetermined ones of said chained peripheral commands from said command means and for deriving a plurality of chained ICW commands from said received predetermined ones of said chained peripheral commands for commanding data transfers between said cache and said backing store and connected to said second means for supplying said ICW commands thereto for activating said second means to effect the commanded data transfers between said cache and backing store in a
5. (continued) same sequence as said chained peripheral commands were received from said host and said second means interleaving execution of said chained peripheral commands by said transfer means intermediate execution of said chained control means commands.
6. The storage system set forth in claim 2 wherein said command means includes means for receiving a chain of said peripheral commands for activating the storage system to perform a sequence of operations with respect to said slow-speed backing store; and control means connected to said cache for receiving predetermined ones of said chained peripheral commands from said command means and for deriving a plurality of chained ICW commands from said received predetermined ones of said chained peripheral commands for commanding data transfers between said cache and said backing store and connected to said second means for supplying said ICW commands thereto for activating said second means to effect the commanded data transfers between said cache and backing store in a same sequence as said chained peripheral commands were received from said host and said second means interleaving execution of said chained peripheral commands by said transfer means intermediate execution of said chained control means commands.
7. The storage system set forth in claim 4 wherein said command means includes means for receiving a chain of said peripheral commands for activating the storage system to perform a sequence of operations with respect to said slow-speed backing store; and control means connected to said cache for receiving predetermined ones of said chained peripheral commands from said command means and for deriving a plurality of chained ICW commands from said received predetermined ones of said chained peripheral commands for commanding data transfers between said cache and said backing store and connected to said second means for supplying said ICW commands thereto for activating said second means to effect the commanded data transfers between said cache and backing store in a same sequence as said chained peripheral commands were received from said host and said second means interleaving execution of said chained peripheral commands by said transfer means intermediate execution of said chained control means commands.
8. The storage system set forth in claims 5, 6 or 7 further including a first plurality of cache address registers connected to said data cache and a one of said ICW commands and including means for indicating addresses for a one of said first plurality of cache address registers such that a plurality of said one ICW
commands in a chain causes said first plurality of data blocks to be inserted at arbitrary ones of said addressable registers, respectively, of said data cache.
9. A peripheral data storage system having a plurality of addressable direct access data storage devices (DASD), each of said DASDs including a plurality of addressable data storage cylinders with each cylinder having a plurality of addressable data storage segments, each of said data storage segments having a predetermined data storage capacity;
a high-speed random-access buffer having a first plurality of storage address registers and a plurality of addressable buffer segments having the same data storage capacity as the data storage capacity of said data storage segments and said buffer segments being addressable by any one of said first plurality of storage address registers of said buffer;
means for assigning random ones of said buffer segments to store a copy of data stored or to be stored in predetermined sequential ones of said data storage segments;
means for storing addresses of said assigned random ones of said buffer segment addresses into said first plurality of storage address registers, respectively; and means for transferring a first plurality of said blocks of data from a one of said DASDs to said buffer at said buffer addresses stored in said storage address registers, said data transfer of said first plurality of said blocks of data being without interruption.
10. The peripheral data storage system set forth in claim 9 further including means coupled to said means for transferring for limiting said data transfer of a first plurality of blocks to blocks stored within the same data storage cylinder.
11. The peripheral storage system set forth in claim 10 further including means connected to said means for transferring for defining an addressable extent of said DASD for indicating data storage segments by addresses within the addressable extent and said limiting means limiting the number of said first plurality of blocks to those blocks of data stored within one of said data storage cylinders and within said defined extent.
12. In a peripheral storage system having a plurality of addressable direct access storage devices (DASDs), each of said DASDs having a plurality of addressable cylinders, and each said cylinder having a plurality of addressable memory segments, a high-speed buffer store having a plurality of buffer segments each having a capacity equal to the capacity of said memory segments, means for attaching the system to a host processor for receiving peripheral commands therefrom and for transferring data therewith;
means to receive peripheral commands from an attached host processor to fetch data stored in said DASDs;
a digital processor having a control store for storing programs of instructions for operating the storage system;
12. (continued) signal means connected to said DASDs, said buffer and said means for attaching for transferring data between said DASD, said buffer, and said attached host;
the improvement comprising:
program means in said control store for enabling said processor to operate the storage system to transfer one segment of data from said DASDs to said high-speed buffer store for each request for a segment of data from said host;
further program means in said control store for enabling said processor to receive an indication from said host that a number of segments of data are to be transferred and for indicating the number of memory segments in said DASD from which said plurality of blocks of data will be fetched, program means for decrementing said number of segments to be transferred to the host each time a segment is transferred to said host; and transfer program means for enabling said digital processor to operate said storage system upon receipt of a read command received from said host via said means for attaching for a given segment of data contained within a memory segment defined within said extent to transfer a copy of all segments of data beginning with the requested segment and all sequentially subsequent segments identified in said extent, whether requested or not, and stored within said extent in said DASD to said buffer.
13. The peripheral storage system set forth in claim 12 further including program means in said control store for enabling the processor to operate the storage system to limit the number of segments of data being transferred in a sequence of data transfers between a one of said DASDs and said buffer store to segments stored in a given one cylinder of said one DASD during one sequence of segment transfers.
14. The peripheral storage system set forth in claims 12 or 13 further including a plurality of address registers in said buffer store arranged such that any one address register addresses any of said buffer segments; and said program means further enabling said processor to only transfer a maximum number of segments of data from a DASD in a single sequence of segment transfers equal to the plurality of said address registers.
15. The peripheral storage system set forth in claim 12 further including a program means enabling said processor to operate said system to separately enqueue a plurality of read operations to be performed for each of said addressable DASDs, respectively, such that a plurality of read operations independently occur from said addressable DASDs, respectively, to the buffer store.
16. The peripheral storage system set forth in claim 13 further including a program means enabling said processor to operate said system to separately enqueue a plurality of read operations to be performed for each of said addressable DASDs, respectively, such that a plurality of read operations independently occur from said addressable DASDs, respectively, to the buffer store.
17. The peripheral storage system set forth in claim 15 or 16 further including program means in said control store for enabling said processor to accept any one of a plurality of logical device addresses received from said means for attaching with a requested read operation to be performed for each of said logical device addresses, respectively, for each of said addressable DASDs, respectively, such that each of said memory segments can be addressed through any of a said plurality of said logical addresses;
additional program means stored in said control store for additionally enabling said processor for enqueuing said read operations for each said DASD in accordance with the order of receipt of said received requests for read operations using any of said logical addresses; and further program means stored in said control store for further enabling said processor to access such DASD
via a fourth address for each of said DASDs, which fourth address enables directly accessing each such DASD to thereby bypass said buffer store.
CA000406369A 1981-08-03 1982-06-30 Method and system for handling sequential data in a hierarchical store Expired CA1176382A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US289,632 1981-08-03
US06/289,632 US4533995A (en) 1981-08-03 1981-08-03 Method and system for handling sequential data in a hierarchical store

Publications (1)

Publication Number Publication Date
CA1176382A true CA1176382A (en) 1984-10-16

Family

ID=23112373

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000406369A Expired CA1176382A (en) 1981-08-03 1982-06-30 Method and system for handling sequential data in a hierarchical store

Country Status (8)

Country Link
US (1) US4533995A (en)
EP (1) EP0072108B1 (en)
JP (1) JPS5823376A (en)
AU (1) AU548909B2 (en)
CA (1) CA1176382A (en)
DE (1) DE3279135D1 (en)
ES (1) ES8305146A1 (en)
SG (1) SG25991G (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE36989E (en) * 1979-10-18 2000-12-12 Storage Technology Corporation Virtual storage system and method
JPS59100964A (en) * 1982-12-01 1984-06-11 Hitachi Ltd Parallel transfer type director device
US4774654A (en) * 1984-12-24 1988-09-27 International Business Machines Corporation Apparatus and method for prefetching subblocks from a low speed memory to a high speed memory of a memory hierarchy depending upon state of replacing bit in the low speed memory
JPS62243044A (en) * 1986-04-16 1987-10-23 Hitachi Ltd Control system for disk cache memory
US4928239A (en) * 1986-06-27 1990-05-22 Hewlett-Packard Company Cache memory with variable fetch and replacement schemes
US4853846A (en) * 1986-07-29 1989-08-01 Intel Corporation Bus expander with logic for virtualizing single cache control into dual channels with separate directories and prefetch for different processors
US5008820A (en) * 1987-03-30 1991-04-16 International Business Machines Corporation Method of rapidly opening disk files identified by path names
US5134563A (en) * 1987-07-02 1992-07-28 International Business Machines Corporation Sequentially processing data in a cached data storage system
US4926317A (en) * 1987-07-24 1990-05-15 Convex Computer Corporation Hierarchical memory system with logical cache, physical cache, and address translation unit for generating a sequence of physical addresses
US5133061A (en) * 1987-10-29 1992-07-21 International Business Machines Corporation Mechanism for improving the randomization of cache accesses utilizing abit-matrix multiplication permutation of cache addresses
US5321823A (en) * 1988-07-20 1994-06-14 Digital Equipment Corporation Digital processor with bit mask for counting registers for fast register saves
US5253351A (en) * 1988-08-11 1993-10-12 Hitachi, Ltd. Memory controller with a cache memory and control method of cache memory including steps of determining memory access threshold values
US5140683A (en) * 1989-03-01 1992-08-18 International Business Machines Corporation Method for dispatching work requests in a data storage hierarchy
US5155831A (en) * 1989-04-24 1992-10-13 International Business Machines Corporation Data processing system with fast queue store interposed between store-through caches and a main memory
US5133060A (en) * 1989-06-05 1992-07-21 Compuadd Corporation Disk controller includes cache memory and a local processor which limits data transfers from memory to cache in accordance with a maximum look ahead parameter
US5185694A (en) * 1989-06-26 1993-02-09 Motorola, Inc. Data processing system utilizes block move instruction for burst transferring blocks of data entries where width of data blocks varies
US5257370A (en) * 1989-08-29 1993-10-26 Microsoft Corporation Method and system for optimizing data caching in a disk-based computer system
US5133058A (en) * 1989-09-18 1992-07-21 Sun Microsystems, Inc. Page-tagging translation look-aside buffer for a computer memory system
US5544347A (en) * 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
JPH05137393A (en) * 1991-11-08 1993-06-01 Victor Co Of Japan Ltd Information recorder/reproducer
JP3087429B2 (en) * 1992-04-03 2000-09-11 株式会社日立製作所 Storage system
US5408653A (en) * 1992-04-15 1995-04-18 International Business Machines Corporation Efficient data base access using a shared electronic store in a multi-system environment with shared disks
US5659713A (en) * 1992-04-24 1997-08-19 Digital Equipment Corporation Memory stream buffer with variable-size prefetch depending on memory interleaving configuration
US5381539A (en) * 1992-06-04 1995-01-10 Emc Corporation System and method for dynamically controlling cache management
JP3390482B2 (en) * 1992-06-12 2003-03-24 株式会社リコー Facsimile machine
US5715424A (en) * 1992-12-10 1998-02-03 International Business Machines Corporation Apparatus and method for writing data onto rewritable optical media
WO1995009397A1 (en) * 1993-09-30 1995-04-06 Apple Computer, Inc. System for decentralized backing store control of virtual memory in a computer
JPH08328752A (en) * 1994-06-10 1996-12-13 Canon Inc Device and method for recording information
US5684986A (en) * 1995-06-07 1997-11-04 International Business Machines Corporation Embedded directory method and record for direct access storage device (DASD) data compression
JPH09190465A (en) * 1996-01-11 1997-07-22 Yamaha Corp Method for referring to classified and stored information
WO1998040810A2 (en) 1997-03-12 1998-09-17 Storage Technology Corporation Network attached virtual tape data storage subsystem
US6658526B2 (en) 1997-03-12 2003-12-02 Storage Technology Corporation Network attached virtual data storage subsystem
US6154813A (en) * 1997-12-23 2000-11-28 Lucent Technologies Inc. Cache management system for continuous media system
US6070225A (en) * 1998-06-01 2000-05-30 International Business Machines Corporation Method and apparatus for optimizing access to coded indicia hierarchically stored on at least one surface of a cyclic, multitracked recording device
US6094605A (en) * 1998-07-06 2000-07-25 Storage Technology Corporation Virtual automated cartridge system
US6327644B1 (en) 1998-08-18 2001-12-04 International Business Machines Corporation Method and system for managing data in cache
US6381677B1 (en) 1998-08-19 2002-04-30 International Business Machines Corporation Method and system for staging data into cache
US6141731A (en) * 1998-08-19 2000-10-31 International Business Machines Corporation Method and system for managing data in cache using multiple data structures
US6330621B1 (en) 1999-01-15 2001-12-11 Storage Technology Corporation Intelligent data storage manager
US6728823B1 (en) * 2000-02-18 2004-04-27 Hewlett-Packard Development Company, L.P. Cache connection with bypassing feature
US6834324B1 (en) 2000-04-10 2004-12-21 Storage Technology Corporation System and method for virtual tape volumes
JP4162184B2 (en) * 2001-11-14 2008-10-08 株式会社日立製作所 Storage device having means for acquiring execution information of database management system
DE10156749B4 (en) * 2001-11-19 2007-05-10 Infineon Technologies Ag Memory, processor system and method for performing write operations on a memory area
US20030126132A1 (en) * 2001-12-27 2003-07-03 Kavuri Ravi K. Virtual volume management system and method
US7437593B2 (en) * 2003-07-14 2008-10-14 International Business Machines Corporation Apparatus, system, and method for managing errors in prefetched data
TWI399651B (en) * 2008-09-12 2013-06-21 Communication protocol method and system for input / output device
US20130326113A1 (en) * 2012-05-29 2013-12-05 Apple Inc. Usage of a flag bit to suppress data transfer in a mass storage system having non-volatile memory
CN108932206B (en) * 2018-05-21 2023-07-21 南京航空航天大学 Hybrid cache architecture and method of three-dimensional multi-core processor
CN110879687B (en) * 2019-10-18 2021-03-16 蚂蚁区块链科技(上海)有限公司 Data reading method, device and equipment based on disk storage
CN111198750A (en) * 2020-01-06 2020-05-26 紫光云技术有限公司 Method for improving read-write performance of virtual disk

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL226419A (en) * 1957-04-01
US3341817A (en) * 1964-06-12 1967-09-12 Bunker Ramo Memory transfer apparatus
US3521240A (en) * 1968-03-06 1970-07-21 Massachusetts Inst Technology Synchronized storage control apparatus for a multiprogrammed data processing system
US3820078A (en) * 1972-10-05 1974-06-25 Honeywell Inf Systems Multi-level storage system having a buffer store with variable mapping modes
JPS5323052B2 (en) * 1973-09-11 1978-07-12
US4047243A (en) * 1975-05-27 1977-09-06 Burroughs Corporation Segment replacement mechanism for varying program window sizes in a data processing system having virtual memory
US4126893A (en) * 1977-02-17 1978-11-21 Xerox Corporation Interrupt request controller for data processing system
JPS5444176A (en) * 1977-09-12 1979-04-07 Mitsubishi Electric Corp Tilt signal generating circuit
US4371927A (en) * 1977-11-22 1983-02-01 Honeywell Information Systems Inc. Data processing system programmable pre-read capability
US4157587A (en) * 1977-12-22 1979-06-05 Honeywell Information Systems Inc. High speed buffer memory system with word prefetch
US4195342A (en) * 1977-12-22 1980-03-25 Honeywell Information Systems Inc. Multi-configurable cache store system
US4399503A (en) * 1978-06-30 1983-08-16 Bunker Ramo Corporation Dynamic disk buffer control unit
US4262332A (en) * 1978-12-28 1981-04-14 International Business Machines Corporation Command pair to improve performance and device independence
JPS5911135B2 (en) * 1979-01-17 1984-03-13 株式会社日立製作所 Data transfer method of data processing system
US4371924A (en) * 1979-11-09 1983-02-01 Rockwell International Corp. Computer system apparatus for prefetching data requested by a peripheral device from memory
JPS5680872A (en) * 1979-12-06 1981-07-02 Fujitsu Ltd Buffer memory control system
US4370710A (en) * 1980-08-26 1983-01-25 Control Data Corporation Cache memory organization utilizing miss information holding registers to prevent lockup from cache misses
US4394732A (en) * 1980-11-14 1983-07-19 Sperry Corporation Cache/disk subsystem trickle

Also Published As

Publication number Publication date
EP0072108B1 (en) 1988-10-19
JPS5823376A (en) 1983-02-12
AU548909B2 (en) 1986-01-09
JPH0147813B2 (en) 1989-10-17
ES514648A0 (en) 1983-03-16
US4533995A (en) 1985-08-06
EP0072108A2 (en) 1983-02-16
DE3279135D1 (en) 1988-11-24
AU8670182A (en) 1983-02-10
EP0072108A3 (en) 1986-05-14
SG25991G (en) 1991-06-21
ES8305146A1 (en) 1983-03-16

Similar Documents

Publication Publication Date Title
CA1176382A (en) Method and system for handling sequential data in a hierarchical store
US4636946A (en) Method and apparatus for grouping asynchronous recording operations
EP0106212B1 (en) Roll mode for cached data storage
EP0071719B1 (en) Data processing apparatus including a paging storage subsystem
US5530829A (en) Track and record mode caching scheme for a storage system employing a scatter index table with pointer and a track directory
EP0075688B1 (en) Data processing apparatus including a selectively resettable peripheral system
US4298929A (en) Integrated multilevel storage hierarchy for a data processing system with improved channel to memory write capability
EP0405882B1 (en) Move 16 block move and coprocessor interface instruction
US5864876A (en) DMA device with local page table
US4499539A (en) Method and apparatus for limiting allocated data-storage space in a data-storage unit
US5590379A (en) Method and apparatus for cache memory access with separate fetch and store queues
EP0377970B1 (en) I/O caching
JPS624745B2 (en)
JPH0727495B2 (en) Data transfer method
JPS589277A (en) Data processor
JPS6367686B2 (en)
JPS6015760A (en) Staging of dasd cash information
US5696931A (en) Disc drive controller with apparatus and method for automatic transfer of cache data
US5446844A (en) Peripheral memory interface controller as a cache for a large data processing system
US5287482A (en) Input/output cache
US6480940B1 (en) Method of controlling cache memory in multiprocessor system and the multiprocessor system based on detection of predetermined software module
EP0072107A2 (en) Peripheral sub-systems accommodating guest operating systems
JP2002024007A (en) Processor system
JPH086858A (en) Cache controller
JPH06301600A (en) Storage device

Legal Events

Date Code Title Description
MKEC Expiry (correction)
MKEX Expiry