US20010049768A1 - Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof - Google Patents

Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof Download PDF

Info

Publication number
US20010049768A1
US20010049768A1 (application US09/779,845)
Authority
US
United States
Prior art keywords
cache memory
data
memory modules
configuration information
control device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/779,845
Other versions
US6615313B2 (en
Inventor
Tadaomi Kato
Hideaki Omura
Hiromi Kubota
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Assigned to FUJITSU LIMITED reassignment FUJITSU LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KATO, TADAOMI, KUBOTA, HIROMI, OMURA, HIDEAKI
Publication of US20010049768A1 publication Critical patent/US20010049768A1/en
Application granted granted Critical
Publication of US6615313B2 publication Critical patent/US6615313B2/en
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/1666 Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0868 Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/28 Using a specific disk cache architecture
    • G06F 2212/283 Plural cache memories

Definitions

  • FIG. 2 is a diagram of a computer system 300 of an embodiment of the present invention.
  • computer system 300 comprises server 11 and RAID control device 12 .
  • RAID control device 12 corresponds to RAID control device 200 shown in FIG. 1( a ).
  • Server 11 transmits a write request to RAID control device 12 to write data from the server 11 to logical volumes 12 g allocated among disk devices 12 f by disk control module 12 e.
  • the RAID control device 12 also comprises interface control module 12 a , configuration information management module 12 b , cache control module 12 c , the cache memory 12 d , and disk control module 12 e , which controls the several disk devices 12 f.
  • the configuration information (corresponding to configuration information 202 shown in FIG. 1( b )) that keeps track of which cache memory 12 d the write data from the server 11 is held in, is stored in the configuration information management module 12 b.
  • FIG. 3 shows a typical hardware configuration for a computer system 400 corresponding to the computer system 300 shown in FIG. 2.
  • the subsystem control module 101 is coupled to an upper device 116 by an I/F (interface) module 118 , while the subsystem control module 101 comprises memory 101 a , MPU 101 b , and the bus interface module 101 c .
  • the above MPU 101 b operates according to a program stored in the memory 101 a .
  • transfer data and control data are also stored in the memory 101 a .
  • the subsystem control module 101 shown in FIG. 3 comprises the cache memory 12 d and the cache control module 12 c shown in FIG. 2.
  • the subsystem control module 101 in FIG. 3 corresponds to the section in the computer system 300 of FIG. 2 comprising the interface control module 12 a , the configuration information management module 12 b , the cache control module 12 c , and the cache memory 12 d.
  • device control module 103 comprises buffer 103 a , MPU 103 b , the memory 103 c (which stores among other things, the program for running the aforementioned MPU 103 b ), and bus interface module 103 d.
  • the above subsystem control module 101 and device control module 103 are connected by bus 120 .
  • the device control module 103 is connected to the disk drive group 105 by device I/F (interface) module 104 .
  • the device control module 103 shown in FIG. 3 corresponds to the disk control module 12 e shown in FIG. 2.
  • the cache memory comprises the three cache memory modules 12 d - 1 ˜ 12 d - 3 shown in FIG. 2.
  • Each cache memory module 12 d - 1 ˜ 12 d - 3 duplicates and stores the write data (received from upper device 116 corresponding to server 11 ) as primary data and secondary data.
  • the write data is to be written to disks 105 - 1 , 105 - 2 , 105 - 3 , 105 - 4 , . . . , 105 - x of the disk drive group 105 shown in FIG. 3.
  • FIG. 4 shows an example of the configuration information 202 when there are three cache memory modules 12 d - 1 , 12 d - 2 , and 12 d - 3 .
  • Moreover, FIG. 4 indicates the supervisory logical volumes 12 g corresponding to the primary data and the secondary data for each of the cache memory modules 12 d - 1 , 12 d - 2 , and 12 d - 3 .
  • FIG. 5( a ) shows the supervisory logical volume of each of the cache memory modules 12 d - 1 , 12 d - 2 , and 12 d - 3 during normal operation.
  • FIG. 5( b ) shows the supervisory logical volume 12 g of each of the cache memory modules 12 d - 1 , 12 d - 2 , and 12 d - 3 when there is a problem with cache memory 12 d.
  • the supervisory logical volumes 12 g of the cache memory module 12 d - 1 are 1 ˜ 10 for the primary data and 21 ˜ 30 for the secondary data,
  • the supervisory logical volumes 12 g of the cache memory 12 d - 2 are 11 ˜ 20 for the primary data and 1 ˜ 10 for the secondary data, and
  • the supervisory logical volumes 12 g of the cache memory 12 d - 3 are 21 ˜ 30 for the primary data and 11 ˜ 20 for the secondary data.
  • the configuration information 202 defines the logical volume names of the primary data and secondary data that the cache memory supervises. Then, whenever there is a problem with one of the cache memory modules and the number of cache memory modules is reduced, each of the remaining cache memory modules re-defines the logical volume names of the primary data and secondary data that it supervises.
  • the region of the one or more disks 105 would be divided into the number of cache memory modules ( 12 d ) n.
  • the configuration information 202 is set up as follows.
  • the region of the one or more disks 105 is divided up into the number of normally functioning cache memory modules ( 12 d ) m (m ≦ n).
  • the primary data and the secondary data to be written to the k th (0 ≦ k ≦ m-1) region of the disk 105 are held in the k th cache memory 12 d and in a cache memory 12 d other than the k th cache memory 12 d , respectively.
  • This allows configurations of the computer system of the present invention with the desired number n (n ≧ 3) of cache memory modules 12 d . It is possible to increase the number of cache memory modules 12 d during operation of the computer system of the present invention if this configuration information 202 is defined.
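The allocation rule above (the primary data for the k-th region in the k-th module, the secondary data in the next module, wrapping around) can be sketched in Python. The function name and dict layout below are illustrative, not from the patent:

```python
def build_configuration(num_modules, num_volumes):
    """Build a configuration table mapping each region of the disks to the
    cache module holding its primary copy and the module holding its
    secondary copy. The secondary copy goes to the next module, wrapping
    around, as in the FIG. 4 example (module 12d-1 supervises volumes 1-10
    as primary and 21-30 as secondary, and so on)."""
    config = {}
    volumes_per_region = num_volumes // num_modules
    for k in range(num_modules):
        first = k * volumes_per_region + 1
        last = (k + 1) * volumes_per_region
        config[k] = {
            "volumes": range(first, last + 1),
            "primary_module": k,
            "secondary_module": (k + 1) % num_modules,  # wrap to module 0
        }
    return config

# With three modules and 30 logical volumes this reproduces FIG. 4.
cfg = build_configuration(3, 30)
```

Because the same function works for any module count, it also illustrates why modules can be added one at a time: the table is simply rebuilt with a larger `num_modules`.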
  • FIG. 6, FIG. 7, and FIG. 8 are flowcharts describing how each of the cache memory modules 12 d functions during normal operation ( 600 ), when there is a problem with one of the cache memory modules 12 d ( 700 ), and when the number of cache memory modules 12 d is increased ( 800 ).
  • the interface control module 12 a refers ( 602 ) to the normal operation ( 600 ) configuration information 202 .
  • the interface control module 12 a writes the data (both the primary data and the secondary data) to the cache memory 12 d corresponding to the supervisory logical volume 12 g determined by the configuration information 202 and returns a completed response to the server 11 ( 602 ).
  • each of the cache control modules 12 c refers to the configuration information and, as shown in FIG. 6, the cache control module 12 c that manages the primary data determined in the configuration information writes out ( 604 ) to the disk device 12 f the data that was written to the cache memory modules 12 d .
  • the primary data will be written to the cache memory module 12 d - 1 and the secondary data will be written to the cache memory module 12 d - 2 .
  • the primary data written to the cache memory module 12 d - 1 will be written out to the disk device 12 f through the disk control module 12 e by the cache control module 12 c that manages the primary data.
  • the above secondary data that was written to the cache memory module 12 d - 2 will be deleted from the cache memory module 12 d - 2 .
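The normal-operation flow just described (steps 602 and 604) can be sketched as follows, with in-memory dicts standing in for the cache modules and the disk devices 12 f; all names are illustrative:

```python
class CacheModule:
    """A cache memory module holding primary copies for the volumes it
    supervises and secondary (mirror) copies for another module."""
    def __init__(self):
        self.primary = {}    # volume -> write data this module supervises
        self.secondary = {}  # volume -> mirror of another module's primary

disk = {}  # stands in for the disk devices

def handle_write(volume, data, primary_mod, secondary_mod):
    """Step 602: write both copies to cache, then immediately return the
    completed response; the disk is not touched yet (write back)."""
    primary_mod.primary[volume] = data
    secondary_mod.secondary[volume] = data
    return "completed"

def destage(volume, primary_mod, secondary_mod):
    """Step 604, performed asynchronously by the cache control module that
    manages the primary data: write the primary copy out to disk, after
    which the secondary copy is deleted from its module."""
    disk[volume] = primary_mod.primary.pop(volume)
    secondary_mod.secondary.pop(volume, None)
```

The point of the sketch is the ordering: the server's response depends only on the two cache writes, while the disk write and the secondary deletion happen later.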
  • When a problem occurs in one of the cache memory modules 12 d , the configuration information management module 12 b notifies ( 702 ) all of the interface control modules 12 a and all of the cache control modules 12 c of the problem, as shown in FIG. 7.
  • the cache control module 12 c immediately ( 704 ) writes data out to the disk device 12 f through the disk control module 12 e , that is, the secondary data held in the (i+1) th cache memory 12 d , which contains the same data as the primary data held in the cache memory 12 d - i where the problem occurred. This is as shown in FIG. 7 and as set forth in the configuration information 202 for normal operation 600 .
  • the interface control modules 12 a refer to the configuration information for a problem ( 700 ) with the operation of cache memory 12 d - i.
  • the interface control modules 12 a write ( 706 ) the primary data and secondary data to the cache memory 12 d determined by the configuration information for a problem ( 700 ) with the operation of cache memory 12 d - i and return a completed response to the server.
  • the cache control modules 12 c (including the (i+1) th cache control module described above) use the configuration information for problems ( 700 ) with cache memory 12 d - i and write out ( 708 ) the primary data, which is managed by the cache control module 12 c and written to the cache memory, to the disk device 12 f.
  • the interface control module 12 a writes the primary data to the cache memory 12 d - 2 and the secondary data to the cache memory 12 d - 3 and returns a completed response to the server 11 . Then, based on the configuration information for problem operation, each of the cache control modules 12 c would write out to disk device 12 f the primary data written to the cache memory by the cache control module 12 c that manages the primary data.
  • the configuration information management module 12 b includes the configuration information settings beginning when the cache memory 12 d was set up ( 802 ) and when the amount of cache memory 12 d is increased ( 804 ). As shown in FIG. 8, the configuration information management module 12 b notifies ( 806 ) all of the interface control modules 12 a and the cache control modules 12 c of the increase in the number of cache memory modules 12 d.
  • each of the cache control modules 12 c arranges the data according to the configuration information after the cache memory was set up ( 804 ) and then moves ( 808 ) the data among the cache memory modules.
  • the cache control module 12 c , which manages the primary data as set forth in the configuration information 202 after the cache memory is set up, writes ( 810 ) the primary data out to the disk device 12 f through the disk control module 12 e.
  • each of the interface control modules 12 a writes ( 812 ) the primary data and secondary data to the cache memory as set forth in the configuration information after the cache memory has been set up ( 804 ) and returns a completed response to the server 11 .
  • FIG. 9 shows an operation 900 of the present invention of increasing the number of cache memory modules 12 d from two to three.
  • once the primary data has been written out to the disk device 12 f , the corresponding secondary data is deleted from the cache memory 12 d.
  • the secondary data of the cache memory 12 d - 2 corresponding to the logical volumes 12 g 11 ˜ 15 is in either the cache memory 12 d - 1 or the cache memory 12 d - 3 , so the cache control module 12 c - 2 asks the cache control modules 12 c - 1 and 12 c - 3 to delete the secondary data from both the cache memory 12 d - 1 and the cache memory 12 d - 3 . If there is any secondary data, the secondary data is deleted from the cache memory 12 d.
  • When the interface control module 12 a receives a write data request from the server 11 , the interface control module 12 a writes the write data to the cache memory 12 d (for both the primary data and the secondary data) of the supervisory logical volumes 12 g determined by the new configuration information and returns a completed response to the server 11 .
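The increase flow (steps 806 to 812) can be sketched as a rebalancing pass, under the assumption that it simply moves each volume's copies to the modules named by the new configuration; the function name and `(primary_idx, secondary_idx)` layout are illustrative:

```python
def rebalance(primary, secondary, new_assignment):
    """primary/secondary: one dict per cache module, mapping volume -> data.
    new_assignment: volume -> (primary_idx, secondary_idx) under the
    enlarged configuration. Moves primary copies to their new modules and
    re-places secondary copies accordingly; in the patent, stale secondary
    copies whose primary was already destaged are simply deleted."""
    staged = {}
    for mod in primary:          # gather every volume's current primary copy
        staged.update(mod)
        mod.clear()
    for mod in secondary:        # stale secondaries are dropped outright
        mod.clear()
    for vol, data in staged.items():
        p, s = new_assignment[vol]
        primary[p][vol] = data
        secondary[s][vol] = data

# Going from two modules to three (as in FIG. 9): volume 11, previously
# supervised by module 1, moves under the new three-module configuration.
primary = [{1: "a", 2: "b"}, {11: "c"}, {}]
secondary = [{11: "c"}, {1: "a", 2: "b"}, {}]
new_assignment = {1: (0, 1), 2: (0, 1), 11: (1, 2)}
rebalance(primary, secondary, new_assignment)
```

After the pass, new write requests are served against the enlarged configuration, which is why the patent can add modules in units of one.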
  • the present invention includes at least three cache memory modules and duplicates and saves write data while circulating the write data among the three cache memory modules.
  • the write data that was to be written to a region controlled by the cache memory module in which the problem occurred is split up among the remaining cache memory modules. This makes it possible to increase the number of cache memory modules in units of one and allows more effective use of the cache memory than the conventional hot spare system.

Abstract

An input/output control device uses all of its cache memory effectively and allows cache memory modules to be added in increments of one. When the cache memory included in the input/output control device is operating normally and the input/output control device receives a write request from a processing device, the input/output control device returns a write request completed response after writing the data to cache memory as set forth in configuration information included in the input/output control device. The write data in the cache memory is then written to one or more disk devices asynchronously with the write completed response. When there is a problem with a cache memory module, the write data that was to be written to the region controlled by the cache memory module where the problem occurred is divided among the remaining cache memory modules. When cache memory modules are added, the data is first moved among the modules as set forth in the configuration information corresponding to the increase; thereafter, the input/output control device writes data to the cache memory in response to write requests from the processing device based on the updated configuration information.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to an input/output control device for disks, and, more particularly, to an input/output control device that maintains write data in multiple cache memory modules. [0002]
  • 2. Description of the Related Art [0003]
  • RAID control devices typically include cache memory. In response to a write request received from a server, the RAID control devices return a write request complete response to the server just by writing the data to the cache memory. The write data stored in the cache memory is written to disk devices asynchronously to the write complete response sent to the server. This type of operation is called a write back operation. The response time of a disk device that uses a cache memory to perform a write back operation in response to a write request is many times (approximately 10 times) shorter than disk devices that have no cache memory. [0004]
  • In order to guarantee permanence of the write data, RAID control devices normally have two cache memory modules and each cache memory module holds its own write data. The write data that is held in the two cache memory modules is referred to as primary data and secondary data below. By using this sort of configuration, even if there is a problem with one of the cache memory modules, the other cache memory module contains write data, so the write data will not be lost. [0005]
  • Normally, in order to guarantee the permanence of the write data, when there is a problem with the cache memory and there is only one cache memory, the write data stored in the cache memory is written back immediately to the disk. In this situation, the writing of the write data to the disk is synchronized and a write data complete response is returned to the server. This sort of operation is referred to as a write through operation. Switching from a write back operation to a write through operation when there is a problem with the cache memory requires approximately 10 times longer to respond to a write request. [0006]
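The two modes contrasted in the background can be sketched as follows; the function names are illustrative, and the roughly 10x latency gap is the figure stated in the text, not modeled here:

```python
def write_back(cache, pending, volume, data):
    """Write back: store the data in cache, queue the disk write for later,
    and acknowledge at once; the server sees only the cache latency."""
    cache[volume] = data
    pending.append(volume)   # flushed to disk asynchronously, later
    return "completed"

def write_through(cache, disk, volume, data):
    """Write through: the acknowledgement waits for the synchronous disk
    write, which the text puts at roughly 10x the write back response."""
    cache[volume] = data
    disk[volume] = data      # disk write finishes before responding
    return "completed"
```

The difference that matters to the patent is visible in the state after each call: write back acknowledges with the disk still untouched, write through does not.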
  • One method considered as a countermeasure to the above loss of performance, which results from shifting to a write through operation when there is trouble with the cache memory, is to prepare a spare cache memory to use when there are cache memory problems. [0007]
  • That is, in addition to two cache memories, Cache 1 102 and Cache 2 107, shown in FIG. 10(a), a spare cache memory, Cache X 106, which is not normally used, is prepared. [0008]
  • Then if, for instance, there is a problem with the cache memory Cache 1 102, the spare cache memory Cache X 106 can be used to hold the primary data and secondary data of the write data, as shown in FIG. 10(b). By using this sort of configuration, even if there are problems with the cache memory Cache 1 102 or Cache 2 107, the write back operation can be performed. The above system is referred to as a hot spare system 100. [0009]
  • However, the aforementioned conventional methods have the following disadvantages: [0010]
  • (1) The hot spare system 100 requires that a cache memory module 106 be prepared that is normally not used, as shown in FIG. 11(a), so the cache memory cannot all be used effectively. [0011]
  • (2) To ensure the permanence of the write data, when using two cache memory modules 102, 107 as described above, the cache memory modules 110, 112 must always be increased in pairs 108 when expanding the cache memory as shown in FIG. 11(b). In addition, when using the hot spare system 100, a cache memory module 106 must be prepared that is normally not used. [0012]
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to solve the above-mentioned problems. [0013]
  • Another object of the present invention is to provide an input/output control device that allows the effective use of all of the cache memory. [0014]
  • A further object of the present invention is to allow the amount of cache memory to be increased one cache memory module at a time. [0015]
  • The present invention comprises a computer system comprising an input/output control device coupled to one or more disk devices and coupled to and receiving a write request including data from a processing device. The input/output control device of the present invention comprises n (n>2) cache memory modules storing the data upon receiving the write request. The input/output control device transmits to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, stores the data from the cache memory modules to the one or more disk devices. The input/output control device divides the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information and sets up the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively. [0016]
  • Moreover, the present invention comprises a method and a computer readable medium which, when executed by a computer, causes the computer to execute the processes comprising storing in n (n>2) cache memory modules of an input/output control device data received in a write request from a processing device, transmitting by the input/output control device to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, storing the data from the cache memory modules to one or more disk devices, dividing by the input/output processing device the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information, and modifying by the input/output control device the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively. [0017]
  • These together with other objects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.[0018]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1(a) is a diagram showing an input/output control device of the present invention. [0019]
  • FIG. 1(b) shows an example of configuration information when the cache memory of the input/output control device of the present invention is functioning normally, is having problems, and when the cache memory is increased. [0020]
  • FIG. 2 is a diagram showing the overall configuration of the system of an embodiment of the present invention. [0021]
  • FIG. 3 is a diagram showing an example of the hardware configuration of the system of an embodiment of the present invention. [0022]
  • FIG. 4 is a diagram showing an example of the configuration information when there are three cache memory modules. [0023]
  • FIG. 5 is a diagram showing an example of the logical volumes supervised by the cache memory modules during normal operation and problem operation.
  • FIG. 6 is a diagram showing the process flow during normal operation of the cache memory. [0024]
  • FIG. 7 is a diagram showing the process flow during problem operation of the cache memory. [0025]
  • FIG. 8 is a diagram showing the process flow when the number of cache memory modules is increased. [0026]
  • FIG. 9 is a diagram showing the operation when increasing the cache memory. [0027]
  • FIGS. 10(a) and 10(b) are diagrams showing the conventional hot spare system. [0028]
  • FIGS. 11(a) and 11(b) are diagrams showing the problems with the conventional hot spare system. [0029]
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1(a) is a diagram showing an input/output control device 200 of the present invention. Input/output device 200 is also referred to as RAID (Redundant Array of Inexpensive Disks) control device 200. [0030]
  • FIG. 1(b) shows an example of configuration information 202 when cache memory of the input/output control device 200 of the present invention is functioning normally, is having problems, and when the amount of cache memory is increased. [0031]
  • In FIG. 1(a), 1-1˜1-n are the cache memory modules, referred to as cache memory 1. Each of the cache memory modules 1-1˜1-n duplicates and stores write data as primary data and secondary data. Moreover, the input/output device 200 of the present invention shown in FIG. 1(a) includes cache control modules 3-1, 3-2, . . . 3-n, each corresponding, respectively, to one of the cache memory modules 1-1, 1-2, . . . , 1-n. [0032]
  • When the cache memory 1 is functioning normally, as shown in the configuration information 202 corresponding to <NORMAL OPERATION> in FIG. 1(b), a region on one or more disks 2-1, 2-2, . . . , 2-q of disk device 2 is divided into the number of cache memory modules n. For instance, the write data (the primary data and the secondary data) that is written out to the kth (k=1˜n-1) region Rk of the disk device 2 is held in the kth cache memory and the (k+1)th cache memory, respectively, while the write data (the primary data and the secondary data) that is written out to the nth region Rn of the disk is held in the nth cache memory and the 1st cache memory, respectively. [0033]
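The circular placement rule above can be sketched as follows. This is an illustrative Python sketch, not part of the patent disclosure; the function name and the use of 1-based module numbering are assumptions made here for clarity.

```python
# Sketch of the <NORMAL OPERATION> placement rule: with n cache memory
# modules, region k's primary copy is held in the kth module and its
# secondary copy in the (k+1)th module, wrapping from module n back to
# module 1.

def placement(k: int, n: int) -> tuple[int, int]:
    """Return (primary_module, secondary_module) for region k (1-based)."""
    if not 1 <= k <= n:
        raise ValueError("region index must be in 1..n")
    return k, k % n + 1  # secondary wraps: region n -> module 1

# With n = 3: region 1 -> (1, 2), region 2 -> (2, 3), region 3 -> (3, 1)
```

Under this rule every module holds exactly one region's primary data and one region's secondary data, so no standby module sits idle.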
  • Then, after the data has been written out to cache memory 1 as set forth in the above configuration information 202, a write request complete response is returned in response to the write request from the processing device (not shown in FIGS. 1(a) or 1(b)). The write data stored in the cache memory 1 is written out to one or more of the disk devices 2 asynchronously with the write request complete response. [0034]
  • When there is a problem in the cache memory 1, the write data written out to the region supervised by the cache memory that had the problem is taken over by the remaining cache memory. [0035]
  • As shown in the <PROBLEM OPERATION> section of the configuration information 202 shown in FIG. 1(b), the region of the one or more disk devices 2 is divided by the number of normal cache memory modules m (m<n). For example, the write data (the primary data and the secondary data) that is written out to the kth (k=1˜m-1) region R′k of the disk device 2 is held in the kth cache memory and the (k+1)th cache memory, respectively, while the write data (the primary data and the secondary data) that is written out to the mth region R′m of the disk is held in the mth cache memory and the 1st cache memory, respectively. [0036]
  • When a problem occurs in the cache memory 1, the secondary data corresponding to the primary data that was held in the cache memory 1 at the time of the problem is immediately written out to the disk device 2. Then, using the configuration information 202 in effect at the time the problem occurred, data is written into the cache memory 1 in response to a write request from the processing device and data is written out to the disk device 2 from the cache memory 1. [0037]
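The takeover step can be sketched in the same vein. This is a hedged illustration, not the patent's implementation; the survivor ordering and the helper name are assumptions.

```python
# When one module fails, the disk region is re-divided among the m
# surviving modules and the same circular primary/secondary rule is
# applied to the survivors.

def rebuild_placement(modules: list[int], failed: int) -> dict[int, tuple[int, int]]:
    """Map each new region index 1..m to (primary, secondary) modules."""
    survivors = [mod for mod in modules if mod != failed]
    m = len(survivors)
    return {k: (survivors[k - 1], survivors[k % m]) for k in range(1, m + 1)}

# Failing module 1 of [1, 2, 3] leaves regions R'1 -> (2, 3), R'2 -> (3, 2)
```

Because the rule is circular, the surviving modules absorb the failed module's region without any dedicated spare being required.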
  • When increasing the amount of cache memory 1, the data held in the cache memory 1 is transferred as set forth in the configuration information 202 in effect when the increase was made. Then, as shown in the <CACHE MEMORY INCREASE> section of the configuration information 202 of FIG. 1(b), after the increase, the configuration information 202 is used to write the data into the cache memory 1 in response to the write request from the processing device, and the data is written out to the disk device 2 from the cache memory 1. [0038]
  • As above, the input/output control device 200 of the present invention circulates and holds duplicate write data in three or more cache memory modules 1. When there is a problem with one of the cache memory modules 1, the write data to be written to the region that is supervised by the cache memory 1 that had the problem is taken over by the remaining cache memory 1, providing for any configuration with n>2 cache memory modules 1. There is also no need to prepare any more cache memory 1 than is normally used, and all of the cache memory 1 is used effectively. [0039]
  • Embodiments of the Present Invention [0040]
  • FIG. 2 is a diagram of a computer system [0041] 300 of an embodiment of the present invention. As shown in FIG. 2, computer system 300 comprises server 11 and RAID control device 12. RAID control device 12 corresponds to RAID control device 200 shown in FIG. 1(a).
  • Server [0042] 11 transmits a write request to RAID control device 12 to write data from the server 11 to logical volumes 12 g allocated among disk devices 12 f by disk control module 12 e.
  • As described above with respect to the RAID control device [0043] 200 shown in FIG. 1(a), when the write request is received from the server 11 to write data from the server 11 to the disk devices 12 f, a write request complete response is returned to the server 11 upon writing the data to the cache memory 12 d. The write data stored in the cache memory 12 d is written out to the disk devices 12 f asynchronously to the write complete response (write back operation).
  • The RAID control device [0044] 12 also comprises interface control module 12 a, configuration information management module 12 b, cache control module 12 c, the cache memory 12 d, and disk control module 12 e that controls the disk and several disk devices 12 f. The configuration information (corresponding to configuration information 202 shown in FIG. 1(b)) that keeps track of which cache memory 12 d the write data from the server 11 is held in, is stored in the configuration information management module 12 b.
  • FIG. 3 shows a typical hardware configuration for a computer system [0045] 400 corresponding to the computer system 300 shown in FIG. 2.
  • In the computer system 400 shown in FIG. 3, the subsystem control module 101 is coupled to an upper device 116 by an I/F (interface) module 118, while the subsystem control module 101 comprises memory 101 a, MPU 101 b, and the bus interface module 101 c. The MPU 101 b operates according to a program stored in the memory 101 a. In addition to the program, transfer data and control data are also stored in the memory 101 a. The subsystem control module 101 in FIG. 3 corresponds to the section of the computer system 300 of FIG. 2 comprising the interface control module 12 a, the configuration information management module 12 b, the cache control module 12 c, and the cache memory 12 d. [0046]
  • Referring again to FIG. 3, device control module [0047] 103 comprises buffer 103 a, MPU 103 b, the memory 103 c (which stores among other things, the program for running the aforementioned MPU 103 b), and bus interface module 103 d.
  • The above subsystem control module [0048] 101 and device control module 103 are connected by bus 120. The device control module 103 is connected to the disk drive group 105 by device I/F (interface) module 104. The device control module 103 shown in FIG. 3 corresponds to the disk control module 12 e shown in FIG. 2.
  • In the embodiment of the computer system [0049] 400 of the present invention shown in FIG. 3, corresponding to the computer system 300 shown in FIG. 2, cache memory comprises the three cache memory modules 12 d-1˜12 d-3 shown in FIG. 2. Each cache memory module 12 d-1˜12 d-3 duplicates and stores the write data (received from upper device 116 corresponding to server 11) as primary data and secondary data.
  • The write data is to be written to disks 105-1, 105-2, 105-3, 105-4, . . . , 105-x of the disk drive group 105 shown in FIG. 3. [0050]
  • FIG. 4 shows an example of the configuration information 202 when there are three cache memory modules 12 d-1, 12 d-2, and 12 d-3. [0051] Moreover, FIG. 4 indicates the supervisory logical volumes 12 g corresponding to the primary data and the secondary data for each of the cache memory modules 12 d-1, 12 d-2, and 12 d-3. [0052]
  • FIG. 5(a) shows the supervisory logical volume of each of the cache memory modules 12 d-1, 12 d-2, and 12 d-3 during normal operation. FIG. 5(b) shows the supervisory logical volume 12 g of each of the cache memory modules 12 d-1, 12 d-2, and 12 d-3 when there is a problem with cache memory 12 d. [0053]
  • As shown in FIG. 4 and FIG. 5(a), during normal operation the supervisory logical volumes 12 g of the cache memory module 12 d-1 are 1˜10 for the primary data and 21˜30 for the secondary data; the supervisory logical volumes 12 g of the cache memory 12 d-2 are 11˜20 for the primary data and 1˜10 for the secondary data; and the supervisory logical volumes 12 g of the cache memory 12 d-3 are 21˜30 for the primary data and 11˜20 for the secondary data. [0054]
  • In this state, if, for example, there were to be a problem with the cache memory 12 d-1, then, based on the configuration information shown in FIG. 4, the supervisory logical volumes 12 g in the remaining cache memory modules 12 d-2 and 12 d-3 would change as shown in FIG. 5(b). For the cache memory 12 d-2, the supervisory logical volumes 12 g would be 1˜20 for the primary data and 21˜30 for the secondary data, while the supervisory logical volumes 12 g for the cache memory 12 d-3 would be 21˜30 for the primary data and 1˜20 for the secondary data. [0055]
  • That is, with three cache memory modules 12 d-1, 12 d-2, and 12 d-3, the write data circulates among the three cache memory modules 12 d-1, 12 d-2, and 12 d-3 while being duplicated. Therefore, when there is a problem with one of the cache memory modules (cache memory module 12 d-1 in FIG. 5(b)), the supervisory logical volume of the cache memory module where the problem occurred (cache memory module 12 d-1 in FIG. 5(b)) is shared among the remaining cache memory modules (cache memory modules 12 d-2 and 12 d-3 in FIG. 5(b)). [0056]
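The FIG. 4 and FIG. 5(a) assignments above can be reproduced as a small lookup table. This is a hypothetical sketch; the dictionary layout and the lookup helper are assumptions made here, not structures from the patent.

```python
# Supervisory logical volumes 12g during normal operation, per FIG. 5(a):
# module 12d-1 holds primary 1-10 and secondary 21-30, and so on, with
# the secondary ranges rotated one module around the ring.
NORMAL = {
    "12d-1": {"primary": range(1, 11),  "secondary": range(21, 31)},
    "12d-2": {"primary": range(11, 21), "secondary": range(1, 11)},
    "12d-3": {"primary": range(21, 31), "secondary": range(11, 21)},
}

def module_for_volume(volume: int, table: dict, copy: str = "primary") -> str:
    """Find which cache memory module supervises `volume` for `copy`."""
    for name, ranges in table.items():
        if volume in ranges[copy]:
            return name
    raise KeyError(volume)
```

For instance, volume 5's primary copy is supervised by 12d-1 and its secondary copy by 12d-2, matching the duplication described above.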
  • For this reason, the configuration information [0057] 202 defines the logical volume names of the primary data and secondary data that the cache memory supervises. Then, whenever there is a problem with one of the cache memory modules and the number of cache memory modules is reduced, each of the remaining cache memory modules re-defines the logical volume names of the primary data and secondary data that it supervises.
  • In the above example, there were three cache memory modules 12 d-1, 12 d-2, and 12 d-3. Generally, however, the region of the one or more disks 105 would be divided into the number of cache memory modules 12 d, n. The cache memory configuration information would be set up so that the primary data and the secondary data of the write data to be written to the kth (k=1˜n) region of the disk 105 would be held in the kth cache memory 12 d and in a cache memory other than the kth, respectively. [0058]
  • When a problem occurs, the configuration information 202 is set up as follows. The region of the one or more disks 105 is divided up into the number of normally functioning cache memory modules 12 d, m (m<n). The primary data and the secondary data to be written to the kth (k=1˜m) region of the disk 105 are held in the kth cache memory 12 d and in a cache memory 12 d that is not the kth cache memory 12 d, respectively. This allows configurations of the computer system of the present invention with any desired number n (n≧3) of cache memory modules 12 d. If this configuration information 202 is defined, it is also possible to increase the number of cache memory modules 12 d during operation of the computer system of the present invention. [0059]
  • FIG. 6, FIG. 7, and FIG. 8 are flowcharts describing how each of the cache memory modules [0060] 12 d functions during normal operation (600), when there is a problem with one of the cache memory modules 12 d (700), and when the number of cache memory modules 12 d is increased (800).
  • (1) Normal Operation [0061]
  • When all of the cache memory modules 12 d are functioning normally, as shown in FIG. 6, the interface control module 12 a refers to the normal operation (600) configuration information 202. When requested by server 11 to write data, the interface control module 12 a writes the data (both the primary data and the secondary data) to the cache memory 12 d corresponding to the supervisory logical volume 12 g determined by the configuration information 202 and returns a completed response to the server 11 (602). [0062]
  • At the same time, during normal operation each of the cache control modules 12 c refers to the configuration information and, as shown in FIG. 6, the cache control module 12 c that manages the primary data determined by the configuration information writes out (604) to the disk device 12 f the data that was written to the cache memory modules 12 d. [0063]
  • For example, if there is a write request in the range of logical volumes 12 g 1˜10, the primary data will be written to the cache memory module 12 d-1 and the secondary data will be written to the cache memory module 12 d-2. The primary data written to the cache memory module 12 d-1 will be written out to the disk device 12 f through the disk control module 12 e by the cache control module 12 c that manages the primary data. Once the data has been completely written out to the disk device 12 f, the above secondary data that was written to the cache memory module 12 d-2 will be deleted from the cache memory module 12 d-2. [0064]
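The write path just described (duplicate write, completed response, asynchronous write back, then deletion of the secondary copy) can be sketched minimally as follows. The class and function names are illustrative assumptions, not identifiers from the patent.

```python
class CacheModule:
    """A cache memory module holding primary and secondary copies."""
    def __init__(self) -> None:
        self.primary: dict[int, bytes] = {}
        self.secondary: dict[int, bytes] = {}

def handle_write(volume: int, data: bytes,
                 primary_mod: CacheModule, secondary_mod: CacheModule) -> str:
    # Duplicate the write data into both modules; only then can the
    # completed response be returned to the server.
    primary_mod.primary[volume] = data
    secondary_mod.secondary[volume] = data
    return "complete"

def write_back(volume: int, primary_mod: CacheModule,
               secondary_mod: CacheModule, disk: dict) -> None:
    # Asynchronous destage: flush the primary copy to the disk device,
    # then delete the now-redundant secondary copy from cache.
    disk[volume] = primary_mod.primary.pop(volume)
    del secondary_mod.secondary[volume]
```

Note that the completed response depends only on both cache copies existing; the destage to disk happens later, which is the write back behavior described above.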
  • (2) When There is a Problem With the Cache Memory. [0065]
  • When there is a problem with the cache memory 12 d-i (when there are three cache memory modules, i=1˜3), the configuration information management module 12 b notifies (702) all of the interface control modules 12 a and all of the cache control modules 12 c, as shown in FIG. 7. [0066]
  • In order to guarantee the permanence of the write data, when the (i+1)th cache control module 12 c receives the report about the problem, the cache control module 12 c immediately writes out (704) to the disk device 12 f, through the disk control module 12 e, the secondary data held in the (i+1)th cache memory 12 d-(i+1), which contains the same data as the primary data held in the cache memory 12 d-i where the problem occurred. This is as shown in FIG. 7 and as set forth in the configuration information 202 for normal operation (600). [0067]
  • At the same time, when each of the interface control modules 12 a is notified of the problem, the interface control modules 12 a refer to the configuration information (700) for a problem with the operation of cache memory 12 d-i. The interface control modules 12 a write (706) the primary data and secondary data to the cache memory 12 d determined by that configuration information and return a completed response to the server. [0068]
  • The cache control modules 12 c (including the (i+1)th cache control module described above) use the configuration information (700) for a problem with cache memory 12 d-i and write out (708) to the disk device 12 f the primary data that each cache control module 12 c manages and that has been written to the cache memory. [0069]
  • If, for example, as shown in FIG. 5(b), there were a problem with the cache memory 12 d-1, the secondary data in cache memory 12 d-2 corresponding to the logical volumes 12 g 1 through 10 managed by cache memory 12 d-1 would be immediately written out to disk device 12 f through the disk control module 12 e. [0070]
  • If the write request from the server 11 is, for example, a write request to the range of logical volumes 12 g 1˜10, the interface control module 12 a writes the primary data to the cache memory 12 d-2 and the secondary data to the cache memory 12 d-3 and returns a completed response to the server 11. Then, based on the configuration information for problem operation, each of the cache control modules 12 c that manages the primary data would write out to disk device 12 f the primary data written to the cache memory. [0071]
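The immediate destage of the surviving secondary copies when 12 d-1 fails can be sketched as below. This is a hedged illustration; the function name and data shapes are assumptions made for the example.

```python
# When a module fails, its primary copies become unreachable, so the
# surviving secondary copies of those volumes are written out to disk
# at once to guarantee the permanence of the write data.

def flush_orphaned_secondaries(failed_volumes: set[int],
                               survivor_secondary: dict[int, bytes],
                               disk: dict[int, bytes]) -> None:
    """Destage every secondary copy whose primary was on the failed module."""
    for vol in sorted(set(survivor_secondary) & failed_volumes):
        disk[vol] = survivor_secondary.pop(vol)
```

After this flush, subsequent writes simply follow the problem-operation configuration information, as the paragraph above describes.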
  • (3) When Increasing the Number of Cache Memory Modules [0072]
  • In addition to the configuration information 202 before the cache memory module increase, the configuration information management module 12 b holds the configuration information settings from when the cache memory 12 d was set up (802) and from when the amount of cache memory 12 d is increased (804). As shown in FIG. 8, the configuration information management module 12 b notifies (806) all of the interface control modules 12 a and the cache control modules 12 c of the increase in the number of cache memory modules 12 d. [0073]
  • Also as shown in FIG. 8, each of the cache control modules 12 c arranges the data according to the configuration information after the increase (804) and then moves (808) the data among the cache memory modules. [0074]
  • Concerning the data written in the cache memory modules 12 d, the cache control module 12 c that manages the primary data, as set forth in the configuration information 202 after the increase, writes (810) the data out to the disk device 12 f through the disk control module 12 e. [0075]
  • At the same time, each of the interface control modules 12 a, as shown in FIG. 8, writes (812) the primary data and secondary data to the cache memory as set forth in the configuration information after the increase (804) and returns a completed response to the server 11. [0076]
  • FIG. 9 shows an operation 900 of the present invention for increasing the number of cache memory modules 12 d from two to three. [0077]
  • As shown in the operation [0078] 900 of FIG. 9, if the two cache memory modules 12 d-1 and 12 d-2 are to be increased to three cache memory modules (that is, if cache memory module 12 d-3 is being added), all of the interface control modules 12 a and all of the cache control modules 12 c are notified of the addition of the cache memory 12 d-3.
  • This will cause the cache control module 12 c to shift the primary data corresponding to the logical volumes 12 g 16˜20 of the cache memory 12 d-2 to the cache memory 12 d-3, as shown by the arrow drawn with a dotted line in FIG. 9. At the same time, the secondary data corresponding to the logical volumes 12 g 11˜15 of the cache memory 12 d-1 will be shifted to the cache memory 12 d-3, as shown by the arrow drawn with a dotted line in FIG. 9. [0079]
  • The write back operation that takes place while data is being shifted among these cache memory modules 12 d-1, 12 d-2, and 12 d-3 is, as stated earlier, performed by the cache control module 12 c that manages the primary data under the new configuration information, which writes the data out to the disk device 12 f through the disk control module 12 e. [0080]
  • That is, the cache control module 12 c-1 of the cache memory module 12 d-1 for the logical volumes 12 g 1˜10, the cache control module 12 c-2 of the cache memory module 12 d-2 for the logical volumes 12 g 11˜15, and the cache control module 12 c-3 of the cache memory module 12 d-3 for the logical volumes 12 g 16˜20 write data out to the disk device 12 f through the disk control module 12 e. When the write back operation has been completed, the corresponding secondary data is deleted from the cache memory 12 d. [0081]
  • The secondary data of the cache memory 12 d-2 corresponding to the logical volumes 12 g 11˜15 may be in either the cache memory 12 d-1 or the cache memory 12 d-3, so the cache control module 12 c-2 asks the cache control modules 12 c-1 and 12 c-3 to delete that secondary data from both the cache memory 12 d-1 and the cache memory 12 d-3. If there is any such secondary data, it is deleted from the cache memory 12 d. [0082]
  • When the interface control module [0083] 12 a receives a write data request from the server 11, the interface control module 12 a writes the write data to the cache memory 12 d (for both the primary data and the secondary data) of the supervisory logical volumes 12 g determined by the new configuration information and returns a completed response to the server 11.
  • For example, if there is a write request for the logical volumes [0084] 12 g in a range of 11˜15, the primary data will be written to the cache memory 12 d-2 and the secondary data will be written to the cache memory 12 d-3.
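The layout implied by FIG. 9 after the increase can be written out as a table. This is a hypothetical sketch of the resulting assignments inferred from the paragraphs above; the table structure and helper name are assumptions.

```python
# Post-increase layout implied by FIG. 9: volumes 1-10 keep their primary
# in 12d-1 with secondary in 12d-2; volumes 11-15 have primary in 12d-2
# with secondary shifted to the new 12d-3; volumes 16-20 have primary
# shifted to 12d-3 with secondary remaining in 12d-1.
POST_INCREASE = [
    (range(1, 11),  ("12d-1", "12d-2")),
    (range(11, 16), ("12d-2", "12d-3")),
    (range(16, 21), ("12d-3", "12d-1")),
]

def writers(volume: int) -> tuple[str, str]:
    """Return (primary_module, secondary_module) for a logical volume."""
    for volumes, pair in POST_INCREASE:
        if volume in volumes:
            return pair
    raise KeyError(volume)
```

The table remains circular (each module's secondary copies sit one step around the ring), which is why a single module can be added without disturbing volumes 1˜10 at all.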
  • Effects of the Present Invention [0085]
  • As described above, the present invention includes at least three cache memory modules and duplicates and saves write data while circulating the write data among the cache memory modules. When there is a problem in the cache memory, the write data that was to be written to a region controlled by the cache memory module in which the problem occurred is split up among the remaining cache memory modules. This makes it possible to increase cache memory modules in units of one and allows more effective use of cache memory than the conventional hot spare system. [0086]
  • The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention which fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention. [0087]
  • Element Number List [0088]
  • [0089] 1-1˜1-n Cache Memory
  • [0090] 2 Disk Devices
  • [0091] 3 Cache control modules
  • [0092] 11 Server
  • [0093] 12 RAID Control Device
  • [0094] 12 a Interface Control Module
  • [0095] 12 b Configuration Information Management Module
  • [0096] 12 c Cache Control Module
  • [0097] 12 d Cache Memory
  • [0098] 12 e Disk Control Module
  • [0099] 12 f Disk Device
  • [0100] 12 g Logical Volume
  • [0101] 100 Hot spare system
  • [0102] 101 Subsystem Control Module
  • [0103] 101 a Memory
  • [0104] 101 b MPU
  • [0105] 101 c Bus I/F (interface) Module
  • [0106] 102 Cache 1
  • [0107] 103 Device Control Module
  • [0108] 103 a Buffer
  • [0109] 103 b MPU
  • [0110] 103 c Memory
  • [0111] 103 d Bus I/F (interface) control module
  • [0112] 104 Device I/F (interface) Module
  • [0113] 105 Disk Drive Group
  • [0114] 106 Cache X
  • [0115] 107 Cache 2
  • [0116] 108 Pair
  • [0117] 110 Cache 3
  • [0118] 112 Cache 4
  • [0119] 116 Upper Device
  • [0120] 118 Channel I/F (interface) Module
  • [0121] 120 Bus
  • [0122] 200 RAID control device
  • [0123] 202 Configuration information
  • [0124] 300 Computer system
  • 400 Computer system

Claims (25)

What is claimed is:
1. An input/output control device coupled to one or more disk devices and coupled to and receiving a write request including data from a processing device, said input/output control device comprising:
n (n>2) cache memory modules storing the data upon receiving the write request, wherein said input/output control device transmitting to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, storing the data from the cache memory modules to the one or more disk devices, wherein the input/output control device dividing the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information and setting up the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively.
2. The input/output control device as in
claim 1
, wherein when the write request is received from the processing device, the primary data and the secondary data that are to be written to the regions of the one or more disk devices are allocated to and stored in the cache memory modules by the input/output control device before returning a write request completed response to the processing device, wherein after the primary data stored in the cache memory modules has been written out to the one or more disk devices, the secondary data is deleted from the cache memory modules.
3. The input/output control device as in
claim 1
, wherein:
configuration information is set up by splitting up the regions of the one or more disk devices into the normally functioning cache memory modules m (m<n), sequentially allocating the primary data and secondary data of the write data to be written out to the kth region (k=1˜m) of the disk device, to the kth cache memory module, and to the non- kth cache memory module as the configuration information when there is a problem with one of the cache memory modules,
when there is a problem with one of the cache memory modules, the secondary data corresponding to the primary data stored in the cache memory module affected by the problem, is immediately written out to the disk device, and
using problem operation configuration information, the data is written out to cache memory module in response to the write request from the processing device and the data is also written from the cache memory module to the disk device.
4. The input/output control device as in
claim 1
, wherein:
after increasing the number of cache memory modules by p, the regions on one or more disk devices are divided into the number of normally functioning cache memory modules p, and the primary data and the secondary data of the data to be written out to the kth region (k=0˜p) of the disk are allocated sequentially to the kth cache memory module and the non-kth cache memory module, respectively, and set up as configuration information, and
when the number of cache memory modules has been increased, the configuration information set up when the increase was made, is used to shift the data stored in the cache memory modules and then to write the data to the cache memory modules in response to the write request from the processing device using the configuration information after the increase.
5. The input/output control device as in
claim 2
, wherein:
the configuration information is set up by splitting up the regions of the one or more disk devices into the normally functioning cache memory modules m (m<n), sequentially allocating the primary data and secondary data of the write data to be written out to the kth region (k=1˜m) of the disk device, to the kth cache memory module, and to the non- kth cache memory module as the configuration information when there is a problem with one of the cache memory modules,
when there is a problem with one of the cache memory modules, the secondary data corresponding to the primary data stored in the cache memory module affected by the problem, is immediately written out to the disk device, and
using problem operation configuration information, the data is written out to cache memory module in response to the write request from the processing device and the data is also written from the cache memory module to the disk device.
6. The input/output control device as in
claim 2
wherein:
after increasing the number of cache memory modules by p, the regions on one or more disk devices are divided into the number of normally functioning cache memory modules p, and the primary data and the secondary data of the data to be written out to the kth region (k=0˜p) of the disk are allocated sequentially to the kth cache memory module and the non-kth cache memory module, respectively, and set up as configuration information, and
when the number of cache memory modules has been increased, the configuration information set up when the increase was made, is used to shift the data stored in the cache memory modules and then to write the data to the cache memory modules in response to the write request from the processing device using the configuration information after the increase.
7. The input/output control device as in
claim 3
, wherein:
after increasing the number of cache memory modules by p, the regions on one or more disk devices are divided into the number of normally functioning cache memory modules p, and the primary data and the secondary data of the data to be written out to the kth region (k=0˜p) of the disk are allocated sequentially to the kth cache memory module and the non-kth cache memory module, respectively, and set up as configuration information, and
when the number of cache memory modules has been increased, the configuration information set up when the increase was made, is used to shift the data stored in the cache memory modules and then to write the data to the cache memory modules in response to the write request from the processing device using the configuration information after the increase.
8. The input/output control device as in
claim 1
, further comprising a configuration information management module storing the configuration information.
9. The input/output control device as in
claim 1
, wherein when the write request is received from the processing device, the primary data and the secondary data that are to be written to the regions of the one or more disk devices are allocated to and stored in the cache memory modules by the input/output control device before returning a write request completed response to the processing device, wherein after the primary data stored in the cache memory modules has been written out to the one or more disk devices, the secondary data is deleted from the cache memory modules.
10. The input/output control device as in
claim 1
, wherein:
the configuration information is set up by splitting up the regions of the one or more disk devices into the normally functioning cache memory modules m(m<n), sequentially allocating the primary data and secondary data of the write data to be written out to the kth region (k=1˜m) of the disk device, to the kth cache memory module, and to the non- kth cache memory module as the configuration information when there is a problem with one of the cache memory modules.
11. The input/output control device as in
claim 1
, wherein:
when there is a problem with one of the cache memory modules, the secondary data corresponding to the primary data stored in the cache memory module affected by the problem, is immediately written out to the disk device, and
using problem operation configuration information, the data is written out to cache memory module in response to the write request from the processing device and the data is also written from the cache memory module to the disk device.
12. The input/output control device as in
claim 1
, wherein:
after increasing the number of cache memory modules by p, the regions on one or more disk devices are divided into the number of normally functioning cache memory modules p, and the primary data and the secondary data of the data to be written out to the kth region (k=0˜p) of the disk are allocated sequentially to the kth cache memory module and the non-kth cache memory module, respectively, and set up as configuration information.
13. The input/output control device as in
claim 12
, wherein when the number of cache memory modules has been increased, the configuration information set up when the increase was made, is used to shift the data stored in the cache memory modules and then to write the data to the cache memory modules in response to the write request from the processing device using the configuration information after the increase.
14. The input/output control device as in
claim 1
, wherein the input/output control device comprises a RAID control device.
15. An apparatus comprising:
one or more disk devices;
a server transmitting a write request including data to be stored in the one or more disk devices; and
a control device, coupled to the disk devices and to the server and receiving the write request, comprising:
n (n>2) cache memory modules storing the data upon receiving the write request, wherein said input/output control device transmitting to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, storing the data from the cache memory modules to the one or more disk devices, wherein the input/output control device dividing the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information and modifying the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively.
16. The apparatus as in
claim 15
, wherein the control device comprises an input/output control device.
17. The apparatus as in
claim 15
, wherein the one or more disk devices are provided in a RAID configuration and the control device comprises a RAID control device.
18. A method comprising:
storing in n (n>2) cache memory modules of an input/output control device data received in a write request from a processing device;
transmitting by the input/output control device to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, storing the data from the cache memory modules to one or more disk devices;
dividing by the input/output control device the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information; and
setting up by the input/output control device the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively.
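The region-to-module mapping recited in the method above can be sketched in a few lines. This is a hypothetical illustration only: zero-based module indices and the (k + 1) mod n choice for the non-kth secondary module are assumptions for the sketch, not details taken from the claims.

```python
# Hypothetical sketch of the claimed configuration information: the disk
# regions are divided among the n (n > 2) cache memory modules, with the
# primary copy of region k's write data held in the kth module and the
# secondary copy in some non-kth module. The (k + 1) % n secondary choice
# is an illustrative assumption.

def build_configuration(n):
    """Return a region -> (primary module, secondary module) map."""
    if n <= 2:
        raise ValueError("the claims require n > 2 cache memory modules")
    return {k: (k, (k + 1) % n) for k in range(n)}

config = build_configuration(4)
# every region's secondary copy lives in a module other than its primary
assert all(p != s for p, s in config.values())
```

Any rule that places the secondary in a module different from the primary satisfies the "non-kth cache memory module" language; the next-module rotation is just the simplest such rule.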
19. The method as in
claim 18
, further comprising:
when the write request is received from the processing device, allocating to and storing in the cache memory modules by the input/output control device the primary data and the secondary data that are to be written to the regions of the one or more disk devices before returning a write request complete response to the processing device; and
deleting the secondary data from the cache memory modules after the primary data stored in the cache memory modules has been written out to the one or more disk devices.
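The write sequence just recited — both copies cached before the completion response, asynchronous destage of the primary, then deletion of the secondary — can be modeled with a minimal sketch. All class and method names below are invented for illustration; the patent does not prescribe an implementation.

```python
# Minimal model of the claimed write flow (names are illustrative):
# 1) on a write request, store primary and secondary copies in two
#    different cache memory modules before acknowledging completion;
# 2) later, asynchronously, write the primary copy out to disk;
# 3) only after that destage, delete the secondary copy from cache.

class Controller:
    def __init__(self, n):
        self.caches = [dict() for _ in range(n)]  # per-module: region -> data
        self.disk = {}
        # assumed mapping: primary in module k, secondary in module (k+1) % n
        self.config = {k: (k, (k + 1) % n) for k in range(n)}

    def write(self, region, data):
        p, s = self.config[region]
        self.caches[p][region] = data   # primary copy
        self.caches[s][region] = data   # secondary copy
        return "complete"               # response returned before any disk I/O

    def destage(self, region):
        p, s = self.config[region]
        self.disk[region] = self.caches[p][region]  # primary written to disk
        del self.caches[s][region]                  # secondary deleted afterward
```

A write followed by a destage leaves the data on disk, the primary copy still cached, and the secondary copy gone, matching the deletion step of the claim.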
20. The method as in
claim 18
, further comprising:
setting up the configuration information by splitting up the regions of the one or more disk devices among the m (m<n) normally functioning cache memory modules and sequentially allocating the primary data and secondary data of the write data to be written out to the kth region (k=1˜m) of the disk device to the kth cache memory module and to the non-kth cache memory module as the configuration information when there is a problem with one of the cache memory modules,
immediately writing out to the disk device, when there is a problem with one of the cache memory modules, the secondary data corresponding to the primary data stored in the cache memory module affected by the problem, and
using the problem operation configuration information, writing the data out to the cache memory modules in response to the write request from the processing device and writing the data from the cache memory modules to the disk device.
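The degraded-mode handling above breaks into two steps: immediately flushing to disk the secondary copies whose primaries were held by the failed module, then rebuilding the configuration information over the m surviving modules. The data structures and index arithmetic below are assumptions for illustration, not from the patent.

```python
# Hedged sketch of degraded operation when one cache memory module fails.
# caches is a list of per-module dicts (region -> data), config maps each
# region to its (primary, secondary) module pair, disk is the backing store.

def flush_orphaned_secondaries(config, caches, disk, failed):
    """Immediately write out the secondary copies whose primary copy
    lived in the failed module, as the claim requires."""
    for region, (p, s) in config.items():
        if p == failed and region in caches[s]:
            disk[region] = caches[s].pop(region)

def degraded_configuration(n, failed):
    """Rebuild region -> (primary, secondary) over the m = n - 1 surviving
    modules: the kth region maps to the kth survivor, with the secondary
    in a different survivor (next-survivor rotation assumed)."""
    survivors = [i for i in range(n) if i != failed]
    m = len(survivors)
    return {k: (survivors[k % m], survivors[(k + 1) % m]) for k in range(n)}
```

After the flush, no dirty data depends on the failed module, and subsequent writes use the rebuilt m-module configuration until the module is repaired.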
21. The method as in
claim 18
, wherein:
after increasing the number of cache memory modules by p, dividing the regions on one or more disk devices among the p normally functioning cache memory modules, and allocating and setting up as configuration information the primary data and the secondary data of the data to be written out to the kth region (k=0˜p) of the disk sequentially to the kth cache memory module and the non-kth cache memory module, respectively, and
when the number of cache memory modules has been increased, using the configuration information set up when the increase was made to shift the data stored in the cache memory modules and then to write out the data to the cache memory modules in response to the write request from the processing device, using the configuration information after the increase.
22. A computer-readable medium storing a program which when executed by a computer, causes the computer to execute the processes comprising:
storing in n (n>2) cache memory modules of an input/output control device data received in a write request from a processing device;
transmitting by the input/output control device to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, storing the data from the cache memory modules to one or more disk devices;
dividing by the input/output control device the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information; and
modifying by the input/output control device the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively.
23. The computer-readable medium as in
claim 22
, further comprising:
when the write request is received from the processing device, allocating to and storing in the cache memory modules by the input/output control device the primary data and the secondary data that are to be written to the regions of the one or more disk devices before returning a write request complete response to the processing device; and
deleting the secondary data from the cache memory modules after the primary data stored in the cache memory modules has been written out to the one or more disk devices.
24. The computer-readable medium as in
claim 22
, further comprising:
setting up the configuration information by splitting up the regions of the one or more disk devices among the m (m<n) normally functioning cache memory modules and sequentially allocating the primary data and secondary data of the write data to be written out to the kth region (k=1˜m) of the disk device to the kth cache memory module and to the non-kth cache memory module as the configuration information when there is a problem with one of the cache memory modules,
immediately writing out to the disk device, when there is a problem with one of the cache memory modules, the secondary data corresponding to the primary data stored in the cache memory module affected by the problem, and
using the problem operation configuration information, writing the data out to the cache memory modules in response to the write request from the processing device and writing the data from the cache memory modules to the disk device.
25. The computer-readable medium as in
claim 22
, wherein:
after increasing the number of cache memory modules by p, dividing the regions on one or more disk devices among the p normally functioning cache memory modules, and allocating and setting up as configuration information the primary data and the secondary data of the data to be written out to the kth region (k=0˜p) of the disk sequentially to the kth cache memory module and the non-kth cache memory module, respectively, and
when the number of cache memory modules has been increased, using the configuration information set up when the increase was made to shift the data stored in the cache memory modules and then to write out the data to the cache memory modules in response to the write request from the processing device, using the configuration information after the increase.
US09/779,845 2000-06-05 2001-02-09 Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof Expired - Lifetime US6615313B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2000-167483 2000-06-05
JP2000167483A JP3705731B2 (en) 2000-06-05 2000-06-05 I / O controller

Publications (2)

Publication Number Publication Date
US20010049768A1 true US20010049768A1 (en) 2001-12-06
US6615313B2 US6615313B2 (en) 2003-09-02

Family

ID=18670633

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/779,845 Expired - Lifetime US6615313B2 (en) 2000-06-05 2001-02-09 Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof

Country Status (2)

Country Link
US (1) US6615313B2 (en)
JP (1) JP3705731B2 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004114116A1 (en) * 2003-06-19 2004-12-29 Fujitsu Limited Method for write back from mirror cache in cache duplicating method
EP1507204A2 (en) * 2003-07-22 2005-02-16 Hitachi, Ltd. Storage system with cache memory
US20050102582A1 (en) * 2003-11-11 2005-05-12 International Business Machines Corporation Method and apparatus for controlling data storage within a data storage system
US20070118712A1 (en) * 2005-11-21 2007-05-24 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US20080046538A1 (en) * 2006-08-21 2008-02-21 Network Appliance, Inc. Automatic load spreading in a clustered network storage system
US20080276032A1 (en) * 2004-08-27 2008-11-06 Junichi Iida Arrangements which write same data as data stored in a first cache memory module, to a second cache memory module
US10061667B2 (en) * 2014-06-30 2018-08-28 Hitachi, Ltd. Storage system for a memory control method
US10234929B2 (en) 2015-04-30 2019-03-19 Fujitsu Limited Storage system and control apparatus

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7103653B2 (en) * 2000-06-05 2006-09-05 Fujitsu Limited Storage area network management system, method, and computer-readable medium
JP4209108B2 (en) * 2001-12-20 2009-01-14 株式会社日立製作所 Storage device control method, storage device used in this method, disk array device, and disk controller
US7136966B2 (en) 2002-03-18 2006-11-14 Lsi Logic Corporation Method and apparatus for using a solid state disk device as a storage controller cache
US7149846B2 (en) * 2002-04-17 2006-12-12 Lsi Logic Corporation RAID protected external secondary memory
JP4412981B2 (en) 2003-11-26 2010-02-10 株式会社日立製作所 Storage system and data caching method in the same system
JP4429763B2 (en) * 2004-02-26 2010-03-10 株式会社日立製作所 Information processing apparatus control method, information processing apparatus, and storage apparatus control method
US7644239B2 (en) 2004-05-03 2010-01-05 Microsoft Corporation Non-volatile memory cache performance improvement
JP4715286B2 (en) * 2004-05-11 2011-07-06 株式会社日立製作所 Computer system and computer system control method
JP4555040B2 (en) * 2004-09-22 2010-09-29 株式会社日立製作所 Storage device and storage device write access processing method
US7490197B2 (en) 2004-10-21 2009-02-10 Microsoft Corporation Using external memory devices to improve system performance
JP4688514B2 (en) * 2005-02-14 2011-05-25 株式会社日立製作所 Storage controller
JP2006259945A (en) * 2005-03-16 2006-09-28 Nec Corp Redundant system, its configuration control method and its program
JP4561462B2 (en) * 2005-05-06 2010-10-13 富士通株式会社 Dirty data processing method, dirty data processing device, and dirty data processing program
US8914557B2 (en) 2005-12-16 2014-12-16 Microsoft Corporation Optimizing write and wear performance for a memory
JP2007265271A (en) * 2006-03-29 2007-10-11 Nec Corp Storage device, data arrangement method and program
JP4836647B2 (en) * 2006-04-21 2011-12-14 株式会社東芝 Storage device using nonvolatile cache memory and control method thereof
JP2008217575A (en) * 2007-03-06 2008-09-18 Nec Corp Storage device and configuration optimization method thereof
US7975109B2 (en) 2007-05-30 2011-07-05 Schooner Information Technology, Inc. System including a fine-grained memory and a less-fine-grained memory
US8631203B2 (en) 2007-12-10 2014-01-14 Microsoft Corporation Management of external memory functioning as virtual cache
JP4985391B2 (en) * 2007-12-28 2012-07-25 日本電気株式会社 Disk array device, physical disk recovery method, and physical disk recovery program
JP4862841B2 (en) * 2008-02-25 2012-01-25 日本電気株式会社 Storage apparatus, system, method, and program
US8229945B2 (en) 2008-03-20 2012-07-24 Schooner Information Technology, Inc. Scalable database management software on a cluster of nodes using a shared-distributed flash memory
US8732386B2 (en) * 2008-03-20 2014-05-20 Sandisk Enterprise IP LLC. Sharing data fabric for coherent-distributed caching of multi-node shared-distributed flash memory
US9032151B2 (en) 2008-09-15 2015-05-12 Microsoft Technology Licensing, Llc Method and system for ensuring reliability of cache data and metadata subsequent to a reboot
US8032707B2 (en) 2008-09-15 2011-10-04 Microsoft Corporation Managing cache data and metadata
US7953774B2 (en) 2008-09-19 2011-05-31 Microsoft Corporation Aggregation of write traffic to a data store
US8868487B2 (en) 2010-04-12 2014-10-21 Sandisk Enterprise Ip Llc Event processing in a flash memory-based object store
US8725951B2 (en) 2010-04-12 2014-05-13 Sandisk Enterprise Ip Llc Efficient flash memory-based object store
US9047351B2 (en) 2010-04-12 2015-06-02 Sandisk Enterprise Ip Llc Cluster of processing nodes with distributed global flash memory using commodity server technology
US8856593B2 (en) 2010-04-12 2014-10-07 Sandisk Enterprise Ip Llc Failure recovery using consensus replication in a distributed flash memory system
US9164554B2 (en) 2010-04-12 2015-10-20 Sandisk Enterprise Ip Llc Non-volatile solid-state storage system supporting high bandwidth and random access
US8954385B2 (en) 2010-06-28 2015-02-10 Sandisk Enterprise Ip Llc Efficient recovery of transactional data stores
US8694733B2 (en) 2011-01-03 2014-04-08 Sandisk Enterprise Ip Llc Slave consistency in a synchronous replication environment
US8874515B2 (en) 2011-04-11 2014-10-28 Sandisk Enterprise Ip Llc Low level object version tracking using non-volatile memory write generations
US9135064B2 (en) 2012-03-07 2015-09-15 Sandisk Enterprise Ip Llc Fine grained adaptive throttling of background processes
WO2014009994A1 (en) * 2012-07-10 2014-01-16 Hitachi, Ltd. Disk subsystem and method for controlling memory access
US9535612B2 (en) 2013-10-23 2017-01-03 International Business Machines Corporation Selecting a primary storage device
CN109154906B (en) * 2016-07-11 2021-09-21 株式会社日立制作所 Storage device, control method of storage device, and controller for storage device
JP7056874B2 (en) * 2019-03-13 2022-04-19 Necプラットフォームズ株式会社 Controls, disk array devices, control methods, and programs
JP7318367B2 (en) * 2019-06-28 2023-08-01 富士通株式会社 Storage control device and storage control program

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3409859B2 (en) * 1991-01-31 2003-05-26 株式会社日立製作所 Control method of control device
US5615329A (en) * 1994-02-22 1997-03-25 International Business Machines Corporation Remote data duplexing
JPH07281959A (en) 1994-04-12 1995-10-27 Fuji Electric Co Ltd Redundancy system for disk storage
JP3457394B2 (en) 1994-09-16 2003-10-14 株式会社東芝 Information storage device
US5412668A (en) * 1994-09-22 1995-05-02 International Business Machines Corporation Parity striping feature for optical disks
US6041396A (en) * 1996-03-14 2000-03-21 Advanced Micro Devices, Inc. Segment descriptor cache addressed by part of the physical address of the desired descriptor
JP3411451B2 (en) 1996-08-30 2003-06-03 株式会社日立製作所 Disk array device
US6457098B1 (en) * 1998-12-23 2002-09-24 Lsi Logic Corporation Methods and apparatus for coordinating shared multiple raid controller access to common storage devices
US6460122B1 (en) * 1999-03-31 2002-10-01 International Business Machine Corporation System, apparatus and method for multi-level cache in a multi-processor/multi-controller environment
US6341331B1 (en) * 1999-10-01 2002-01-22 International Business Machines Corporation Method and system for managing a raid storage system with cache

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7610446B2 (en) * 2003-06-19 2009-10-27 Fujitsu Limited RAID apparatus, RAID control method, and RAID control program
WO2004114115A1 (en) * 2003-06-19 2004-12-29 Fujitsu Limited Raid device, raid control method, and raid control program
WO2004114116A1 (en) * 2003-06-19 2004-12-29 Fujitsu Limited Method for write back from mirror cache in cache duplicating method
US20050216660A1 (en) * 2003-06-19 2005-09-29 Fujitsu Limited RAID apparatus, RAID control method, and RAID control program
EP1507204A3 (en) * 2003-07-22 2006-12-27 Hitachi, Ltd. Storage system with cache memory
EP1507204A2 (en) * 2003-07-22 2005-02-16 Hitachi, Ltd. Storage system with cache memory
US20050102582A1 (en) * 2003-11-11 2005-05-12 International Business Machines Corporation Method and apparatus for controlling data storage within a data storage system
US7219256B2 (en) * 2003-11-12 2007-05-15 International Business Machines Corporation Method and apparatus for controlling data storage within a data storage system
US20080276032A1 (en) * 2004-08-27 2008-11-06 Junichi Iida Arrangements which write same data as data stored in a first cache memory module, to a second cache memory module
US7516291B2 (en) 2005-11-21 2009-04-07 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US20070118712A1 (en) * 2005-11-21 2007-05-24 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US20090172337A1 (en) * 2005-11-21 2009-07-02 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US8321638B2 (en) 2005-11-21 2012-11-27 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US20080046538A1 (en) * 2006-08-21 2008-02-21 Network Appliance, Inc. Automatic load spreading in a clustered network storage system
US8046422B2 (en) * 2006-08-21 2011-10-25 Netapp, Inc. Automatic load spreading in a clustered network storage system
US10061667B2 (en) * 2014-06-30 2018-08-28 Hitachi, Ltd. Storage system for a memory control method
US10234929B2 (en) 2015-04-30 2019-03-19 Fujitsu Limited Storage system and control apparatus

Also Published As

Publication number Publication date
JP2001344154A (en) 2001-12-14
JP3705731B2 (en) 2005-10-12
US6615313B2 (en) 2003-09-02

Similar Documents

Publication Publication Date Title
US6615313B2 (en) Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof
US4603380A (en) DASD cache block staging
US7725445B2 (en) Data replication among storage systems
US10664177B2 (en) Replicating tracks from a first storage site to a second and third storage sites
US7634617B2 (en) Methods, systems, and computer program products for optimized copying of logical units (LUNs) in a redundant array of inexpensive disks (RAID) environment using buffers that are larger than LUN delta map chunks
US7634618B2 (en) Methods, systems, and computer program products for optimized copying of logical units (LUNs) in a redundant array of inexpensive disks (RAID) environment using buffers that are smaller than LUN delta map chunks
CN101571822B (en) Storage controller and data management method
US7496718B2 (en) Data transfer and access control between disk array systems
US7127557B2 (en) RAID apparatus and logical device expansion method thereof
US5845295A (en) System for providing instantaneous access to a snapshot of data stored on a storage medium for offline analysis
EP0727745A1 (en) Memory control apparatus and its control method
US7373470B2 (en) Remote copy control in a storage system
JPH01251258A (en) Shared area managing system in network system
US6510491B1 (en) System and method for accomplishing data storage migration between raid levels
JPH10198607A (en) Data multiplexing system
US7451285B2 (en) Computer systems, management computers and storage system management method
KR0175983B1 (en) Data processing system having demand based write through cache with enforced ordering
US20060143313A1 (en) Method for accessing a storage device
JPH0452743A (en) Control system for duplex external storage
US20060265559A1 (en) Data processing system
CN111208942B (en) Distributed storage system and storage method thereof
US20240111456A1 (en) Storage device controller and method capable of allowing incoming out-of-sequence write command signals
JPH0659819A (en) Uninterruptible expanding method for storage device capacity
JPH09326832A (en) Common use buffer device and its control method
JPH064494A (en) Plural file merging system

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJITSU LIMITED, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KATO, TADAOMI;OMURA, HIDEAKI;KUBOTA, HIROMI;REEL/FRAME:011544/0469

Effective date: 20010113

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12