US20010049768A1 - Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof - Google Patents
- Publication number
- US20010049768A1 (application US09/779,845)
- Authority
- US
- United States
- Prior art keywords
- cache memory
- data
- memory modules
- configuration information
- control device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1666—Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/28—Using a specific disk cache architecture
- G06F2212/283—Plural cache memories
Definitions
- FIG. 3 shows a typical hardware configuration for a computer system 400 corresponding to the computer system 300 shown in FIG. 2.
- The subsystem control module 101 is coupled to an upper device 116 by an I/F (interface) module 118, and comprises memory 101 a, MPU 101 b, and bus interface module 101 c.
- The above MPU 101 b operates according to a program stored in the memory 101 a.
- Transfer data and control data are also stored in the memory 101 a.
- The subsystem control module 101 shown in FIG. 3 comprises the cache memory 12 d and the cache control module 12 c shown in FIG. 2. That is, the subsystem control module 101 in FIG. 3 corresponds to the section of the computer system 300 of FIG. 2 comprising the interface control module 12 a, the configuration information management module 12 b, the cache control module 12 c, and the cache memory 12 d.
- Device control module 103 comprises buffer 103 a, MPU 103 b, memory 103 c (which stores, among other things, the program for running the aforementioned MPU 103 b), and bus interface module 103 d.
- The above subsystem control module 101 and device control module 103 are connected by bus 120.
- The device control module 103 is connected to the disk drive group 105 by device I/F (interface) module 104.
- The device control module 103 shown in FIG. 3 corresponds to the disk control module 12 e shown in FIG. 2.
- The cache memory comprises the three cache memory modules 12 d-1˜12 d-3 shown in FIG. 2.
- Each cache memory module 12 d-1˜12 d-3 duplicates and stores the write data (received from upper device 116, corresponding to server 11) as primary data and secondary data.
- The write data is to be written to disks 105-1, 105-2, 105-3, 105-4, . . . , 105-x of the disk drive group 105 shown in FIG. 3.
- FIG. 4 shows an example of the configuration information 202 when there are three cache memory modules 12 d-1, 12 d-2, and 12 d-3.
- Moreover, FIG. 4 indicates the supervisory logical volumes 12 g corresponding to the primary data and the secondary data for each of the cache memory modules 12 d-1, 12 d-2, and 12 d-3.
- FIG. 5( a ) shows the supervisory logical volume of each of the cache memory modules 12 d - 1 , 12 d - 2 , and 12 d - 3 during normal operation.
- FIG. 5( b ) shows the supervisory logical volume 12 g of each of the cache memory modules 12 d - 1 , 12 d - 2 , and 12 d - 3 when there is a problem with cache memory 12 d.
- The supervisory logical volumes 12 g of the cache memory module 12 d-1 are 1˜10 for the primary data and 21˜30 for the secondary data.
- The supervisory logical volumes 12 g of the cache memory module 12 d-2 are 11˜20 for the primary data and 1˜10 for the secondary data.
- The supervisory logical volumes 12 g of the cache memory module 12 d-3 are 21˜30 for the primary data and 11˜20 for the secondary data.
- The configuration information 202 defines the logical volume names of the primary data and secondary data that each cache memory module supervises. Whenever there is a problem with one of the cache memory modules and the number of cache memory modules is reduced, each of the remaining cache memory modules re-defines the logical volume names of the primary data and secondary data that it supervises.
- The region of the one or more disks 105 is divided into the number n of cache memory modules 12 d.
- The configuration information 202 is set up as follows.
- The region of the one or more disks 105 is divided up into the number m (m≦n) of normally functioning cache memory modules 12 d.
- The primary data and the secondary data to be written to the kth (0≦k≦m-1) region of the disk 105 are held sequentially in the kth cache memory 12 d and in a cache memory 12 d other than the kth cache memory 12 d, respectively.
- This allows configurations of the computer system of the present invention with any desired number n (n≧3) of cache memory modules 12 d. It is also possible to increase the number of cache memory modules 12 d during operation of the computer system if this configuration information 202 is defined.
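The supervisory-volume assignment described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: `build_config` is a hypothetical helper that splits a list of logical volumes into m contiguous regions and assigns region k's primary data to module k and its secondary data to module (k+1) mod m, reproducing the tables of FIG. 5(a) and FIG. 5(b).

```python
# Hypothetical sketch of the configuration information of FIG. 4 / FIG. 5
# for m normally functioning cache memory modules. Volumes are split into
# m contiguous regions; region k's primary data goes to module k and its
# secondary (duplicate) data to module (k + 1) mod m.

def build_config(num_modules, volumes):
    """Return {module_index: {"primary": [...], "secondary": [...]}}."""
    m = num_modules
    size = len(volumes) // m
    regions = [volumes[i * size:(i + 1) * size] for i in range(m)]
    regions[-1].extend(volumes[m * size:])         # any remainder to last region
    config = {k: {"primary": [], "secondary": []} for k in range(m)}
    for k, region in enumerate(regions):
        config[k]["primary"] = region              # module k supervises region k
        config[(k + 1) % m]["secondary"] = region  # duplicate held on next module
    return config

vols = list(range(1, 31))
normal = build_config(3, vols)    # three modules, as in FIG. 5(a)
degraded = build_config(2, vols)  # one module failed, as in FIG. 5(b)
print(normal[0])                  # primary 1..10, secondary 21..30
print(degraded[0])                # primary 1..15, secondary 16..30
```

With three modules this yields exactly the supervisory volumes listed above (module 12 d-1: primary 1˜10, secondary 21˜30, and so on); dropping to two modules re-defines the split automatically.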
- FIG. 6, FIG. 7, and FIG. 8 are flowcharts describing how each of the cache memory modules 12 d functions during normal operation ( 600 ), when there is a problem with one of the cache memory modules 12 d ( 700 ), and when the number of cache memory modules 12 d is increased ( 800 ).
- The interface control module 12 a refers ( 602 ) to the configuration information 202 for normal operation ( 600 ).
- The interface control module 12 a writes the data (both the primary data and the secondary data) to the cache memory 12 d corresponding to the supervisory logical volume 12 g determined by the configuration information 202 and returns a completed response to the server 11 ( 602 ).
- Each of the cache control modules 12 c refers to the configuration information and, as shown in FIG. 6, the cache control module 12 c that manages the primary data determined in the configuration information writes out ( 604 ) to the disk device 12 f the data that was written to the cache memory modules 12 d-1˜12 d-3.
- The primary data will be written to the cache memory module 12 d-1 and the secondary data will be written to the cache memory module 12 d-2.
- The primary data written to the cache memory module 12 d-1 will be written out to the disk device 12 f through the disk control module 12 e by the cache control module 12 c that manages the primary data.
- The above secondary data that was written to the cache memory module 12 d-2 will then be deleted from the cache memory module 12 d-2.
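A minimal sketch of this normal-operation write-back flow ( 600 ) is given below. The names `handle_write` and `destage` are illustrative assumptions, not from the patent: the write is stored twice (primary and secondary modules taken from the configuration information) before the request completes, and destaging to disk plus deletion of the secondary copy happen asynchronously.

```python
# Illustrative write-back flow: acknowledge after caching both copies,
# destage the primary copy to disk later and drop the secondary copy.

class CacheModule:
    def __init__(self, name):
        self.name = name
        self.entries = {}  # logical volume -> cached write data

def handle_write(volume, data, config, modules):
    """Store primary + secondary copies, then acknowledge the server."""
    p, s = config[volume]               # (primary idx, secondary idx)
    modules[p].entries[volume] = data
    modules[s].entries[volume] = data
    return "complete"                   # returned before any disk I/O

def destage(volume, config, modules, disk):
    """Asynchronous part: primary copy goes to disk, secondary is deleted."""
    p, s = config[volume]
    disk[volume] = modules[p].entries.pop(volume)
    modules[s].entries.pop(volume, None)

modules = [CacheModule(f"12d-{i + 1}") for i in range(3)]
config = {7: (0, 1)}   # volume 7: primary on 12d-1, secondary on 12d-2
disk = {}
handle_write(7, b"blk", config, modules)
destage(7, config, modules, disk)
```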
- When a problem occurs in one of the cache memory modules, the configuration information management module 12 b notifies ( 702 ) all of the interface control modules 12 a and all of the cache control modules 12 c, as shown in FIG. 7.
- The cache control module 12 c immediately writes ( 704 ) data out to the disk device 12 f through the disk control module 12 e: namely, the secondary data held in the (i+1)th cache memory 12 d-(i+1), which contains the same data as the primary data that was held in the cache memory 12 d-i where the problem occurred. This is as shown in FIG. 7 and as set forth in the configuration information 202 for normal operation ( 600 ).
- The interface control modules 12 a refer to the configuration information for problem operation ( 700 ) of cache memory 12 d-i.
- The interface control modules 12 a write ( 706 ) the primary data and secondary data to the cache memory 12 d determined by the configuration information for problem operation ( 700 ) and return a completed response to the server.
- The cache control modules 12 c (including the (i+1)th cache control module described above) use the configuration information for problem operation ( 700 ) of cache memory 12 d-i and write out ( 708 ) to the disk device 12 f the primary data that each manages and that was written to the cache memory.
- For example, the interface control module 12 a writes the primary data to the cache memory 12 d-2 and the secondary data to the cache memory 12 d-3 and returns a completed response to the server 11. Then, based on the configuration information for problem operation, each of the cache control modules 12 c writes out to the disk device 12 f the primary data that it manages and that was written to the cache memory.
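The problem-operation flow ( 700 ) can be sketched as follows, assuming the simplification that the secondary copy of module i's primary data resides on module (i+1) mod n; `Module` and `handle_failure` are hypothetical names, not from the patent.

```python
# Illustrative failure handling: when module i fails, its dirty write data
# still survives as secondary data on the next module, so that copy is
# flushed to disk immediately (step 704) before writes continue with the
# reduced configuration.

class Module:
    def __init__(self):
        self.secondary = {}  # volume -> duplicate of the previous module's primary
        self.failed = False

def handle_failure(i, modules, disk):
    """Flush the surviving secondary copies of failed module i to disk."""
    survivor = modules[(i + 1) % len(modules)]   # holds module i's secondaries
    for volume, data in list(survivor.secondary.items()):
        disk[volume] = data                      # immediate write-out (704)
        del survivor.secondary[volume]
    modules[i].failed = True                     # region taken over by the rest

modules = [Module() for _ in range(3)]
modules[1].secondary[5] = b"dirty"   # secondary copy of module 0's primary data
disk = {}
handle_failure(0, modules, disk)
```

Because the duplicate always exists on a surviving module, no write data is lost and the write back operation can continue with m = n-1 modules.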
- The configuration information management module 12 b holds the configuration information settings both from when the cache memory 12 d was originally set up ( 802 ) and from when the amount of cache memory 12 d is increased ( 804 ). As shown in FIG. 8, the configuration information management module 12 b notifies ( 806 ) all of the interface control modules 12 a and the cache control modules 12 c of the increase in the number of cache memory modules 12 d.
- Each of the cache control modules 12 c arranges the data according to the configuration information after the increase ( 804 ) and then moves ( 808 ) the data among the cache memory modules.
- The cache control module 12 c that manages the primary data, as set forth in the configuration information 202 after the increase, writes ( 810 ) the data out to the disk device 12 f through the disk control module 12 e.
- Each of the interface control modules 12 a writes ( 812 ) the primary data and secondary data to the cache memory as set forth in the configuration information after the increase ( 804 ) and returns a completed response to the server 11.
- FIG. 9 shows an operation 900 of the present invention of increasing the number of cache memory modules 12 d from two to three.
- The corresponding secondary data is then deleted from the cache memory 12 d.
- The secondary data of the cache memory 12 d-2 corresponding to the logical volumes 12 g 11˜15 is in either the cache memory 12 d-1 or the cache memory 12 d-3, so the cache control module 12 c-2 asks the cache control modules 12 c-1 and 12 c-3 to delete that secondary data from both the cache memory 12 d-1 and the cache memory 12 d-3. If there is any such secondary data, it is deleted from the cache memory 12 d.
- When the interface control module 12 a receives a write data request from the server 11, the interface control module 12 a writes the write data to the cache memory 12 d (for both the primary data and the secondary data) of the supervisory logical volumes 12 g determined by the new configuration information and returns a completed response to the server 11.
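The data movement of step ( 808 ) can be illustrated with a hedged sketch. `migrate` is a hypothetical helper, not from the patent: it moves cached entries whose supervising module changed between the old (two-module) and new (three-module) configurations, matching the logical volumes 11˜15 example above.

```python
# Illustrative migration when growing from 2 to 3 cache memory modules:
# only entries whose supervising module changed under the new
# configuration are moved; the rest stay in place.

def migrate(entries, old_owner, new_owner):
    """Yield (volume, data) pairs that must move to a different module."""
    for volume in list(entries.keys()):
        if old_owner(volume) != new_owner(volume):
            yield volume, entries.pop(volume)

# Volumes 1..30: two modules split them 15/15; three modules split 10/10/10.
old_owner = lambda v: (v - 1) // 15   # m = 2 regions of 15 volumes
new_owner = lambda v: (v - 1) // 10   # m = 3 regions of 10 volumes

cache0 = {v: b"d" for v in range(1, 16)}   # module 0's primary entries
moved = dict(migrate(cache0, old_owner, new_owner))
print(sorted(moved))   # volumes 11..15 move to the new supervising module
```

Under this split, exactly the volumes 11˜15 leave module 12 d-1, which mirrors the deletion of their now-stale secondary copies described above.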
- The present invention includes at least three cache memory modules and duplicates and saves write data while circulating the write data among the cache memory modules.
- When a problem occurs, the write data that was to be written to a region controlled by the cache memory module in which the problem occurred is split up among the remaining cache memory modules. This makes it possible to increase cache memory modules in units of one and allows more effective use of cache memory than the conventional hot spare system.
Description
- 1. Field of the Invention
- The present invention relates to an input/output control device for disks, and, more particularly, to an input/output control device that maintains write data in multiple cache memory modules.
- 2. Description of the Related Art
- RAID control devices typically include cache memory. In response to a write request received from a server, the RAID control devices return a write request complete response to the server just by writing the data to the cache memory. The write data stored in the cache memory is written to disk devices asynchronously to the write complete response sent to the server. This type of operation is called a write back operation. The response time of a disk device that uses a cache memory to perform a write back operation in response to a write request is many times (approximately 10 times) shorter than that of a disk device that has no cache memory.
- In order to guarantee permanence of the write data, RAID control devices normally have two cache memory modules and each cache memory module holds its own write data. The write data that is held in the two cache memory modules is referred to as primary data and secondary data below. By using this sort of configuration, even if there is a problem with one of the cache memory modules, the other cache memory module contains write data, so the write data will not be lost.
- Normally, in order to guarantee the permanence of the write data, when there is a problem with the cache memory and only one cache memory remains, the write data stored in the cache memory is written back immediately to the disk. In this situation, the writing of the write data to the disk is synchronized with the response, and a write data complete response is returned to the server only after the disk write. This sort of operation is referred to as a write through operation. Switching from a write back operation to a write through operation when there is a problem with the cache memory requires a much longer period of time (approximately 10 times longer) to respond to a write request.
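The latency difference between the two operations can be modeled with a toy sketch; the timing constants below are assumptions for illustration, not measurements from the patent.

```python
# Toy model of the two response paths: a write back operation acknowledges
# after the fast cache store, while a write through operation must also
# wait for the much slower disk write (assumed ~10x slower here).

CACHE_WRITE_MS = 0.1   # assumed time to store the data in cache memory
DISK_WRITE_MS = 1.0    # assumed time to write the data to the disk device

def write_back_response_time():
    # Respond as soon as the data is in cache; disk I/O is deferred.
    return CACHE_WRITE_MS

def write_through_response_time():
    # Respond only after the data has also reached the disk.
    return CACHE_WRITE_MS + DISK_WRITE_MS

ratio = write_through_response_time() / write_back_response_time()
print(f"write through responds ~{ratio:.0f}x slower")
```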
- One countermeasure considered for the above loss of performance, which is caused by shifting to a write through operation when there is trouble with the cache memory, is to prepare a spare cache memory for use when cache memory problems occur.
- That is, in addition to two cache memories, Cache 1 102 and Cache 2 107, shown in FIG. 10(a), a spare cache memory, Cache X 106, which is not normally used, is prepared.
- Then if, for instance, there is a problem with the cache memory Cache 1 102, the spare cache memory Cache X 106 can be used to hold the primary data and secondary data of the write data, as shown in FIG. 10(b). By using this sort of configuration, even if there are problems with the cache memory Cache 1 102 or Cache 2 107, the write back operation can be performed. The above system is referred to as a hot spare system 100.
- However, the aforementioned conventional methods have the following disadvantages:
- (1) The hot spare system 100 requires that a cache memory module 106 be prepared that is normally not used, as shown in FIG. 11(a), so the cache memory cannot all be used effectively.
- (2) To ensure the permanence of the write data, when using two cache memory modules 102, 107 as described above, the cache memory modules 110, 112 must always be increased in pairs 108 when expanding the cache memory as shown in FIG. 11(b). In addition, when using the hot spare system 100, a cache memory module 106 must be prepared that is normally not used.
- An object of the present invention is to solve the above-mentioned problems.
- Another object of the present invention is to provide an input/output control device that allows the effective use of all of the cache memory.
- A further object of the present invention is to allow the amount of cache memory to be increased one cache memory module at a time.
- The present invention comprises a computer system comprising an input/output control device coupled to one or more disk devices and coupled to and receiving a write request including data from a processing device. The input/output control device of the present invention comprises n (n>2) cache memory modules storing the data upon receiving the write request. The input/output control device transmits to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, stores the data from the cache memory modules to the one or more disk devices. The input/output control device divides the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information and sets up the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively.
- Moreover, the present invention comprises a method and a computer readable medium which, when executed by a computer, causes the computer to execute the processes comprising storing in n (n>2) cache memory modules of an input/output control device data received in a write request from a processing device, transmitting by the input/output control device to the processing device a write request complete response, and, asynchronously with transmitting the write request complete response, storing the data from the cache memory modules to one or more disk devices, dividing by the input/output control device the regions of the one or more disk devices into a number of n of the cache memory modules in accordance with configuration information, and modifying by the input/output control device the configuration information to allocate sequentially primary data and secondary data of the write data, which are written out to a kth region (k=1˜n) of a disk device, to the kth cache memory module, and a non-kth cache memory module, respectively.
- These together with other objects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.
- FIG. 1(a) is a diagram showing an input/output control device of the present invention.
- FIG. 1(b) shows an example of configuration information when the cache memory of the input/output control device of the present invention is functioning normally, is having problems, and when the cache memory is increased.
- FIG. 2 is a diagram showing the overall configuration of the system of an embodiment of the present invention.
- FIG. 3 is a diagram showing an example of the hardware configuration of the system of an embodiment of the present invention.
- FIG. 4 is a diagram showing an example of the configuration information when there are three cache memory modules.
- FIG. 5 is a diagram showing an example of the logical volumes supervised by the cache memory modules during normal operation and problem operation.
- FIG. 6 is a diagram showing the process flow during normal operation of the cache memory.
- FIG. 7 is a diagram showing the process flow during problem operation of the cache memory.
- FIG. 8 is a diagram showing the process flow when the number of cache memory modules is increased.
- FIG. 9 is a diagram showing the operation when increasing the cache memory.
- FIGS. 10(a) and 10(b) are diagrams showing the conventional hot spare system.
- FIGS. 11(a) and 11(b) are diagrams showing the problems with the conventional hot spare system.
- FIG. 1(a) is a diagram showing an input/output control device 200 of the present invention. Input/output device 200 is also referred to as RAID (Redundant Array of Inexpensive Disks) control device 200.
- FIG. 1(b) shows an example of configuration information 202 when cache memory of the input/output control device 200 of the present invention is functioning normally, is having problems, and when the amount of cache memory is increased.
- In FIG. 1(a), 1-1˜1-n are the cache memory modules, referred to as cache memory 1. Each of the cache memory modules 1-1˜1-n duplicates and stores write data as primary data and secondary data. Moreover, the input/output device 200 of the present invention shown in FIG. 1(a) includes cache control modules 3-1, 3-2, . . . 3-n, each corresponding, respectively, to one of the cache memory modules 1-1, 1-2, . . . , 1-n.
- When the cache memory 1 is functioning normally, as shown in the configuration information 202 corresponding to <NORMAL OPERATION> shown in FIG. 1(b), a region on one or more disks 2-1, 2-2, . . . , 2-q of disk device 2 is divided into the number of cache memory modules n. For instance, the write data (the primary data and the secondary data) that is written out to the kth (k=1˜n-1) region Rk of the disk device 2 is held in the kth cache memory and the (k+1)th cache memory respectively, while the write data (the primary data and the secondary data) that is written out to the nth region Rn of the disk is held in the nth cache memory and the 1st cache memory respectively.
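The circular placement rule of <NORMAL OPERATION> can be sketched as a small function; `placement` is an illustrative name with 1-based indexing as in the text.

```python
# Illustrative circular placement: the write data for disk region R_k is
# held as primary data on the kth cache memory module and as secondary
# data on the (k+1)th, wrapping from module n back to module 1.

def placement(k, n):
    """Primary and secondary cache module (1-based) for region k of n."""
    primary = k
    secondary = k % n + 1   # k+1 for k < n; wraps to 1 for the nth region
    return primary, secondary

n = 4
for k in range(1, n + 1):
    p, s = placement(k, n)
    print(f"region R{k}: primary module {p}, secondary module {s}")
```

The wrap-around is what lets any n > 2 modules hold every region's data twice without a dedicated spare.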
- Then, after the data has been written out to cache memory 1 as set forth in the above configuration information 202, a write request complete response is returned in response to the write request from the processing device (not shown in FIGS. 1(a) or 1(b)). The write data stored in the cache memory 1 is written out to one or more of the disk devices 2 asynchronously with the write request complete response.
- When there is a problem in the cache memory 1, the write data written out to the region supervised by the cache memory that had the problem is taken over by the remaining cache memory.
- As shown in the <PROBLEM OPERATION> section of the configuration information 202 shown in FIG. 1(b), the region of the one or more disk devices 2 is divided by the number of normal cache memory modules m (m<n). For example, the write data (the primary data and the secondary data) that is written out to the kth (k=1˜m-1) region R′k of the disk device 2 is held in the kth cache memory and the (k+1)th cache memory respectively, while the write data (the primary data and the secondary data) that is written out to the mth region R′m of the disk is held in the mth cache memory and the 1st cache memory respectively.
- When a problem occurs in the cache memory 1, the secondary data corresponding to the primary data that was held in the cache memory 1 at the time of the problem is written immediately out to the disk device 2. Then, using the configuration information 202 from the time the problem occurred, data is written into the cache memory 1 in response to a write request from the processing device and data is written out to the disk device 2 from the cache memory 1.
- When increasing the amount of cache memory 1, the data held in the cache memory 1 is transferred as set forth in the configuration information 202 in effect when the increase was made. Then, as shown in the <CACHE MEMORY INCREASE> section of the configuration information 202 of FIG. 1(b), after the increase, the configuration information 202 is used in response to the write request from the processing device to write the data to the cache memory 1, and the data is written out to the disk device from the cache memory 1.
- As above, the input/output device 200 of the present invention circulates and holds duplicate write data in three or more cache memory modules 1. When there is a problem with one of the cache memory modules 1, the write data to be written to the region that is supervised by the cache memory 1 that had the problem is taken over by the remaining cache memory modules 1, providing for any configuration with n>2 cache memory modules 1. There is also no need to prepare any more cache memory 1 than is normally used, and all of the cache memory 1 is used effectively.
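The circular placement described above can be sketched in a few lines of Python. This is an illustrative model, not code from the patent: modules and regions are numbered 1˜n, the primary copy of region k is held in the kth cache memory module, and the secondary copy in the next module in circular order.

```python
def placement(k: int, n: int) -> tuple[int, int]:
    """Return (primary_module, secondary_module) for region k of n.

    Modules and regions are numbered 1..n; the secondary copy is held
    in the next module in circular order, so region n wraps to module 1.
    """
    if not 1 <= k <= n:
        raise ValueError("region index out of range")
    return k, k % n + 1  # secondary is k+1, wrapping n -> 1
```

For example, with n=3 modules, region 3's primary copy sits in module 3 and its secondary copy wraps around to module 1.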
- Embodiments of the Present Invention
- FIG. 2 is a diagram of a computer system 300 of an embodiment of the present invention. As shown in FIG. 2, computer system 300 comprises server 11 and RAID control device 12. RAID control device 12 corresponds to RAID control device 200 shown in FIG. 1(a).
- Server 11 transmits a write request to RAID control device 12 to write data from the server 11 to logical volumes 12 g allocated among disk devices 12 f by disk control module 12 e.
- As described above with respect to the RAID control device 200 shown in FIG. 1(a), when the write request is received from the server 11 to write data from the server 11 to the disk devices 12 f, a write request complete response is returned to the server 11 upon writing the data to the cache memory 12 d. The write data stored in the cache memory 12 d is written out to the disk devices 12 f asynchronously to the write complete response (write back operation).
- The RAID control device 12 also comprises interface control module 12 a, configuration information management module 12 b, cache control module 12 c, the cache memory 12 d, and disk control module 12 e, which controls several disk devices 12 f. The configuration information (corresponding to configuration information 202 shown in FIG. 1(b)), which keeps track of which cache memory 12 d the write data from the server 11 is held in, is stored in the configuration information management module 12 b.
- FIG. 3 shows a typical hardware configuration for a computer system400 corresponding to the computer system 300 shown in FIG. 2.
- In the computer system 400 shown in FIG. 3, the subsystem control module 101 is coupled to an upper device 116 by an I/F (interface) module 118, while the subsystem control module 101 comprises memory 101 a, MPU 101 b, and the bus interface module 101 c. The above MPU 101 b operates according to a program stored in the memory 101 a. In addition to the program, transfer data and control data are also stored in the memory 101 a. The subsystem control module 101 in FIG. 3 corresponds to the section of the computer system 300 of FIG. 2 comprising the interface control module 12 a, the configuration information management module 12 b, the cache control module 12 c, and the cache memory 12 d.
- Referring again to FIG. 3, device control module 103 comprises buffer 103 a, MPU 103 b, memory 103 c (which stores, among other things, the program for running the aforementioned MPU 103 b), and bus interface module 103 d.
- The above subsystem control module 101 and device control module 103 are connected by bus 120. The device control module 103 is connected to the disk drive group 105 by device I/F (interface) module 104. The device control module 103 shown in FIG. 3 corresponds to the disk control module 12 e shown in FIG. 2.
- In the embodiment of the computer system 400 of the present invention shown in FIG. 3, corresponding to the computer system 300 shown in FIG. 2, the cache memory comprises the three cache memory modules 12 d-1˜12 d-3 shown in FIG. 2. Each cache memory module 12 d-1˜12 d-3 duplicates and stores the write data (received from upper device 116 corresponding to server 11) as primary data and secondary data.
- The write data is to be written to disks 105-1, 105-2, 105-3, 105-4, . . . , 105-x of the disk drive group 105 shown in FIG. 3.
- FIG. 4 shows an example of the configuration information 202 when there are three cache memory modules 12 d-1, 12 d-2, and 12 d-3. Moreover, FIG. 4 indicates the supervisory logical volumes 12 g corresponding to the primary data and the secondary data for each of the cache memory modules 12 d-1, 12 d-2, and 12 d-3.
- FIG. 5(a) shows the supervisory logical volumes 12 g of each of the cache memory modules 12 d-1, 12 d-2, and 12 d-3 during normal operation. FIG. 5(b) shows the supervisory logical volumes 12 g of each of the cache memory modules when there is a problem with the cache memory 12 d.
- As shown in FIG. 4 and FIG. 5(a), during normal operation the supervisory logical volumes 12 g of the cache memory module 12 d-1 are 1˜10 for the primary data and 21˜30 for the secondary data, the supervisory logical volumes 12 g of the cache memory 12 d-2 are 11˜20 for the primary data and 1˜10 for the secondary data, and the supervisory logical volumes 12 g of the cache memory 12 d-3 are 21˜30 for the primary data and 11˜20 for the secondary data.
- In this state, if, for example, there were to be a problem in the cache memory 12 d-1, then, based on the configuration information shown in FIG. 4, the supervisory logical volumes 12 g in the cache memory modules 12 d-1˜12 d-3 would change as shown in FIG. 5(b). For the cache memory 12 d-2, the supervisory logical volumes 12 g would be 1˜20 for the primary data and 21˜30 for the secondary data, while the supervisory logical volumes 12 g for the cache memory 12 d-3 would be 21˜30 for the primary data and 1˜20 for the secondary data.
- That is, with three cache memory modules 12 d-1, 12 d-2, and 12 d-3, the write data circulates between the three cache memory modules 12 d-1, 12 d-2, and 12 d-3 while being duplicated. Therefore, when there is a problem with one of the cache memory modules (cache memory module 12 d-1 in FIG. 5(b)), the logical volumes that were supervised by the cache memory module where the problem occurred (cache memory module 12 d-1 in FIG. 5(b)) are shared among the remaining cache memory modules (cache memory modules 12 d-2 and 12 d-3 in FIG. 5(b)).
- For this reason, the configuration information 202 defines the logical volume names of the primary data and secondary data that each cache memory module supervises. Then, whenever there is a problem with one of the cache memory modules and the number of cache memory modules is reduced, each of the remaining cache memory modules re-defines the logical volume names of the primary data and secondary data that it supervises.
- In the above example, there were three cache memory modules 12 d-1, 12 d-2, and 12 d-3. Generally, however, the region of the one or more disks 105 would be divided into the number of cache memory modules (12 d) n. The cache memory configuration information would be set up so that the write data (the primary data and the secondary data) to be written to the kth (k=1˜n-1) region of the disk 105 would be held sequentially in the kth cache memory 12 d and in a cache memory other than the kth, respectively.
- When a problem occurs, the configuration information 202 is set up as follows. The region of the one or more disks 105 is divided up into the number of normally functioning cache memory modules (12 d) m (m<n). The primary data and the secondary data to be written to the kth (0<k≦m-1) region of the disk 105 are held sequentially in the kth cache memory 12 d and in a cache memory 12 d that is not the kth cache memory 12 d, respectively. This allows configurations of the computer system of the present invention with the desired number n (n≧3) of cache memory modules 12 d. It is possible to increase the number of cache memory modules 12 d during operation of the computer system of the present invention if this configuration information 202 is defined.
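The takeover of a failed module's supervisory volumes can be sketched as follows. This is a hypothetical model (the name `redistribute` and the dictionary layout are assumptions, not from the patent): the module that already holds the secondary copy of the failed module's primary volumes absorbs them as primary, and each survivor's secondary range is then re-derived as the primary range of its circular predecessor.

```python
def redistribute(primaries: dict[int, list], failed: int) -> dict[int, dict]:
    """Recompute supervisory volumes after module `failed` drops out.

    `primaries` maps module id -> list of logical volumes it supervises
    as primary. The next module in circular order (which already holds
    the secondary copy of those volumes) absorbs the failed module's
    primaries; secondary copies then re-circulate among the survivors.
    """
    order = sorted(primaries)
    survivors = [m for m in order if m != failed]
    successor = order[(order.index(failed) + 1) % len(order)]
    new_primary = {m: list(primaries[m]) for m in survivors}
    new_primary[successor] = primaries[failed] + primaries[successor]
    return {
        m: {"primary": new_primary[m],
            "secondary": new_primary[survivors[i - 1]]}  # circular predecessor
        for i, m in enumerate(survivors)
    }
```

With 30 volumes split 1˜10, 11˜20, 21˜30 over modules 1˜3, a failure of module 1 yields module 2 supervising 1˜20 as primary and 21˜30 as secondary, and module 3 supervising 21˜30 as primary and 1˜20 as secondary, matching FIG. 5(b).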
- FIG. 6, FIG. 7, and FIG. 8 are flowcharts describing how each of the cache memory modules 12 d functions during normal operation (600), when there is a problem with one of the cache memory modules 12 d (700), and when the number of cache memory modules 12 d is increased (800).
- (1) Normal Operation
- When all of the cache memory modules 12 d are functioning normally, as shown in FIG. 6, the interface control module 12 a refers to the normal operation (600) configuration information 202. When requested by the server 11 to write data, the interface control module 12 a writes the data (both the primary data and the secondary data) to the cache memory 12 d corresponding to the supervisory logical volume 12 g determined by the configuration information 202 and returns a completed response to the server 11 (602).
- At the same time, during normal operation each of the cache control modules 12 c refers to the configuration information and, as shown in FIG. 6, the cache control module 12 c that manages the primary data determined in the configuration information writes out (604) to the disk device 12 f the data that was written to the cache memory modules 12 d.
- For example, if there is a write request in the range of logical volumes 12 g 1˜10, the primary data will be written to the cache memory module 12 d-1 and the secondary data will be written to the cache memory module 12 d-2. The primary data written to the cache memory module 12 d-1 will be written out to the disk device 12 f through the disk control module 12 e by the cache control module 12 c that manages the primary data. Once the data has been completely written out to the disk device 12 f, the above secondary data that was written to the cache memory module 12 d-2 will be deleted from the cache memory module 12 d-2.
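The normal-operation write path in the example above can be sketched as follows; the class and function names are illustrative, not taken from the patent. The completed response is returned as soon as both cache copies exist, and the later write back deletes the secondary copy once the primary copy reaches disk.

```python
class CacheModule:
    """Toy stand-in for one cache memory module 12 d."""
    def __init__(self):
        self.primary = {}    # volume -> data held as primary copy
        self.secondary = {}  # volume -> data held as secondary copy

def handle_write(volume, data, primary_mod, secondary_mod):
    """Write both copies to cache and acknowledge before any disk I/O."""
    primary_mod.primary[volume] = data
    secondary_mod.secondary[volume] = data
    return "complete"

def write_back(volume, primary_mod, secondary_mod, disk):
    """Asynchronous write out; the secondary copy is then redundant."""
    disk[volume] = primary_mod.primary[volume]
    del secondary_mod.secondary[volume]
```

A write to volume 5 (in the range 1˜10) thus lands in module 12 d-1 as primary and 12 d-2 as secondary, and only the later write back touches the disk and drops the secondary copy.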
- (2) When There is a Problem With the Cache Memory
- When there is a problem with the cache memory 12 d-i (when there are three cache memory modules, i=1˜3), the configuration information management module 12 b notifies (702) all of the interface control modules 12 a and all of the cache control modules 12 c, as shown in FIG. 7.
- In order to guarantee the permanence of the write data, when the (i+1)th cache control module 12 c receives a report about a problem, the cache control module 12 c immediately writes (704) out to the disk device 12 f, through the disk control module 12 e, the secondary data held in the cache memory 12 d-(i+1), which contains the same data as the primary data held in the cache memory 12 d-i where the problem occurred. This is as shown in FIG. 7 and as set forth in the configuration information 202 for normal operation (600).
- At the same time, when each of the interface control modules 12 a is notified of a problem, the interface control modules 12 a refer to the configuration information for a problem (700) with the operation of the cache memory 12 d-i. The interface control modules 12 a write (706) the primary data and secondary data to the cache memory 12 d determined by the configuration information for a problem (700) with the operation of the cache memory 12 d-i and return a completed response to the server.
- The cache control modules 12 c (including the (i+1)th cache control module described above) use the configuration information for problems (700) with the cache memory 12 d-i and write out (708) the primary data, which is managed by the cache control module 12 c and written to the cache memory, to the disk device 12 f.
- If, for example, as shown in FIG. 5(b), there were a problem with the cache memory 12 d-1, the secondary data in the cache memory 12 d-2 corresponding to the logical volumes 12 g 1 through 10 managed by the cache memory 12 d-1 would be immediately written out to the disk device 12 f through the disk control module 12 e.
- If the write request from the server 11 is, for example, a write request to the range of logical volumes 12 g 1˜10, the interface control module 12 a writes the primary data to the cache memory 12 d-2 and the secondary data to the cache memory 12 d-3 and returns a completed response to the server 11. Then, based on the configuration information for problem operation, each of the cache control modules 12 c would write out to the disk device 12 f the primary data written to the cache memory by the cache control module 12 c that manages the primary data.
- (3) When Increasing the Number of Cache Memory Modules
- In addition to the configuration information 202 before the cache module increase, the configuration information management module 12 b includes the configuration information settings from when the cache memory 12 d was set up (802) and from when the amount of cache memory 12 d is increased (804). As shown in FIG. 8, the configuration information management module 12 b notifies (806) all of the interface control modules 12 a and the cache control modules 12 c of the increase in the number of cache memory modules 12 d.
- Also as shown in FIG. 8, each of the cache control modules 12 c arranges the data according to the configuration information after the cache memory was set up (804) and then moves (808) the data among the cache memory modules.
- Concerning the data written in the cache memory modules 12 d, the cache control module 12 c that manages the primary data, as set forth in the configuration information 202 after the cache memory is set up, writes (810) the data out to the disk device 12 f through the disk control module 12 e.
- At the same time, each of the interface control modules 12 a, as shown in FIG. 8, writes (812) the primary data and secondary data to the cache memory as set forth in the configuration information after the cache memory has been set up (804) and returns a completed response to the server 11.
- FIG. 9 shows an operation 900 of the present invention of increasing the number of cache memory modules 12 d from two to three.
- As shown in the operation 900 of FIG. 9, if the two cache memory modules 12 d-1 and 12 d-2 are to be increased to three cache memory modules (that is, if cache memory module 12 d-3 is being added), all of the interface control modules 12 a and all of the cache control modules 12 c are notified of the addition of the cache memory 12 d-3.
- This will cause the cache control module 12 c to shift the primary data corresponding to the logical volumes 12 g 16˜20 of the cache memory 12 d-2 to the cache memory 12 d-3, as shown by the arrow drawn with a dotted line in FIG. 9. At the same time, the secondary data corresponding to the logical volumes 12 g 11˜15 of the cache memory 12 d-1 will be shifted to the cache memory 12 d-3, also as shown by an arrow drawn with a dotted line in FIG. 9.
- The data subject to the write back operation that takes place while data is being shifted among these cache memory modules 12 d-1, 12 d-2, and 12 d-3 is, as stated earlier, written out to the disk device 12 f through the disk control module 12 e by the cache control module 12 c that manages the primary data according to the new configuration information.
- That is, the cache control module 12 c-1 of the cache memory module 12 d-1 for the logical volumes 12 g 1˜10, the cache control module 12 c-2 of the cache memory module 12 d-2 for the logical volumes 12 g 11˜15, and the cache control module 12 c-3 of the cache memory module 12 d-3 for the logical volumes 12 g 16˜20 write data out to the disk device 12 f through the disk control module 12 e. When the write back operation has been completed, the corresponding secondary data is deleted from the cache memory 12 d.
- The secondary data of the cache memory 12 d-2 corresponding to the logical volumes 12 g 11˜15 may be in either the cache memory 12 d-1 or the cache memory 12 d-3, so the cache control module 12 c-2 asks the cache control modules 12 c-1 and 12 c-3 to delete the secondary data from both the cache memory 12 d-1 and the cache memory 12 d-3. If there is any such secondary data, it is deleted from the cache memory 12 d.
- When the interface control module 12 a receives a write data request from the server 11, the interface control module 12 a writes the write data (both the primary data and the secondary data) to the cache memory 12 d of the supervisory logical volumes 12 g determined by the new configuration information and returns a completed response to the server 11.
- For example, if there is a write request for the logical volumes 12 g in a range of 11˜15, the primary data will be written to the cache memory 12 d-2 and the secondary data will be written to the cache memory 12 d-3.
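The data movement implied by the FIG. 9 example can be derived mechanically by comparing the configuration information before and after the increase. The sketch below is illustrative (the function name `shifts` and the dictionary layout are assumptions, not from the patent): it lists, per copy kind, every logical volume whose holding module changes.

```python
def shifts(old_cfg, new_cfg):
    """List (kind, volume, src, dst) moves between two configurations.

    Each configuration maps module id -> {"primary": set, "secondary": set}
    of logical volume numbers.
    """
    moves = []
    for kind in ("primary", "secondary"):
        src_of = {v: m for m, c in old_cfg.items() for v in c[kind]}
        dst_of = {v: m for m, c in new_cfg.items() for v in c[kind]}
        for v, dst in sorted(dst_of.items()):
            if v in src_of and src_of[v] != dst:
                moves.append((kind, v, src_of[v], dst))
    return moves

# Before: two modules over volumes 1..20; after: module 3 added (FIG. 9).
before = {1: {"primary": set(range(1, 11)), "secondary": set(range(11, 21))},
          2: {"primary": set(range(11, 21)), "secondary": set(range(1, 11))}}
after = {1: {"primary": set(range(1, 11)), "secondary": set(range(16, 21))},
         2: {"primary": set(range(11, 16)), "secondary": set(range(1, 11))},
         3: {"primary": set(range(16, 21)), "secondary": set(range(11, 16))}}
```

Running `shifts(before, after)` shows the primary data for volumes 16˜20 moving from module 2 to module 3 and the secondary data for volumes 11˜15 moving from module 1 to module 3, as drawn with dotted arrows in FIG. 9.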
- Effects of the Present Invention
- As described above, the present invention includes at least three cache memory modules and duplicates and saves write data while circulating the write data among the cache memory modules. When there is a problem in the cache memory, the write data that was to be written to a region that was controlled by the cache memory module in which the problem occurred is split up among the remaining cache memory modules. This makes it possible to increase the number of cache memory modules in units of one and allows more effective use of cache memory than the conventional hot pair system.
- The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention which fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
Claims (25)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000-167483 | 2000-06-05 | ||
JP2000167483A JP3705731B2 (en) | 2000-06-05 | 2000-06-05 | I / O controller |
Publications (2)
Publication Number | Publication Date |
---|---|
US20010049768A1 true US20010049768A1 (en) | 2001-12-06 |
US6615313B2 US6615313B2 (en) | 2003-09-02 |
Family
ID=18670633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/779,845 Expired - Lifetime US6615313B2 (en) | 2000-06-05 | 2001-02-09 | Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof |
Country Status (2)
Country | Link |
---|---|
US (1) | US6615313B2 (en) |
JP (1) | JP3705731B2 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004114116A1 (en) * | 2003-06-19 | 2004-12-29 | Fujitsu Limited | Method for write back from mirror cache in cache duplicating method |
EP1507204A2 (en) * | 2003-07-22 | 2005-02-16 | Hitachi, Ltd. | Storage system with cache memory |
US20050102582A1 (en) * | 2003-11-11 | 2005-05-12 | International Business Machines Corporation | Method and apparatus for controlling data storage within a data storage system |
US20070118712A1 (en) * | 2005-11-21 | 2007-05-24 | Red Hat, Inc. | Cooperative mechanism for efficient application memory allocation |
US20080046538A1 (en) * | 2006-08-21 | 2008-02-21 | Network Appliance, Inc. | Automatic load spreading in a clustered network storage system |
US20080276032A1 (en) * | 2004-08-27 | 2008-11-06 | Junichi Iida | Arrangements which write same data as data stored in a first cache memory module, to a second cache memory module |
US10061667B2 (en) * | 2014-06-30 | 2018-08-28 | Hitachi, Ltd. | Storage system for a memory control method |
US10234929B2 (en) | 2015-04-30 | 2019-03-19 | Fujitsu Limited | Storage system and control apparatus |
Families Citing this family (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7103653B2 (en) * | 2000-06-05 | 2006-09-05 | Fujitsu Limited | Storage area network management system, method, and computer-readable medium |
JP4209108B2 (en) * | 2001-12-20 | 2009-01-14 | 株式会社日立製作所 | Storage device control method, storage device used in this method, disk array device, and disk controller |
US7136966B2 (en) | 2002-03-18 | 2006-11-14 | Lsi Logic Corporation | Method and apparatus for using a solid state disk device as a storage controller cache |
US7149846B2 (en) * | 2002-04-17 | 2006-12-12 | Lsi Logic Corporation | RAID protected external secondary memory |
JP4412981B2 (en) | 2003-11-26 | 2010-02-10 | 株式会社日立製作所 | Storage system and data caching method in the same system |
JP4429763B2 (en) * | 2004-02-26 | 2010-03-10 | 株式会社日立製作所 | Information processing apparatus control method, information processing apparatus, and storage apparatus control method |
US7644239B2 (en) | 2004-05-03 | 2010-01-05 | Microsoft Corporation | Non-volatile memory cache performance improvement |
JP4715286B2 (en) * | 2004-05-11 | 2011-07-06 | 株式会社日立製作所 | Computer system and computer system control method |
JP4555040B2 (en) * | 2004-09-22 | 2010-09-29 | 株式会社日立製作所 | Storage device and storage device write access processing method |
US7490197B2 (en) | 2004-10-21 | 2009-02-10 | Microsoft Corporation | Using external memory devices to improve system performance |
JP4688514B2 (en) * | 2005-02-14 | 2011-05-25 | 株式会社日立製作所 | Storage controller |
JP2006259945A (en) * | 2005-03-16 | 2006-09-28 | Nec Corp | Redundant system, its configuration control method and its program |
JP4561462B2 (en) * | 2005-05-06 | 2010-10-13 | 富士通株式会社 | Dirty data processing method, dirty data processing device, and dirty data processing program |
US8914557B2 (en) | 2005-12-16 | 2014-12-16 | Microsoft Corporation | Optimizing write and wear performance for a memory |
JP2007265271A (en) * | 2006-03-29 | 2007-10-11 | Nec Corp | Storage device, data arrangement method and program |
JP4836647B2 (en) * | 2006-04-21 | 2011-12-14 | 株式会社東芝 | Storage device using nonvolatile cache memory and control method thereof |
JP2008217575A (en) * | 2007-03-06 | 2008-09-18 | Nec Corp | Storage device and configuration optimization method thereof |
US7975109B2 (en) | 2007-05-30 | 2011-07-05 | Schooner Information Technology, Inc. | System including a fine-grained memory and a less-fine-grained memory |
US8631203B2 (en) | 2007-12-10 | 2014-01-14 | Microsoft Corporation | Management of external memory functioning as virtual cache |
JP4985391B2 (en) * | 2007-12-28 | 2012-07-25 | 日本電気株式会社 | Disk array device, physical disk recovery method, and physical disk recovery program |
JP4862841B2 (en) * | 2008-02-25 | 2012-01-25 | 日本電気株式会社 | Storage apparatus, system, method, and program |
US8229945B2 (en) | 2008-03-20 | 2012-07-24 | Schooner Information Technology, Inc. | Scalable database management software on a cluster of nodes using a shared-distributed flash memory |
US8732386B2 (en) * | 2008-03-20 | 2014-05-20 | Sandisk Enterprise IP LLC. | Sharing data fabric for coherent-distributed caching of multi-node shared-distributed flash memory |
US9032151B2 (en) | 2008-09-15 | 2015-05-12 | Microsoft Technology Licensing, Llc | Method and system for ensuring reliability of cache data and metadata subsequent to a reboot |
US8032707B2 (en) | 2008-09-15 | 2011-10-04 | Microsoft Corporation | Managing cache data and metadata |
US7953774B2 (en) | 2008-09-19 | 2011-05-31 | Microsoft Corporation | Aggregation of write traffic to a data store |
US8868487B2 (en) | 2010-04-12 | 2014-10-21 | Sandisk Enterprise Ip Llc | Event processing in a flash memory-based object store |
US8725951B2 (en) | 2010-04-12 | 2014-05-13 | Sandisk Enterprise Ip Llc | Efficient flash memory-based object store |
US9047351B2 (en) | 2010-04-12 | 2015-06-02 | Sandisk Enterprise Ip Llc | Cluster of processing nodes with distributed global flash memory using commodity server technology |
US8856593B2 (en) | 2010-04-12 | 2014-10-07 | Sandisk Enterprise Ip Llc | Failure recovery using consensus replication in a distributed flash memory system |
US9164554B2 (en) | 2010-04-12 | 2015-10-20 | Sandisk Enterprise Ip Llc | Non-volatile solid-state storage system supporting high bandwidth and random access |
US8954385B2 (en) | 2010-06-28 | 2015-02-10 | Sandisk Enterprise Ip Llc | Efficient recovery of transactional data stores |
US8694733B2 (en) | 2011-01-03 | 2014-04-08 | Sandisk Enterprise Ip Llc | Slave consistency in a synchronous replication environment |
US8874515B2 (en) | 2011-04-11 | 2014-10-28 | Sandisk Enterprise Ip Llc | Low level object version tracking using non-volatile memory write generations |
US9135064B2 (en) | 2012-03-07 | 2015-09-15 | Sandisk Enterprise Ip Llc | Fine grained adaptive throttling of background processes |
WO2014009994A1 (en) * | 2012-07-10 | 2014-01-16 | Hitachi, Ltd. | Disk subsystem and method for controlling memory access |
US9535612B2 (en) | 2013-10-23 | 2017-01-03 | International Business Machines Corporation | Selecting a primary storage device |
CN109154906B (en) * | 2016-07-11 | 2021-09-21 | 株式会社日立制作所 | Storage device, control method of storage device, and controller for storage device |
JP7056874B2 (en) * | 2019-03-13 | 2022-04-19 | Necプラットフォームズ株式会社 | Controls, disk array devices, control methods, and programs |
JP7318367B2 (en) * | 2019-06-28 | 2023-08-01 | 富士通株式会社 | Storage control device and storage control program |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3409859B2 (en) * | 1991-01-31 | 2003-05-26 | 株式会社日立製作所 | Control method of control device |
US5615329A (en) * | 1994-02-22 | 1997-03-25 | International Business Machines Corporation | Remote data duplexing |
JPH07281959A (en) | 1994-04-12 | 1995-10-27 | Fuji Electric Co Ltd | Redundancy system for disk storage |
JP3457394B2 (en) | 1994-09-16 | 2003-10-14 | 株式会社東芝 | Information storage device |
US5412668A (en) * | 1994-09-22 | 1995-05-02 | International Business Machines Corporation | Parity striping feature for optical disks |
US6041396A (en) * | 1996-03-14 | 2000-03-21 | Advanced Micro Devices, Inc. | Segment descriptor cache addressed by part of the physical address of the desired descriptor |
JP3411451B2 (en) | 1996-08-30 | 2003-06-03 | 株式会社日立製作所 | Disk array device |
US6457098B1 (en) * | 1998-12-23 | 2002-09-24 | Lsi Logic Corporation | Methods and apparatus for coordinating shared multiple raid controller access to common storage devices |
US6460122B1 (en) * | 1999-03-31 | 2002-10-01 | International Business Machine Corporation | System, apparatus and method for multi-level cache in a multi-processor/multi-controller environment |
US6341331B1 (en) * | 1999-10-01 | 2002-01-22 | International Business Machines Corporation | Method and system for managing a raid storage system with cache |
-
2000
- 2000-06-05 JP JP2000167483A patent/JP3705731B2/en not_active Expired - Lifetime
-
2001
- 2001-02-09 US US09/779,845 patent/US6615313B2/en not_active Expired - Lifetime
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7610446B2 (en) * | 2003-06-19 | 2009-10-27 | Fujitsu Limited | RAID apparatus, RAID control method, and RAID control program |
WO2004114115A1 (en) * | 2003-06-19 | 2004-12-29 | Fujitsu Limited | Raid device, raid control method, and raid control program |
WO2004114116A1 (en) * | 2003-06-19 | 2004-12-29 | Fujitsu Limited | Method for write back from mirror cache in cache duplicating method |
US20050216660A1 (en) * | 2003-06-19 | 2005-09-29 | Fujitsu Limited | RAID apparatus, RAID control method, and RAID control program |
EP1507204A3 (en) * | 2003-07-22 | 2006-12-27 | Hitachi, Ltd. | Storage system with cache memory |
EP1507204A2 (en) * | 2003-07-22 | 2005-02-16 | Hitachi, Ltd. | Storage system with cache memory |
US20050102582A1 (en) * | 2003-11-11 | 2005-05-12 | International Business Machines Corporation | Method and apparatus for controlling data storage within a data storage system |
US7219256B2 (en) * | 2003-11-12 | 2007-05-15 | International Business Machines Corporation | Method and apparatus for controlling data storage within a data storage system |
US20080276032A1 (en) * | 2004-08-27 | 2008-11-06 | Junichi Iida | Arrangements which write same data as data stored in a first cache memory module, to a second cache memory module |
US7516291B2 (en) | 2005-11-21 | 2009-04-07 | Red Hat, Inc. | Cooperative mechanism for efficient application memory allocation |
US20070118712A1 (en) * | 2005-11-21 | 2007-05-24 | Red Hat, Inc. | Cooperative mechanism for efficient application memory allocation |
US20090172337A1 (en) * | 2005-11-21 | 2009-07-02 | Red Hat, Inc. | Cooperative mechanism for efficient application memory allocation |
US8321638B2 (en) | 2005-11-21 | 2012-11-27 | Red Hat, Inc. | Cooperative mechanism for efficient application memory allocation |
US20080046538A1 (en) * | 2006-08-21 | 2008-02-21 | Network Appliance, Inc. | Automatic load spreading in a clustered network storage system |
US8046422B2 (en) * | 2006-08-21 | 2011-10-25 | Netapp, Inc. | Automatic load spreading in a clustered network storage system |
US10061667B2 (en) * | 2014-06-30 | 2018-08-28 | Hitachi, Ltd. | Storage system for a memory control method |
US10234929B2 (en) | 2015-04-30 | 2019-03-19 | Fujitsu Limited | Storage system and control apparatus |
Also Published As
Publication number | Publication date |
---|---|
JP2001344154A (en) | 2001-12-14 |
JP3705731B2 (en) | 2005-10-12 |
US6615313B2 (en) | 2003-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6615313B2 (en) | Disk input/output control device maintaining write data in multiple cache memory modules and method and medium thereof | |
US4603380A (en) | DASD cache block staging | |
US7725445B2 (en) | Data replication among storage systems | |
US10664177B2 (en) | Replicating tracks from a first storage site to a second and third storage sites | |
US7634617B2 (en) | Methods, systems, and computer program products for optimized copying of logical units (LUNs) in a redundant array of inexpensive disks (RAID) environment using buffers that are larger than LUN delta map chunks | |
US7634618B2 (en) | Methods, systems, and computer program products for optimized copying of logical units (LUNs) in a redundant array of inexpensive disks (RAID) environment using buffers that are smaller than LUN delta map chunks | |
CN101571822B (en) | Storage controller and data management method | |
US7496718B2 (en) | Data transfer and access control between disk array systems | |
US7127557B2 (en) | RAID apparatus and logical device expansion method thereof | |
US5845295A (en) | System for providing instantaneous access to a snapshot Op data stored on a storage medium for offline analysis | |
EP0727745A1 (en) | Memory control apparatus and its control method | |
US7373470B2 (en) | Remote copy control in a storage system | |
JPH01251258A (en) | Shared area managing system in network system | |
US6510491B1 (en) | System and method for accomplishing data storage migration between raid levels | |
JPH10198607A (en) | Data multiplexing system | |
US7451285B2 (en) | Computer systems, management computers and storage system management method | |
KR0175983B1 (en) | Data processing system having demand based write through cache with enforced ordering | |
US20060143313A1 (en) | Method for accessing a storage device | |
JPH0452743A (en) | Control system for duplex external storage | |
US20060265559A1 (en) | Data processing system | |
CN111208942B (en) | Distributed storage system and storage method thereof | |
US20240111456A1 (en) | Storage device controller and method capable of allowing incoming out-of-sequence write command signals | |
JPH0659819A (en) | Uninterruptible expanding method for storage device capacity | |
JPH09326832A (en) | Common use buffer device and its control method | |
JPH064494A (en) | Plural file merging system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: FUJITSU LIMITED, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KATO, TADAOMI;OMURA, HIDEAKI;KUBOTA, HIROMI;REEL/FRAME:011544/0469 Effective date: 20010113 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FPAY | Fee payment |
Year of fee payment: 12 |