WO2010056571A2 - Managing cache data and metadata - Google Patents

Managing cache data and metadata

Info

Publication number
WO2010056571A2
WO2010056571A2 (PCT/US2009/063127)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
memory
metadata
stored
computer
Prior art date
Application number
PCT/US2009/063127
Other languages
French (fr)
Other versions
WO2010056571A3 (en)
Inventor
Mehmet Iyigun
Yevgeniy Bak
Michael Fortin
David Fields
Cenk Ergan
Alexander Kirshenbaum
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Priority to JP2011536387A priority Critical patent/JP2012508932A/en
Priority to ES09826570.5T priority patent/ES2663701T3/en
Priority to EP09826570.5A priority patent/EP2353081B1/en
Priority to CN200980145878.1A priority patent/CN102216899B/en
Publication of WO2010056571A2 publication Critical patent/WO2010056571A2/en
Publication of WO2010056571A3 publication Critical patent/WO2010056571A3/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
    • G06F12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with prefetch
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G06F12/0871 Allocation or management of cache space
    • G06F12/14 Protection against unauthorised use of memory or access to memory
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 Providing a specific technical effect
    • G06F2212/1016 Performance improvement
    • G06F2212/1032 Reliability improvement, data loss prevention, degraded operation etc.
    • G06F2212/1052 Security improvement
    • G06F2212/20 Employing a main memory using a specific memory technology
    • G06F2212/202 Non-volatile memory
    • G06F2212/2022 Flash memory
    • G06F2212/22 Employing cache memory using specific memory technology
    • G06F2212/222 Non-volatile memory
    • G06F2212/46 Caching storage objects of specific type in disk cache
    • G06F2212/466 Metadata, control data

Definitions

  • This invention relates to techniques for employing non-volatile memory devices, such as removable and non-removable non-volatile random access memory (NVRAM) devices.
  • Some conventional operating systems provide a capability to employ a non-volatile memory device (i.e., a peripheral device operable to provide auxiliary storage and/or memory to a computer, such as a flash memory USB drive) as a block or file-level cache for slower storage devices (e.g., a disk storage medium, or one or more storage devices accessible via a network), to improve the performance of the operating system and/or applications.
  • Employing a cache device to cache data stored on such a slower device offers opportunities to significantly improve the speed of input/output (I/O) operations of operating systems and/or applications.
  • For example, the Microsoft Windows Vista operating system, produced by Microsoft Corporation of Redmond, WA, includes a feature known as ReadyBoost which allows users to employ cache devices to cache data also residing in a slower storage device (referred to hereinafter as “disk storage” or “disk” for convenience, although it should be understood that these terms refer generally to any storage mechanism(s) and/or device(s) to which I/O is typically performed more slowly than a cache device, including storage devices accessible via a network).
  • FIGS. 1A-1B depict example high-level processes 10A-10B whereby a cache manager component 100 manages the caching of data to cache device 110.
  • Cache device 110 may be coupled, using wired and/or wireless communications infrastructure and protocol(s), to a computer (not shown) on which cache manager 100 resides.
  • For example, cache device 110 may be removable from the computer (e.g., comprising a flash memory USB drive), non-removable, and/or accessible to the computer via one or more wired and/or wireless networks.
  • In process 10A, a write request is received by cache manager 100 specifying that data should be written to address X on disk storage (i.e., cached volume 120).
  • Cache manager 100 processes the request by causing the data to be written to address X on cached volume 120 in operation 101, and also to address Y on cache device 110 in operation 102.
  • Process 10B includes operations performed thereafter when a read request is received specifying that the data stored at address X on cached volume 120 should be read.
  • Cache manager 100 determines that the data is cached on cache device 110 at address Y, and causes the data at address Y to be read in operation 103. The data is then served from the cache device to satisfy the read request in operation 104.
  • The cache manager maintains a mapping of disk addresses (e.g., address X) to corresponding cache addresses (e.g., address Y) in metadata, and this "cache metadata" is usually employed in reading from or writing to the cache device.
  • This cache metadata is maintained in memory and accessed by the cache manager when I/O requests are received.
  • For example, when a read request directed to disk offset X is received, the cache manager uses the cache metadata to determine that the data is also stored at cache offset Y, and satisfies the request by causing the data to be read from cache offset Y rather than disk offset X.
  • When a write request directed to disk offset X is received by the cache manager, the cache manager employs the cache metadata to determine whether the data at that disk address is also stored in cache. If so (e.g., if the data is stored at cache address Y), the cache manager may cause the data to be written to the appropriate address in cache, or evict the cache contents at that address. If not, the cache manager may cause the data to be written to cache, and may update the cache metadata so that future reads to disk offset X may instead be serviced from the data stored in cache.
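  • As an illustration only (not the patent's claimed implementation), the write-through and read paths described above might be sketched as follows, assuming hypothetical disk and cache objects that expose read(offset) and write(offset, data):

```python
class CacheManager:
    """Illustrative write-through cache manager; not the patent's implementation."""

    def __init__(self, disk, cache):
        self.disk = disk                  # cached volume 120: read(offset), write(offset, data)
        self.cache = cache                # cache device 110: read(offset), write(offset, data)
        self.metadata = {}                # cache metadata: disk offset X -> cache offset Y
        self.next_cache_offset = 0

    def write(self, disk_offset, data):
        # Write the data both to the cached volume and to the cache device.
        self.disk.write(disk_offset, data)
        cache_offset = self.metadata.get(disk_offset)
        if cache_offset is None:
            cache_offset = self.next_cache_offset
            self.next_cache_offset += len(data)
            self.metadata[disk_offset] = cache_offset
        self.cache.write(cache_offset, data)

    def read(self, disk_offset):
        cache_offset = self.metadata.get(disk_offset)
        if cache_offset is not None:
            return self.cache.read(cache_offset)   # serve from the cache device
        return self.disk.read(disk_offset)         # cache miss: read from disk
```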
  • The ReadyBoost feature of the Windows Vista operating system supports cache devices with up to a four gigabyte storage capacity. (At the time Windows Vista was released, the maximum storage capacity of cache devices was approximately two gigabytes.) The storage capacity of cache devices has grown rapidly in recent years, with some cache devices providing a storage capacity of up to sixteen gigabytes, which may store the equivalent of thirty-two gigabytes of data when compressed.
  • Applicants have appreciated that cache devices having relatively large storage capacity offer significant opportunity to improve the speed of I/O operations performed by operating systems and applications.
  • Applicants have also appreciated that one reason conventional operating systems support cache devices with only relatively limited storage capacity is that cache contents must be repopulated when certain types of power transitions (e.g., standby, hibernate (or equivalent modes used by non-Microsoft Windows operating systems), or reboot) occur.
  • Repopulating cache contents can take considerable time and consume significant processing resources.
  • For example, an eight gigabyte flash memory device, which may hold up to sixteen gigabytes of compressed data, may take up to thirty minutes to repopulate using background I/O from disk at approximately ten megabytes per second.
  • One reason cache contents must be repopulated across certain power transitions is that there is no way to reliably ensure that cache contents accurately represent the contents of disk storage when the computer is restarted, because the contents of the cache device and/or the disk may have been modified during the power transition. For example, when a first computer is shut down, a hacker could disconnect a removable cache device, connect it to another computer, and modify the cache contents, so that if the device were then reconnected to the first computer, incorrect data (hereinafter referred to as "inauthentic" data) could be served from the cache device to satisfy I/O requests.
  • Cache contents might also become corrupted during a power transition due to a hardware failure of the computer or cache device. Cache contents may also become "stale" during a power transition because data on disk was updated during the transition, so that when the computer is restarted, the cache contents may no longer accurately represent disk contents. For example, after shutdown a user might boot the disk into another operating system that does not recognize the cache device and modify data stored on the disk that is cached on the cache device, so that when the computer is restarted the cache contents no longer reflect what is stored on the disk.
  • Further, certain operations performed on a computer during shutdown might occur after the cache device is rendered inaccessible to the operating system (e.g., after the cache device is turned off), so that any writes to disk performed by the operating system after this point in time may not be accurately reflected by cache contents. Any of numerous events may cause cache contents to become stale across a power transition.
  • Embodiments of the present invention provide techniques for managing these and other concerns, so that cache contents may be relied upon as accurately reflecting data stored on disk across a power transition. For example, some embodiments of the invention provide techniques for verifying that cache contents remain authentic across a power transition. In addition, some embodiments provide techniques for reliably ensuring that cache contents do not become stale across a power transition. Further, some embodiments provide techniques for managing cache metadata across power transitions as well as during normal (“steady state”) operations, ensuring that the cache metadata may be efficiently accessed and reliably saved and restored when a power transition occurs.
  • As a result, some embodiments of the invention may enable a cache device with substantial storage capacity to be employed to significantly speed up I/O operations performed by the operating system and/or applications.
  • The increased speed of I/O operations may not only expedite normal, "steady state" operations of the computer, but also significantly speed up operations performed during boot, so that the computer is ready for use much more quickly.
  • Some embodiments of the invention provide a method for operating a computer comprising a memory and having coupled thereto a storage medium and a cache device, the storage medium storing a plurality of data items each at respective addresses, each of the plurality of data items also being stored at a corresponding address on the cache device, with cache metadata accessible to the computer providing a mapping between the address on the storage medium and the corresponding address on the cache device at which each data item is stored.
  • The method comprises acts of: (A) storing the cache metadata in a hierarchical data structure comprising a plurality of hierarchy levels; and (B) loading only a subset of the plurality of hierarchy levels to the memory.
  • Other embodiments provide a computer system comprising: a memory; a storage medium storing a plurality of data items at respective addresses; a cache device also storing the plurality of data items at corresponding addresses and cache metadata providing a mapping between the address on the storage medium and the corresponding address on the cache device at which each data item is stored, the cache metadata being stored in a hierarchical data structure comprising a plurality of hierarchy levels; and at least one processor programmed to: upon initiating a reboot of the computer, load only a subset of the plurality of hierarchy levels to the memory; process requests to read data items stored at respective addresses on the storage medium by using the cache metadata to identify corresponding addresses at which the data items are stored in the cache device and by storing identified corresponding addresses in the memory; and process a command to shut down the computer by transferring the subset of the plurality of hierarchy levels and the identified corresponding addresses from the memory to the cache device.
  • FIGS. 1A-1B are block diagrams depicting techniques for writing to and reading from a cache device, in accordance with the prior art;
  • FIG. 3 is a block diagram depicting an exemplary technique for ensuring that cache data accurately reflects data stored on disk after a power transition, in accordance with some embodiments of the invention
  • FIG. 4 is a block diagram depicting an exemplary technique for storing cache metadata, in accordance with some embodiments of the invention.
  • FIG. 5 is a block diagram depicting exemplary storage operations for cache metadata, in accordance with some embodiments of the invention.
  • FIG. 6 is a flowchart depicting an exemplary technique for servicing read requests using a cache device, in accordance with some embodiments of the invention;
  • FIG. 7 is a block diagram depicting an example computer which may be used to implement aspects of the invention.
  • FIG. 8 is a block diagram depicting an example computer memory on which instructions implementing aspects of the invention may be recorded.
  • Some embodiments of the invention provide techniques for ensuring that cache contents accurately reflect the contents of disk storage across a power transition. For example, some embodiments provide a capability for ensuring that cache contents remain authentic and/or have not become stale across the power transition. Further, some embodiments provide techniques for managing cache metadata, to ensure that metadata has not been tampered with during a power transition. In addition, some embodiments provide a capability for storing cache metadata which may improve the efficiency with which both power transitions and normal operations may be performed. The sections that follow describe these embodiments in detail.
  • When a computer experiences a power transition (e.g., is taken into standby or hibernate mode, or is rebooted), a cache device may be disconnected from the computer, and its contents may be altered (e.g., by a malicious hacker). For example, when a computer is brought into standby or hibernate mode, a removable cache device such as a flash memory drive may be disconnected from the computer and its contents modified. Even non-removable devices such as internal NVRAM devices may be disconnected and their contents changed when the operating system is rebooted (i.e., reloaded, thereby restarting the computer).
  • Some embodiments of the invention provide techniques for detecting modifications that occur to cache contents during a power transition, to ensure that I/O requests are not satisfied using inauthentic data from cache.
  • In some embodiments, a capability is provided to detect any "offline modifications" that occur to cache contents during a power transition and render them inauthentic.
  • For example, a representation may be calculated or derived from at least a portion of the data and/or other information in a predetermined manner.
  • The representation may be generated a first time when the data is written to cache, and stored at one or more locations.
  • For example, the representation may be written to cache along with the data, or to some other location(s).
  • The representation may be stored in a manner which associates the representation with the data (e.g., it may be written to a cache address adjacent that to which the data is written, written to cache metadata associated with the data, and/or associated in some other fashion).
  • When the data is later read from cache, the representation may also be retrieved.
  • The representation may then be re-generated in the predetermined manner, and the regenerated representation compared to the retrieved representation. If the representations match, the data retrieved from cache is determined to be authentic, and served to satisfy the read request. If not, a request is issued to read the data instead from disk storage to satisfy the read request, and the inauthentic data stored on cache may be evicted (e.g., deleted).
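  • As a hedged illustration of this generate-and-verify pattern (the patent leaves the representation technique open), the sketch below uses an HMAC-SHA256 tag as the representation; the cache_mgr object and its lookup, evict and disk attributes are assumed helpers, not part of the patent:

```python
import hmac
import hashlib

def make_representation(key: bytes, data: bytes) -> bytes:
    # Generated at write time and stored alongside the cached data.
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_representation(key: bytes, data: bytes, stored_tag: bytes) -> bool:
    # Regenerated at read time and compared with the stored representation.
    return hmac.compare_digest(make_representation(key, data), stored_tag)

def read_with_verification(cache_mgr, key: bytes, disk_offset: int) -> bytes:
    entry = cache_mgr.lookup(disk_offset)        # assumed helper: returns (data, tag) or None
    if entry is not None:
        data, tag = entry
        if verify_representation(key, data, tag):
            return data                          # authentic: serve from cache
        cache_mgr.evict(disk_offset)             # inauthentic: evict the cache entry
    return cache_mgr.disk.read(disk_offset)      # fall back to the cached volume (disk)
```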
  • FIGS. 2A-2B depict this process in greater detail.
  • Process 20A, shown in FIG. 2A, includes operations performed when data is written to cache.
  • Process 20B, shown in FIG. 2B, includes operations performed subsequently when the data is read from cache.
  • Processes 20A-20B each include operations performed by cached volume 120 (on the left side of each figure), cache manager 100 (in the middle) and cache device 110 (on the right).
  • In act 205, a write request directed to address X on disk storage is received by cache manager 100.
  • Cache manager 100 then employs cache metadata (not shown) to determine an address Y on cache device 110 to which the data should also be written.
  • In act 215, cache manager 100 also generates the representation of at least a portion of the data.
  • Embodiments of the invention may generate this representation using any suitable technique.
  • For example, one or more cryptographic authentication techniques may be employed to generate the representation.
  • In some embodiments, the representation may comprise a message authentication code (MAC) generated from the data and a set of secret keys and per-data item sequence numbers.
  • However, the invention is not limited to such an implementation, as any suitable technique for generating the representation may be employed.
  • For example, cryptographic authentication techniques need not be employed.
  • A strong hash and/or cyclic redundancy code (CRC) might alternatively be used to represent the data, and may be generated from individual data items stored to cache, or for one or more groups of data items.
  • Applicants have appreciated that if the goal of verifying data authenticity were to merely detect instances of hardware corruption (i.e., hacking of data were not a concern), then using a CRC may be sufficient, and may consume less processing resources than generating a MAC for each data item. However, if the goal is to prevent a hacker or malicious actor from modifying cache contents, then a cryptographic solution may be preferable, so that a representation such as a MAC may be used.
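  • Where accidental hardware corruption is the only concern, a keyless checksum such as a CRC may be substituted for the MAC, as noted above. A minimal, purely illustrative sketch:

```python
import zlib

def make_crc(data: bytes) -> int:
    # Detects accidental corruption, but offers no protection against
    # deliberate tampering, since a CRC can be recomputed without any secret.
    return zlib.crc32(data)

def verify_crc(data: bytes, stored_crc: int) -> bool:
    return zlib.crc32(data) == stored_crc
```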
  • The invention is not limited to any particular implementation, as any suitable technique may be employed.
  • Cache manager 100 then issues a request to cache device 110 to write the data to cache address Y.
  • Cache manager 100 also issues a request to cache device 110 to write the representation.
  • For example, cache manager 100 may specify that the representation should be written to one or more locations adjacent to cache address Y, or be stored in cache metadata for the data, and/or be stored using any other technique.
  • The invention is not limited to writing the representation to any particular location (e.g., it need not be written to cache device 110). If written to cache, the representation may be associated with the data in any manner desired.
  • Cache device 110 receives the request and processes it by writing the data and the representation in act 225.
  • In act 230, cache manager 100 issues a corresponding request to cached volume 120 to write the data to disk address X.
  • Cached volume 120 receives this request in act 235 and processes it by writing the data to address X in act 240.
  • Acts 230-240 may be performed in parallel with acts 215-225, or at any other suitable time(s), as the invention is not limited to any particular implementation.
  • Process 20A then completes.
  • Process 20B includes operations performed to read the data stored to cache.
  • Cache manager 100 receives a request to read the data stored at address X on cached volume 120. Using cache metadata (not shown), cache manager 100 determines in act 250 that the data is stored at address Y on cache device 110.
  • Cache manager 100 then issues a read request to cache device 110 to retrieve both the data stored at address Y and the associated representation. The request is received by cache device 110 in act 260 and processed in act 265, whereupon cache device 110 returns the results to cache manager 100.
  • In act 270, cache manager 100 determines whether the data retrieved from cache can be verified. In some embodiments, this is done by re-generating the representation of the data, and comparing the regenerated representation with the representation originally generated in act 215. For example, act 270 may include regenerating a MAC or CRC for the data, and comparing it to the representation retrieved from cache in act 265.
  • If it is determined in act 270 that the representation can be verified, the process proceeds to act 275, wherein the data retrieved from cache device 110 is served to satisfy the read request, and process 20B then completes. If it is determined in act 270 that the representation cannot be verified, the process proceeds to act 280, wherein cache manager 100 issues a request to cache device 110 to evict (e.g., erase or otherwise make inaccessible) the data stored at address Y. Cache manager 100 then issues a request to cached volume 120 to read the data from address X on disk in act 285. This request is received in act 290 and processed in act 295, whereupon the data is returned to cache manager 100. The data read from address X is then served to satisfy the read request in act 299. Process 20B then completes.
  • In some embodiments, any key(s) used to generate a representation may be written to locations other than the cache device for the duration of the power transition, to prevent a hacker from gaining access to the keys and regenerating representations for altered data items.
  • For example, keys may be stored in disk storage (e.g., when the computer is shut down) to prevent unauthorized access.
  • However, the invention is not limited to such an implementation, as keys need not be stored, and if stored, may reside in any suitable location.
  • For example, stored keys may be placed in any configuration store provided by the operating system that is available during system boot (e.g., the system registry in Windows), or re-generated based on some user input (e.g., a password) so that no key storage is necessary.
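  • As one hedged illustration of re-generating a key from user input rather than storing it, a key could be derived from a password with a standard key-derivation function; the salt handling and parameters below are assumptions, not part of the patent:

```python
import hashlib
import os

def derive_key_from_password(password: str, salt: bytes) -> bytes:
    # Re-generates the authentication key from user input so that no key
    # needs to be stored across the power transition.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000, dklen=32)

# Example: the salt (which is not secret) could be kept in a configuration
# store that is available during system boot.
salt = os.urandom(16)
key = derive_key_from_password("user passphrase", salt)
```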
  • Some embodiments of the invention provide mechanisms for detecting when these "offline writes" occur, thereby ensuring that cache contents accurately reflect data stored on disk after a power transition occurs.
  • The semantics of certain power transitions are such that data on non-removable storage devices (e.g., disk storage) cannot be modified during the transition.
  • As a result, cache contents corresponding to data on such non-removable media generally do not become stale across those transitions.
  • During other power transitions, however, a number of things can happen which make it possible for data on disk to be modified. For example, a user may boot the disk into another operating system on that computer, or connect the disk to another computer, and modify data stored on disk.
  • In addition, the mechanics of shutdown of many conventional operating systems are such that at some point during the shutdown, a cache device is turned off and is no longer accessible by the operating system, but the operating system may continue to access the disk. As such, the operating system may update data items on disk which are cached on the cache device. Because the cache device has been turned off, the operating system has no way of also updating the cached copies, so that they are rendered stale.
  • Accordingly, some embodiments of the invention provide techniques for detecting modifications to data stored on disk after a shutdown is initiated, so that cache contents which are rendered stale by such modifications may be updated, evicted from cache, or otherwise handled.
  • In some embodiments, this is accomplished using a write recorder component, which may, for example, be implemented as a driver in the operating system's I/O path, although the invention is not limited to such an implementation.
  • Alternatively, a write recorder component may be hardware-based.
  • For example, disk storage hardware might provide one or more interfaces capable of identifying the set of modifications that occurred during a certain time period, or whether any modifications occurred during a certain time period.
  • As another example, disk storage hardware may provide a spin-up/power-up/boot counter which may be employed to deduce that at least some stored data items have been updated, in which case cache contents corresponding to the data stored on disk may be evicted (this should not occur frequently, so employing the cache device should still deliver substantial benefits).
  • The invention is not limited to any particular implementation.
  • In some embodiments, the write recorder component is configured to become active when shutdown is initiated, and to keep track of all writes performed to disk storage until shutdown completes. As a result, when the computer is later restarted, these writes may be applied to cache contents. For example, when the computer is restarted and disk volumes come online, the cache manager may then be started, and may begin tracking writes to disk. The cache manager may query the write recorder component to determine the offline writes that occurred after the cache device was shut off, merge these writes with those which the cache manager tracked during startup, and apply the merged set of writes to cache contents.
  • Applying writes to cache contents may include, for example, updating the cache contents corresponding to the data on disk to which the writes were directed (e.g., performing the same write operations to these cache contents), evicting these cache contents, a combination of the two (e.g., applying write operations to certain cache contents and evicting others), or performing some other operation(s).
  • Thereafter, the write recorder component may be shut down, and the cache device may begin servicing I/O requests.
  • FIG. 3 depicts an example process 30 for tracking offline writes and applying these writes to cache contents.
  • Process 30 includes operations performed by cache manager 100, write recorder 300, cache device 110 and cached volume 120 during a computer's shutdown and subsequent reboot.
  • In act 305, cache manager 100 activates write recorder 300 and supplies to it a "persistence identifier" which identifies the set (i.e., generation) of write operations to be tracked by the write recorder. (Examples of the uses for a persistence identifier are described in detail below.)
  • In act 310, cache manager 100 writes the persistence identifier, as well as cache metadata stored in memory, to cache device 110. At this point in the shutdown process, cache device 110 is turned off and becomes inaccessible to cache manager 100.
  • In act 315, write recorder 300 writes the persistence identifier passed to it in act 305 to cached volume 120, and begins tracking any write operations performed to cached volume 120 during shutdown.
  • For example, write recorder 300 may create a log file, or one or more other data structures, on cached volume 120 or at some other location(s) to indicate the addresses on disk to which write operations are performed, and/or the data written to those addresses.
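  • A minimal sketch of such an offline-write log follows; it is illustrative only, and the log format (JSON here) and the record/flush methods are assumptions rather than the patent's design:

```python
import json

class WriteRecorder:
    """Illustrative sketch of a shutdown-time write recorder."""

    def __init__(self, log_path: str, persistence_id: str):
        self.log_path = log_path
        self.persistence_id = persistence_id
        self.entries = []                  # (disk offset, hex-encoded data) pairs

    def record(self, disk_offset: int, data: bytes) -> None:
        # Called for each write to the cached volume after the cache device
        # has been turned off.
        self.entries.append((disk_offset, data.hex()))

    def flush(self) -> None:
        # Persist the persistence identifier and the offline-write log,
        # e.g. to a file on the cached volume.
        with open(self.log_path, "w") as f:
            json.dump({"persistence_id": self.persistence_id,
                       "writes": self.entries}, f)
```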
  • At this point, the computer's shutdown operations have finished. Thereafter, the computer is restarted.
  • As part of the boot process, cached volume 120 comes online and cache manager 100 is started. Cache manager 100 may then begin tracking write operations performed to cached volume 120. For example, cache manager 100 may create a log file and store it on cache device 110, cached volume 120, and/or the computer's memory (not shown in FIG. 3).
  • Write recorder 300 then reads the volume changes logged in act 315, as well as the persistence identifier written to cached volume 120 in act 315. The volume changes and persistence identifier are then passed to cache manager 100 in act 325.
  • In some circumstances, write recorder 300 may be incapable of tracking all writes to disk after cache device 110 has been turned off.
  • For example, hardware data corruption, untimely power failures and/or problems in writing the log file may render write recorder 300 incapable of tracking all offline writes performed to a disk volume.
  • In such cases, write recorder 300 may indicate to cache manager 100 in act 325 that it cannot reliably determine that the log is a complete and accurate record of all offline writes performed. If this occurs, cache manager 100 may evict the entire cache contents, or a portion thereof (e.g., the portion corresponding to a particular disk volume for which the write recorder could not track all write operations), as potentially being unreliable. The remainder of the description of FIG. 3 assumes that write recorder 300 is capable of tracking all offline writes.
  • In act 330, cache manager 100 reads the cache metadata and persistence identifier from cache device 110 into memory. Cache manager 100 determines whether the persistence identifier can be verified (this is described further below). If not, cache manager 100 may evict the entire contents of cache device 110, or a portion thereof (e.g., the portion corresponding to a particular disk volume for which the persistence identifier could not be verified). If the persistence identifier can be verified, cache manager 100 merges any write operations performed to disk storage since the computer was restarted with any write operations tracked by write recorder 300. For example, if one or more logs indicate the data written to each address on disk, cache manager 100 may select the latest update performed to each address and write it to memory.
  • Alternatively, write recorder 300 may be configured to continue recording writes after the computer is restarted, so that cache manager 100 need not record writes performed after that point and merge them with writes tracked by write recorder 300; instead, write recorder 300 may simply provide a complete record of writes to cache manager 100.
  • Cache manager 100 then uses the cache metadata read in act 330 to apply the merged set of writes to the contents of cache device 110 in act 335.
  • As noted above, applying the writes may include evicting cache contents, updating cache contents, doing both, or performing some other operation(s).
  • For example, offline writes tracked by write recorder 300 in act 315 may be applied by evicting the corresponding cache contents, while the writes tracked by cache manager 100 since the computer was restarted may be applied by updating the corresponding cache contents to reflect the writes. Applying write operations to cache contents may be performed in any suitable way, as the invention is not limited to any particular implementation.
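  • A hedged sketch of this merge-and-apply step follows, assuming each log maps a disk offset to the data written (or to None when only the address was recorded, in which case the cached entry is evicted) and reusing the hypothetical cache manager fields from the earlier sketch:

```python
def merge_writes(offline_log: dict, runtime_log: dict) -> dict:
    # Keep only the latest write to each disk offset; writes tracked after
    # the restart take precedence over offline writes logged during shutdown.
    merged = dict(offline_log)
    merged.update(runtime_log)
    return merged

def apply_writes(cache_mgr, merged: dict) -> None:
    for disk_offset, data in merged.items():
        cache_offset = cache_mgr.metadata.get(disk_offset)
        if cache_offset is None:
            continue                                   # address not cached: nothing to apply
        if data is None:
            cache_mgr.evict(disk_offset)               # only the address was logged: evict
        else:
            cache_mgr.cache.write(cache_offset, data)  # update the cached copy
```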
  • It should be appreciated that a cache device may be susceptible to becoming inaccessible for periods of time. For example, if the cache device is accessed via one or more networks, connectivity could be lost, or if the cache device is removable from the computer, a surprise (e.g., unintentional) removal could occur.
  • To handle such events, some embodiments may employ a write recorder that tracks all (or a portion of) writes performed to disk, not just those occurring during shutdown, together with a cache device which is configured to periodically capture cache "snapshots" while still online.
  • While example process 30 of FIG. 3 may detect offline writes performed by the operating system during shutdown, other measures may be needed to detect offline writes performed to disk after shutdown completes. Such writes may occur, for example, when a user boots the disk into another operating system after shutdown, or removes the disk from the computer after shutdown and connects it to another computer, and then modifies data stored on disk.
  • Rather than detecting such writes, some embodiments of the invention instead try to prevent them from occurring. For example, some embodiments attempt to make a particular disk volume inaccessible, after shutdown, to operating systems that do not provide a write recorder component. This may be accomplished in any of numerous ways.
  • For example, write recorder 300 may mark a disk volume in such a way that it becomes un-mountable by operating systems that do not provide a write recorder component to track offline writes. For example, write recorder 300 may modify the volume identifier that indicates the type of file system used on the volume. In this respect, those skilled in the art will recognize that a volume identifier enables an operating system to identify the type of file system used to store data on the volume, thereby enabling the operating system to understand the structure of data stored on the volume, where to find files, etc.
  • For example, if the volume identifier indicates that the volume uses the NT File System (NTFS), another operating system attempting to mount the volume would understand that an NTFS file system would be needed to parse and access the data thereon.
  • If the volume identifier provided no indication of the type of file system used to store data on the volume, most operating systems would fail to mount the volume, as there would be no reliable way to understand the structure of data stored thereon.
  • Accordingly, some embodiments of the invention modify the volume identifier of a disk volume to make it inaccessible, thereby preventing a user from booting the disk volume into another operating system and making offline changes to data stored on the volume.
  • In addition, some embodiments of the invention provide a mechanism for detecting when an operating system mounts the volume.
  • In this respect, any operating system would need to update the volume identifier (e.g., to indicate that an NTFS file system was employed to store data on the volume) to allow data thereon to be accessed. Any such update would be easily detectable upon reboot. If such an update were detected, some embodiments of the invention may assume that the contents of the volume had been modified since the last shutdown, and evict the cache contents corresponding to data stored on the volume.
  • Some embodiments of the invention provide a capability whereby a disk volume may be booted into another operating system which also employs a write recorder component. For example, if a disk were removed from one computer running an operating system that provides a write recorder component, and the disk were booted into another operating system that provides a write recorder component, the other operating system might be configured to recognize that the changed volume identifier indicates that the volume may be cached. As a result, the other operating system may add to a log of offline writes (e.g., stored on the volume) created by the first operating system.
  • Further, some embodiments of the invention provide a mechanism for determining whether a file system was mounted after shutdown. If so, it is assumed that changes were made to data in the file system, and all cache contents corresponding to data in the file system may be evicted.
  • Some embodiments may detect the mounting of a file system after shutdown by placing the file system log at shutdown in a state which would require any operating system attempting to mount the file system to modify the log in some way (e.g., change its location, add a new entry, etc.).
  • For example, as part of the task of logging offline writes, write recorder 300 may note the location and/or content of the file system log when the file system is dismounted (e.g., in the log itself).
  • Because any operating system attempting to mount the file system would have to change the log (e.g., if the file system were an NTFS file system, an operating system attempting to mount it would add an entry to the log), if the log has not changed upon reboot, it is assumed that the file system was not mounted by another operating system during the power transition, so that cache contents corresponding to data stored in the file system have not been rendered stale. Conversely, if the log has been changed in some way (e.g., its location has changed, an entry has been added, etc.), then it is assumed that the file system was mounted by another operating system, and that data stored therein has changed, rendering the cache contents corresponding to data stored in the file system stale. As such, these cache contents may be evicted.
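  • One way such a check could be implemented, shown purely as an illustration, is to record a digest of the file system log at shutdown and compare it at reboot; the digest choice and function names are assumptions:

```python
import hashlib

def fingerprint_fs_log(log_bytes: bytes) -> str:
    # Recorded at shutdown, e.g. alongside the offline-write log.
    return hashlib.sha256(log_bytes).hexdigest()

def mounted_during_transition(recorded_fp: str, current_log_bytes: bytes) -> bool:
    # Any operating system that mounted the file system would have modified
    # its log, so a changed digest at reboot implies the cache may be stale.
    return fingerprint_fs_log(current_log_bytes) != recorded_fp
```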
  • In addition, some embodiments of the invention provide a capability to manage inconsistent generations of cache contents. Inconsistent generations of cache contents may be created for any of numerous reasons. One example may occur when first and second computers, having first and second cache devices connected thereto, each employ the techniques described herein to persist cache contents across power transitions. If the second cache device were connected to the first computer (or the first cache device connected to the second computer) and the first computer were restarted, incorrect data could be served from the second cache device to satisfy I/O requests. This is because the first computer's operating system could deem the contents of the second cache device authentic (since a regenerated representation of the data returned from cache could match a representation originally generated) and not stale (since offline writes could be applied to cache contents).
  • Some embodiments provide a capability to identify inconsistent generations of cache contents so that cache contents persisted previous to the latest shutdown are not erroneously used to satisfy I/O requests.
  • In some embodiments, this capability is provided via a unique persistence identifier, which may be generated (as an example) as shutdown is initiated, in any of numerous ways. For example, GUIDs and/or cryptographic random number generators may be employed for this purpose.
  • The persistence identifier may be stored on the cache device (e.g., in or with cache metadata) as well as on the computer (e.g., on disk and/or in memory) and verified (e.g., by comparing the two versions) as the computer is started. If verification is unsuccessful, cache contents may be evicted as representing a previous persisted cache generation.
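  • A minimal sketch of generating and verifying a persistence identifier using a GUID (one of the options mentioned above); the function names are illustrative assumptions:

```python
import uuid

def new_persistence_id() -> str:
    # Generated as shutdown is initiated; a GUID is one suitable choice.
    return str(uuid.uuid4())

def persistence_id_verified(id_on_cache: str, id_on_computer: str) -> bool:
    # At startup, the copy stored with the cache metadata is compared with
    # the copy stored on the computer; a mismatch indicates a previous
    # (inconsistent) cache generation whose contents should be evicted.
    return id_on_cache == id_on_computer
```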
  • As with the keys discussed above, any keys used to generate a persistence identifier may be written to a location other than the cache device for the duration of a power transition.
  • For example, a write recorder component may write the keys, as well as the persistence identifier, to disk storage (e.g., at shutdown).
  • However, the invention is not limited to such an implementation, as those skilled in the art may envision numerous alternative locations in which keys may be saved. Keys may, for example, be kept in any configuration store provided by the operating system which is available during system boot (e.g., the registry in Windows).
  • As discussed above, cache metadata may provide a mapping between disk addresses where data items are stored and the corresponding addresses on a cache device where those data items are cached.
  • Some embodiments of the invention provide a capability for storing cache metadata which significantly reduces the amount of memory required to store cache metadata during system runtime operations.
  • In addition, some embodiments provide techniques which allow cache metadata to be relied upon across power transitions or any other event which takes the cache device offline (e.g., removal of the cache device from the computer, a network outage which makes a network cache device inaccessible, etc.), so that cache contents may be reliably accessed when the computer is restarted and/or the cache device is brought back online.
  • Simply keeping cache metadata in memory (i.e., RAM) may not suffice across power transitions (e.g., standby and hibernate modes, or reboot), since the contents of system memory may not be preserved.
  • Accordingly, some embodiments of the invention provide for storing cache metadata on one or more non-volatile media during shutdown, and then restoring it upon reboot.
  • For example, cache metadata may be stored on the cache device itself, and/or on one or more separate non-volatile media.
  • In addition, some embodiments may be capable of deriving some portions of cache metadata from others, so that storing all cache metadata is not required.
  • Some embodiments may employ the techniques described in Section I. above for verifying the authenticity of cache metadata, so as to detect and prevent inadvertent or malicious modifications to metadata when the cache device goes offline (e.g., during computer shutdown, removal of the cache device from the computer, a network outage which makes a network cache device inaccessible, etc.).
  • For example, the cache manager may verify the authenticity of metadata as it is loaded to memory, using the techniques described above with reference to FIGS. 2A-2B. If the authenticity of cache metadata cannot be verified, the corresponding cache contents may be updated based on data stored on disk, evicted, or otherwise processed as described above.
  • In addition, cache metadata may be compressed to reduce the amount of metadata to save during shutdown and load at reboot. Because compression of metadata may require saving a separate piece of information (e.g., a header in the cache) containing information about the metadata, the techniques described above may be employed to verify the authenticity of this information at reboot as well.
  • Some embodiments of the invention provide techniques for storing cache metadata in a manner which greatly reduces the amount of cache metadata stored in memory at any one time, thereby reducing the amount of time required to load cache metadata to, and offload it from, memory (e.g., during runtime and startup/shutdown operations) and greatly reducing the memory "footprint" of cache metadata.
  • For cache devices having relatively large storage capacity, a significant amount of metadata may be required to manage cache contents.
  • For example, a cache device having a sixteen gigabyte storage capacity may be capable of storing up to thirty-two gigabytes of compressed data, and disk addresses may be reflected in cache metadata in "data units" representing four kilobytes of disk storage, so that roughly eight million data units (thirty-two gigabytes divided into four-kilobyte units) must be tracked.
  • Accordingly, some embodiments of the invention provide techniques designed to reduce the storage resources needed to store cache metadata, as well as the time and processing resources required to save and restore cache metadata at shutdown and startup. In some embodiments, this is accomplished by storing cache metadata in one or more hierarchical data structures (e.g., trees, multi-level arrays, etc.). Employing a hierarchical data structure may allow lower levels of the hierarchy to be stored on a non-volatile medium (e.g., the cache device) while only higher levels of the hierarchy are stored in memory.
  • Because only higher levels of the hierarchy are stored in memory, the "footprint" occupied by cache metadata in memory may be greatly reduced, even while the amount of cache metadata needed to support cache devices having significant storage capacity is stored overall.
  • However, embodiments which store only the higher levels of the hierarchy in memory may also provide for keeping some information from lower levels of the hierarchy in memory as well, so as to reduce the I/O overhead associated with repeated accesses to that information.
  • The invention is not limited to being implemented in any particular fashion.
  • For example, the cache metadata that is read from the non-volatile medium (i.e., from lower levels of the hierarchy) to perform a read operation may be "paged in" to (i.e., read from a storage medium into) memory so that it may be more quickly accessed for subsequent read requests to the same disk/cache address.
  • When the computer is later shut down and/or the cache device is brought offline, only the cache metadata stored at the higher levels of the hierarchy, and the lower-level cache metadata which was paged in to memory, may need to be saved to the non-volatile medium.
  • Some embodiments of the invention employ a B+ tree to store at least a portion of cache metadata.
  • B+ trees may employ large branching factors, and therefore reduce the number of levels in the hierarchy employed.
  • FIG. 4 depicts this example B+ tree, which includes root node 400, level two nodes (e.g., nodes 410₁ and 410₂), and a bottom level of level three nodes.
  • Each node includes two hundred elements, each separated by pointers to nodes at a lower level in the hierarchy.
  • For example, element 402 in root node 400 is delimited by pointers 401 and 403.
  • To locate a value (e.g., a cache address) stored for a given key (e.g., a disk address), the key is compared to the elements in root node 400, and the pointer on the appropriate side of an element is followed to a node at level two.
  • For example, if the key is less than element 402, pointer 401 would be followed to level two node 410₁, while if the key is greater than element 402, pointer 403 would be followed to level two node 410₂ (not shown), and so on.
  • At level two, a pointer to the left or right of an element is similarly followed to a level three node.
  • At level three, a final pointer is followed (again based on whether the key is less than or greater than elements in the node) to the value, with each pointer at level three referencing one of the eight million data units in cache metadata.
  • Thus, a B+ tree with a large branching factor provides a relatively "flat" hierarchy, with almost all nodes located at the bottom level of the hierarchy. That is, of the 40,201 total nodes in the tree, 40,000 are at the lowest level.
  • Some embodiments of the invention take advantage of this by restoring only the top two levels of the hierarchy to memory at startup, while the cache metadata in the lowest level of the hierarchy is stored on the cache device until needed (e.g., it may be loaded into memory on demand as read requests are processed, loaded lazily, etc.). Because only a portion of the hierarchical data structure is stored in memory, the cache metadata may occupy a much smaller portion of memory than would otherwise be required if the entirety of, or larger portion of, cache metadata were maintained in memory.
  • When the computer is shut down, only the data at the top two levels and the data loaded into memory during operation need to be stored on the cache device. As a result, both startup and shutdown operations may be performed quickly and efficiently.
  • To support this, some embodiments of the invention provide for pointers in nodes at one level of the hierarchy stored in memory (in the example above, level two of the hierarchy) which reference nodes at another level of the hierarchy stored on the cache device (in the example above, level three). For example, when a read request for a cached data item is received, embodiments of the invention follow pointers through one or more levels of the hierarchy stored in memory, and then to metadata at lower levels of the hierarchy stored in cache, to determine the address at which the data item is stored in cache. In some embodiments, once the cache address is determined for the data item, it may be stored in memory so that subsequent requests to read the item may be performed without having to read cache metadata from the cache device.
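  • The following sketch illustrates the general idea of keeping upper levels in memory and paging leaf metadata in from the cache device on demand; it is a simplified stand-in for the B+ tree described above, and the class name, the read_leaf interface, and the save_on_shutdown hook are assumptions:

```python
class HierarchicalCacheMetadata:
    """Sketch: upper levels of the metadata hierarchy in memory, leaf nodes on
    the cache device, paged in on demand (illustrative, not the patent's B+ tree)."""

    def __init__(self, upper_levels, cache_device):
        self.upper_levels = upper_levels   # in memory: {range start -> leaf node id}
        self.cache_device = cache_device   # assumed to expose read_leaf(node_id) -> dict
        self.paged_in = {}                 # leaf nodes read from the device so far
        self.resolved = {}                 # disk offset -> cache offset, cached after first lookup

    def lookup(self, disk_offset):
        if disk_offset in self.resolved:          # already resolved: no device I/O needed
            return self.resolved[disk_offset]
        leaf_id = self._leaf_for(disk_offset)
        if leaf_id is None:
            return None                           # offset not covered by the hierarchy
        leaf = self.paged_in.get(leaf_id)
        if leaf is None:                          # page the leaf node in from the cache device
            leaf = self.cache_device.read_leaf(leaf_id)
            self.paged_in[leaf_id] = leaf
        cache_offset = leaf.get(disk_offset)
        if cache_offset is not None:
            self.resolved[disk_offset] = cache_offset
        return cache_offset

    def _leaf_for(self, disk_offset):
        # Walk the in-memory upper levels; simplified here to choosing the
        # largest range start that does not exceed the requested offset.
        starts = [s for s in self.upper_levels if s <= disk_offset]
        return self.upper_levels[max(starts)] if starts else None

    def save_on_shutdown(self, writer):
        # At shutdown, only the in-memory upper levels and the leaf nodes that
        # were paged in during operation need to be written back.
        writer(self.upper_levels, self.paged_in, self.resolved)
```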
  • FIG. 5 depicts an example system 50 for managing cache metadata in accordance with some embodiments of the invention.
  • FIG. 5 depicts memory 500 and cache device 110, both accessible to a computer (not shown).
  • At startup, cache metadata comprising one or more levels of a hierarchical data structure, such as a B+ tree, is loaded to memory 500 in operation 505.
  • For example, the top two levels of the hierarchy may be loaded to memory 500.
  • Thereafter, when a read request is received that cannot be satisfied using the cache metadata in memory, the cache address at which the requested data item is stored is determined by accessing cache metadata stored in the level(s) of the hierarchy kept on cache device 110.
  • This cache metadata is then stored in memory 500 in operation 510, so that subsequent reads or writes to the data item may be performed without having to read cache metadata stored on cache device 110 to determine the cache address at which the data item is stored. Instead, the cache address may be read from memory, which may be done more quickly than a read to the cache device.
  • When the computer is shut down and/or the cache device is brought offline, the cache metadata stored in memory (i.e., the metadata stored in the levels of the hierarchy loaded to memory in operation 505, and any metadata written to memory in operation 510 to satisfy read requests) is written to cache device 110 in operation 515.
  • It should be appreciated that a B+ tree is but one of numerous types of data structures which may be employed to store cache metadata, and that other types of data structures (e.g., hierarchical structures such as AVL trees, red-black trees, binary search trees, B-trees, and/or other hierarchical and non-hierarchical data structures) may alternatively be employed.
  • The invention is not limited to employing any one data structure or combination of data structures to store cache metadata.
  • Some embodiments may provide for a "target amount" of cache metadata to be kept in memory at any one time.
  • The target amount may be determined in any suitable fashion.
  • For example, a target amount may be a percentage of the amount of physical memory available to a computer: if the computer has one gigabyte of memory, then two megabytes of cache metadata (as an example) may be stored in memory at any one time. Thus, when the computer is shut down, only two megabytes of cache metadata need to be written to the cache device.
  • To maintain the target amount, cache metadata may be cycled in and out of memory. For example, if the target amount of cache metadata is already stored in memory, and a read is performed which requires cache metadata to be read from the cache device, that metadata may be "paged in" to memory, and other cache metadata (e.g., that which was accessed least recently) may be removed. For example, cache metadata may be erased from memory after being written to the cache device. Alternatively, the system may determine whether the cache metadata has changed since the last time it was written out, and if not, it may simply be erased, thus eliminating the time and processing resources otherwise required to write it out again. Using the techniques described above, the small "footprint" occupied by cache metadata in memory may be maintained.
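  • As an illustration of maintaining a target amount of metadata in memory with least-recently-used eviction and write-back only for changed entries (the class and parameter names are assumptions, not the patent's design):

```python
from collections import OrderedDict

class BoundedMetadataCache:
    """Keeps at most `target` cache-metadata entries in memory, paging out the
    least recently used ones (illustrative sketch)."""

    def __init__(self, target: int, write_back):
        self.target = target
        self.write_back = write_back       # callable(key, value): persists an entry to the cache device
        self.entries = OrderedDict()       # key -> (value, dirty flag)

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as most recently used
            return self.entries[key][0]
        return None                        # caller pages the entry in and calls put()

    def put(self, key, value, dirty=True):
        self.entries[key] = (value, dirty)
        self.entries.move_to_end(key)
        while len(self.entries) > self.target:
            old_key, (old_value, old_dirty) = self.entries.popitem(last=False)
            if old_dirty:                  # write back only entries that changed
                self.write_back(old_key, old_value)
            # unchanged entries are simply discarded
```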
  • FIG. 6 depicts an example. Specifically, process 60 shown in FIG. 6 includes operations which may be performed by cache manager 100 to read cache metadata using the techniques described above.
  • In act 605, a request is received to read data stored at disk address X.
  • In act 610, a determination is made as to whether the cache address at which the data is stored can be identified from cache metadata stored in memory. If so, the process proceeds to act 615, wherein the identified cache address is determined, and then used to issue a read request to cache device 110 in act 620; process 60 then completes. If the cache address cannot be identified using cache metadata stored in memory, the process instead proceeds to act 625, wherein cache metadata is read from cache device 110 to determine the cache address at which the data is stored. Using the cache offset identified in act 625, a read request is issued to that cache offset in act 620, and process 60 then completes.
  • storing cache metadata on the cache device may not only speed up the process of loading and restoring cache metadata during startup and shutdown, but may also speed up the system operations performed during startup and shutdown.
  • shutdown and startup often involve multiple accesses to certain data items, and performing two read operations to a cache device is typically faster than performing one read operation to disk storage.
  • the data item might be accessed more quickly than if the data item were stored on disk, since the two reads to cache (i.e., one to access cache metadata to determine the item's location, and a second to access the item itself) can typically be performed more quickly than a single read to disk.
  • Computer system 700 includes input device(s) 702, output device(s) 701, processor 703, memory system 704 and storage 706, all of which are coupled, directly or indirectly, via interconnection mechanism 705, which may comprise one or more buses, switches, networks and/or any other suitable interconnection.
  • the input device(s) 702 receive(s) input from a user or machine (e.g., a human operator), and the output device(s) 701 display(s) or transmit(s) information to a user or machine (e.g., a liquid crystal display).
  • the processor 703 typically executes a computer program called an operating system (e.g., a Microsoft Windows-family operating system, or any other suitable operating system) which controls the execution of other computer programs, and provides scheduling, input/output and other device control, accounting, compilation, storage assignment, data management, memory management, communication and dataflow control.
  • the processor 703 may also execute one or more computer programs to implement various functions. These computer programs may be written in any type of computer program language, including a procedural programming language, object-oriented programming language, macro language, or combination thereof. These computer programs may be stored in storage system 706. Storage system 706 may hold information on a volatile or non-volatile medium, and may be fixed or removable. Storage system 706 is shown in greater detail in FIG. 8. Storage system 706 typically includes a computer-readable and writable nonvolatile recording medium 801, on which signals are stored that define a computer program or information to be used by the program. A medium may, for example, be a disk or flash memory.
  • the processor 703 causes data to be read from the nonvolatile recording medium 801 into a volatile memory 802 (e.g., a random access memory, or RAM) that allows for faster access to the information by the processor 703 than does the medium 801.
  • the memory 802 may be located in the storage system 706, as shown in FIG. 8, or in memory system 704, as shown in FIG. 7.
  • the processor 703 generally manipulates the data within the integrated circuit memory 704, 802 and then copies the data to the medium 801 after processing is completed.
  • a variety of mechanisms are known for managing data movement between the medium 801 and the integrated circuit memory element 704, 802, and the invention is not limited thereto.
  • the invention is also not limited to a particular memory system 704 or storage system 706.
  • embodiments of the invention are also not limited to employing a cache manager component which is implemented as a driver in the I/O stack of an operating system. Any suitable component or combination of components, each of which may be implemented by an operating system or one or more standalone components, may alternatively or additionally be employed. The invention is not limited to any particular implementation.
  • the above-described embodiments of the present invention can be implemented in any of numerous ways.
  • the above-discussed functionality can be implemented using hardware, software or a combination thereof.
  • the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
  • any component or collection of components that perform the functions described herein can be generically considered as one or more controllers that control the above-discussed functions.
  • the one or more controllers can be implemented in numerous ways, such as with dedicated hardware, or by employing one or more processors that are programmed using microcode or software to perform the functions recited above.
  • a controller stores or provides data for system operation
  • data may be stored in a central repository, in a plurality of repositories, or a combination thereof.
  • a (client or server) computer may be embodied in any of a number of forms, such as a rack-mounted computer, desktop computer, laptop computer, tablet computer, or other type of computer.
  • a (client or server) computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
  • a (client or server) computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
  • Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet.
  • networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
  • the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms.
  • software may be written using any of a number of suitable programming languages and/or conventional programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
  • the invention may be embodied as a storage medium (or multiple storage media) (e.g., a computer memory, one or more floppy disks, compact disks, optical disks, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other computer storage media) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above.
  • the storage medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above.
  • program or “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
  • Computer-executable instructions may be provided in many forms, such as program modules, executed by one or more computers or other devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
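As a concrete illustration of the metadata paging scheme sketched in the bullets above, the following fragment keeps only a bounded "target amount" of disk-to-cache mappings in memory and pages others in from the cache device on demand, evicting the least recently used entry and writing it back only if it has changed. This is a minimal sketch under stated assumptions, not the described implementation; all names (CacheMetadataStore, target_entries, etc.) are hypothetical, and the on-device metadata is simulated with an ordinary dictionary.

```python
from collections import OrderedDict

class CacheMetadataStore:
    """Keeps only a bounded "target amount" of disk->cache mappings in memory;
    the full mapping lives on the cache device (simulated here by a dict)."""

    def __init__(self, on_device_metadata, target_entries=4):
        self.on_device = on_device_metadata          # full mapping stored on the cache device
        self.in_memory = OrderedDict()               # subset paged into memory (LRU order)
        self.dirty = set()                           # in-memory entries changed since last write-out
        self.target_entries = target_entries

    def lookup(self, disk_addr):
        """Return the cache address for disk_addr, paging metadata in if needed."""
        if disk_addr in self.in_memory:              # mapping already in memory (acts 610/615)
            self.in_memory.move_to_end(disk_addr)
            return self.in_memory[disk_addr]
        cache_addr = self.on_device.get(disk_addr)   # read metadata from the cache device (act 625)
        if cache_addr is None:
            return None                              # not cached; caller reads from disk instead
        self._page_in(disk_addr, cache_addr)
        return cache_addr

    def _page_in(self, disk_addr, cache_addr):
        if len(self.in_memory) >= self.target_entries:
            old_addr, old_val = self.in_memory.popitem(last=False)   # evict least recently used
            if old_addr in self.dirty:                               # write back only if changed
                self.on_device[old_addr] = old_val
                self.dirty.discard(old_addr)
        self.in_memory[disk_addr] = cache_addr

# Example: only a few mappings are resident in memory at any one time.
store = CacheMetadataStore({x: 1000 + x for x in range(100)}, target_entries=4)
for x in (5, 9, 5, 42, 7, 13):
    print(x, "->", store.lookup(x))
```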

Abstract

Techniques are provided for managing cache metadata that maps addresses on a storage medium (e.g., disk) to corresponding addresses on a cache device. Cache metadata may be stored in a hierarchical data structure. Only a subset of the levels of the hierarchy may be loaded to memory, thus reducing the cache metadata's memory "footprint" and expediting its restoration during startup. Startup may be further expedited by using cache metadata to perform operations associated with reboot. As requests to read data from storage are processed using cache metadata to identify the address(es) at which the data are stored in cache, the identified addresses may be stored in memory. When the computer is later shut down, instead of having to transfer the entirety of cache metadata from memory to storage, only the subset of the hierarchy levels and/or identified addresses may be transferred (e.g., to the cache device), expediting shutdown.

Description

MANAGING CACHE DATA AND METADATA
FIELD OF THE INVENTION
This invention relates to techniques for employing non-volatile memory devices, such as removable and non-removable non-volatile random access memory (NVRAM) devices.
BACKGROUND
Some conventional operating systems provide a capability to employ a non-volatile memory device (i.e., a peripheral device operable to provide auxiliary storage and/or memory to a computer, such as a flash memory USB drive) as a block or file-level cache for slower storage devices (e.g., a disk storage medium, or one or more storage devices accessible via a network), to improve the performance of the operating system and/or applications. In this respect, because read and write operations can be performed significantly faster from or to a non-volatile memory device (hereinafter referred to as a "cache device" for simplicity) than from or to a slower storage device, using a cache device to cache data stored on such a slower device offers opportunities to significantly improve the speed of input/output (I/O) operations of operating systems and/or applications. To this end, the Microsoft Windows Vista operating system, produced by Microsoft Corporation of Redmond, WA, includes a feature known as ReadyBoost which allows users to employ cache devices to cache data also residing in a slower storage device (referred to hereinafter as "disk storage" or "disk" for convenience, although it should be understood that these terms refer generally to any storage mechanism(s) and/or device(s) to which I/O is typically performed more slowly than a cache device, including storage devices accessible via a network).
Employing a cache device to cache data stored on disk may be accomplished using a cache manager component, which in some implementations is a driver implemented in the operating system's I/O stack. FIGS. 1A-1B depict example high-level processes 10A-10B whereby a cache manager component 100 manages the caching of data to cache device 110. Cache device 110 may be coupled, using wired and/or wireless communications infrastructure and protocol(s), to a computer (not shown) on which cache manager 100 resides. For example, cache device 110 may be removable from the computer (e.g., comprise a flash memory USB drive), non-removable and/or accessible to the computer via one or more wired and/or wireless networks.
At the start of the process 10A (FIG. 1A), a write request is received by cache manager 100 specifying that data should be written to address X on disk storage (i.e., cached volume 120). Cache manager 100 processes the request by causing the data to be written to address X on cached volume 120 in operation 101, and also to address Y on cache device 110 in operation 102. Process 10B (FIG. 1B) includes operations performed thereafter when a read request is received specifying that the data stored at address X on cached volume 120 should be read. Cache manager 100 determines that the data is cached on cache device 110 at address Y, and causes the data at address Y to be read in operation 103. The data is then served from the cache device to satisfy the read request in operation 104.
The cache manager maintains a mapping of disk addresses (e.g., address X) to corresponding cache addresses (e.g., address Y) in metadata, and this "cache metadata" is usually employed in reading from or writing to the cache device. Typically, cache metadata is maintained in memory and accessed by the cache manager when I/O requests are received. As such, when a read request is received by the cache manager which is directed to disk offset X, the cache manager uses the cache metadata to determine that the data is also stored at cache offset Y, and to satisfy the request by causing the data to be read from cache offset Y rather than disk offset X. When a write request is received by the cache manager which is directed to disk offset X, the cache manager employs the cache metadata to determine whether the data at that disk address is also stored in cache. If so (e.g., if the data is stored at cache address Y), the cache manager may cause the data to be written to the appropriate address in cache, or evict the cache contents at that address. If not, the cache manager may cause the data to be written to cache, and may update the cache metadata so that future reads to disk offset X may instead be serviced from the data stored on cache.
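The following sketch models the disk-address-to-cache-address mapping just described: writes go to disk and to the cache, and reads are served from the cache when the metadata contains a mapping. It is illustrative only; the names (CacheManager, read, write) are hypothetical, and Python dictionaries stand in for the disk and the cache device.

```python
class CacheManager:
    """Minimal model of the mapping between disk offsets and cache offsets."""

    def __init__(self, disk, cache):
        self.disk = disk            # dict: disk offset -> data
        self.cache = cache          # dict: cache offset -> data
        self.metadata = {}          # cache metadata: disk offset -> cache offset
        self.next_cache_offset = 0

    def write(self, disk_offset, data):
        # Write through to disk, and also place the data in cache.
        self.disk[disk_offset] = data
        cache_offset = self.metadata.get(disk_offset)
        if cache_offset is None:
            cache_offset = self.next_cache_offset
            self.next_cache_offset += 1
            self.metadata[disk_offset] = cache_offset   # future reads of X are served from Y
        self.cache[cache_offset] = data

    def read(self, disk_offset):
        cache_offset = self.metadata.get(disk_offset)
        if cache_offset is not None:
            return self.cache[cache_offset]             # served from the faster cache device
        return self.disk.get(disk_offset)               # fall back to disk storage

mgr = CacheManager(disk={}, cache={})
mgr.write(disk_offset=7, data=b"hello")
print(mgr.read(7))    # b'hello', served from cache
```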
Conventional operating systems are capable of supporting cache devices with relatively limited storage capacity. For example, the ReadyBoost feature of the Windows Vista operating system supports cache devices with up to a four gigabyte storage capacity. (At the time Windows Vista was released, the maximum storage capacity of cache devices was approximately two gigabytes). The storage capacity of cache devices has grown rapidly in recent years, with some cache devices providing a storage capacity of up to sixteen gigabytes, which may store the equivalent of thirty-two gigabytes of data when compressed.
SUMMARY OF THE INVENTION
Applicants have appreciated that cache devices having relatively larger storage capacity offer significant opportunity to improve the speed of I/O operations performed by operating systems and applications. Applicants have also appreciated that one reason conventional operating systems support cache devices with only relatively limited storage capacity is that cache contents must be repopulated when certain types of power transitions (e.g., standby, hibernate (or equivalent modes used by non-Microsoft Windows operating systems), or reboot) occur. With cache devices that have relatively larger storage capacity, repopulating cache contents can take considerable time and consume significant processing resources. As an example, an eight gigabyte flash memory device, which may hold up to sixteen gigabytes of compressed data, may take up to thirty minutes to repopulate using background I/O from disk at approximately ten megabytes per second. This not only effectively negates any performance benefits that might have been gained by employing the cache device, but indeed may significantly slow system operations. One reason cache contents must be repopulated across certain power transitions is that there is no way to reliably ensure that cache contents accurately represent the contents of disk storage when the computer is restarted, because the contents of the cache device and/or the disk may have been modified during the power transition. For example, when a first computer is shut down, a hacker could disconnect a removable cache device, connect it to another computer, and modify the cache contents, so that if the device were then reconnected to the first computer, incorrect data (hereinafter referred to as "inauthentic" data) could be served from the cache device to satisfy I/O requests. In addition to a hacker's malicious acts, cache contents might also become corrupted during a power transition due to a hardware failure of the computer or cache device. Cache contents may also become "stale" during a power transition because data on disk was updated during the transition, so that when the computer is restarted, the cache contents may no longer accurately represent disk contents. For example, after shutdown a user might boot the disk into another operating system that does not recognize the cache device and modify data stored on the disk that is cached on the cache device, so that when the computer is restarted the cache contents no longer reflect what is stored on the disk. In another example, certain operations on a computer during shutdown might occur after the cache device is rendered inaccessible to the operating system (e.g., after the cache device is turned off), so that any writes to disk performed by the operating system subsequent to this point in time may not be accurately reflected by cache contents. Any of numerous events may cause cache contents to become stale across a power transition.
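As a rough check of the repopulation estimate above (assuming sixteen gigabytes of compressed data, 1 GB = 1,024 MB, and a sustained ten megabytes per second of background I/O):

$$\frac{16 \times 1024\ \text{MB}}{10\ \text{MB/s}} \approx 1638\ \text{s} \approx 27\ \text{minutes}$$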
Embodiments of the present invention provide techniques for managing these and other concerns, so that cache contents may be relied upon as accurately reflecting data stored on disk across a power transition. For example, some embodiments of the invention provide techniques for verifying that cache contents remain authentic across a power transition. In addition, some embodiments provide techniques for reliably ensuring that cache contents do not become stale across a power transition. Further, some embodiments provide techniques for managing cache metadata across power transitions as well as during normal ("steady state") operations, ensuring that the cache metadata may be efficiently accessed and reliably saved and restored when a power transition occurs.
By providing techniques which ensure that cache contents can be relied upon as accurately reflecting data stored on disk across power transitions, some embodiments of the invention may enable a cache device with substantial storage capacity to be employed to significantly speed up I/O operations performed by the operating system and/or applications. The increased speed of I/O operations may not only expedite normal, "steady state" operations of the computer, but also significantly speed up operations performed during boot, so that the computer is ready for use much more quickly. In some embodiments of the invention, a method is provided for operating a computer comprising a memory and having coupled thereto a storage medium and a cache device, the storage medium storing a plurality of data items each at respective addresses, each of the plurality of data items also being stored at a corresponding address on the cache device, cache metadata accessible to the computer providing a mapping between the address on the storage medium and the corresponding address on the cache device at which each data item is stored. The method comprises acts of: (A) storing the cache metadata in a hierarchical data structure comprising a plurality of hierarchy levels; and (B) loading only a subset of the plurality of hierarchy levels to the memory.
Other embodiments provide at least one computer-readable storage medium having instructions encoded thereon which, when executed by a computer comprising a memory and having coupled thereto disk storage and a cache device, the disk storage storing a plurality of data items each at respective addresses, each of the plurality of data items also being stored at a corresponding address on the cache device, cache metadata accessible to the computer providing a mapping between the address on the disk storage and the corresponding address on the cache device at which each data item is stored, perform a method comprising acts of: (A) storing the cache metadata, in the cache device, in a hierarchical data structure comprising a plurality of hierarchy levels; (B) initiating a reboot of the computer; (C) upon initiating the reboot of the computer, loading only a subset of the plurality of hierarchy levels to the memory; (D) receiving a request to read a data item stored at an address on the storage medium; (E) accessing a first portion of the cache metadata to identify a corresponding address at which the data item is stored on the cache device; and (F) storing the first portion of the cache metadata in the memory.
Other embodiments provide a computer system, comprising: a memory; a storage medium storing a plurality of data items at respective addresses; a cache device also storing the plurality of data items at corresponding addresses and cache metadata providing a mapping between the address on the storage medium and the corresponding address on the cache device at which each data item is stored, the cache metadata being stored in a hierarchical data structure comprising a plurality of hierarchy levels; at least one processor programmed to: upon initiating a reboot of the computer, load only a subset of the plurality of hierarchy levels to the memory; process requests to read data items stored at respective addresses on the storage medium by using the cache metadata to identify corresponding addresses at which the data items are stored in the cache device and by storing identified corresponding addresses in the memory; and process a command to shut down the computer by transferring the subset of the plurality of hierarchy levels and the identified corresponding addresses from the memory to the cache device.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A-1B are block diagrams depicting techniques for writing to and reading from a cache device, in accordance with the prior art; FIGS. 2A-2B depict exemplary techniques for writing to and reading from a cache device in accordance with some embodiments of the invention;
FIG. 3 is a block diagram depicting an exemplary technique for ensuring that cache data accurately reflects data stored on disk after a power transition, in accordance with some embodiments of the invention;
FIG. 4 is a block diagram depicting an exemplary technique for storing cache metadata, in accordance with some embodiments of the invention;
FIG. 5 is a block diagram depicting exemplary storage operations for cache metadata, in accordance with some embodiments of the invention; FIG. 6 is a flowchart depicting an exemplary technique for servicing read requests using a cache device, in accordance with some embodiments of the invention;
FIG. 7 is a block diagram depicting an example computer which may be used to implement aspects of the invention; and
FIG. 8 is a block diagram depicting an example computer memory on which instructions implementing aspects of the invention may be recorded.
DETAILED DESCRIPTION
Some embodiments of the invention provide techniques for ensuring that cache contents accurately reflect the contents of disk storage across a power transition. For example, some embodiments provide a capability for ensuring that cache contents remain authentic and/or have not become stale across the power transition. Further, some embodiments provide techniques for managing cache metadata, to ensure that metadata has not been tampered with during a power transition. In addition, some embodiments provide a capability for storing cache metadata which may improve the efficiency with which both power transitions and normal operations may be performed. The sections that follow describe these embodiments in detail.
I. Verifying Cache Data Authenticity
When a computer experiences a power transition (e.g., is taken into standby or hibernate mode, or is rebooted), a cache device may be disconnected from the computer, and its contents may be altered (e.g., by a malicious hacker). For example, when a computer is brought into standby or hibernate mode, a removable cache device such as a flash memory drive may be disconnected from the computer and its contents modified. Even non-removable devices such as internal NVRAM devices may be disconnected and their contents changed when the operating system is rebooted (i.e., reloaded, thereby restarting the computer). As a result, when the cache device is reconnected to the computer, it may store different information than the user believes (i.e., the data stored in cache may not be "authentic"). If inauthentic data is served from cache to satisfy an input/output (I/O) request, the computer's operation could be negatively affected. Some embodiments of the invention provide techniques for detecting modifications that occur to cache contents during a power transition, to ensure that I/O requests are not satisfied using inauthentic data from cache. In some embodiments, a capability is provided to detect any "offline modifications" which occur to cache contents during a power transition which render them inauthentic. Some embodiments provide this capability using a representation of at least a portion of the data. For example, a representation may be calculated or derived from at least a portion of the data and/or other information in a predetermined manner. The representation may be generated a first time when the data is written to cache, and stored at one or more locations. For example, the representation may be written to cache along with the data, or to some other location(s). When stored to cache, the representation may be stored in a manner which associates the representation with the data (e.g., it may be written to a cache address adjacent that to which the data is written, written to cache metadata associated with the data, and/or associated in some other fashion). When the data is subsequently read from cache, the representation may also be retrieved. The representation may be re-generated in the predetermined manner, and the regenerated representation may be compared to the retrieved representation. If the representations match, the data retrieved from cache is determined to be authentic, and served to satisfy the read request. If not, a request is issued to read the data instead from disk storage to satisfy the read request, and the inauthentic data stored on cache may be evicted (e.g., deleted).
FIGS. 2A-2B depict this process in greater detail. In particular, process 20A shown in FIG. 2A includes operations performed when data is written to cache, and process 20B shown in FIG. 2B includes operations performed subsequently when the data is read from cache. Processes 20A-20B each include operations performed by cached volume 120 (on the left side of each figure), cache manager 100 (in the middle) and cache device 110 (on the right). At the start of process 20A (FIG. 2A), a write request directed to address X on disk storage is received by cache manager 100 in act 205. In act 210, cache manager 100 employs cache metadata (not shown) to determine an address Y on cache device 110 to which the data should also be written. Cache manager also generates the representation of at least a portion of the data. Embodiments of the invention may generate this representation using any suitable technique. In some embodiments, one or more cryptographic authentication techniques may be employed to generate the representation. For example, in some embodiments, the representation may comprise a message authentication code (MAC) generated from the data and a set of secret keys and per-data item sequence numbers. However, the invention is not limited to such an implementation, as any suitable technique for generating the representation may be employed. For example, cryptographic authentication techniques need not be employed. As an example, a strong hash and/or cyclic redundancy code (CRC) might alternatively be used to represent data, and may be generated from individual data items stored to cache, or for one or more groups of data items. In this respect, Applicants have appreciated that if the goal of verifying data authenticity were to merely detect instances of hardware corruption (i.e., hacking of data were not a concern), then using a CRC may be sufficient, and may consume less processing resources than generating a MAC for each data item. However, if the goal is to prevent a hacker or malicious actor from modifying cache contents, then a cryptographic solution may be preferable, so that a representation such as a MAC may be used. The invention is not limited to any particular implementation, as any suitable technique may be employed.
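The passage above leaves the choice of representation open. The sketch below uses HMAC-SHA256 as a stand-in for a MAC and CRC32 as the cheaper, non-cryptographic alternative; these particular algorithms are assumptions made for illustration, not requirements of the described embodiments.

```python
import hashlib
import hmac
import os
import zlib

def make_mac(key: bytes, data: bytes) -> bytes:
    """Cryptographic representation (MAC) of a cached data item."""
    return hmac.new(key, data, hashlib.sha256).digest()

def make_crc(data: bytes) -> int:
    """Cheaper, non-cryptographic representation; detects corruption but not tampering."""
    return zlib.crc32(data)

key = os.urandom(32)               # secret key kept off the cache device across power transitions
data = b"data item cached at cache address Y"

representation = make_mac(key, data)      # stored alongside the data (or in cache metadata)
print(len(representation), "byte MAC")
print("CRC32:", make_crc(data))
```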
In act 215, cache manager 100 issues the request to cache device 110 to write the data to cache address Y. Cache manager also issues a request to cache device 110 to write the representation. For example, cache manager 100 may specify that the representation should be written to one or more locations adjacent to cache address Y, or be stored in cache metadata for the data, and/or using any other technique. As discussed above, the invention is not limited to writing the representation to any particular location (e.g., it need not be written to cache device 110). If written to cache, the representation may be associated with the data in any manner desired.
In act 220, cache device 110 receives the request and processes it by writing the data and the representation in act 225. In act 230, cache manager 100 issues a corresponding request to cached volume 120 to write the data to disk address X. Cached volume 120 receives this request in act 235 and processes it by writing the data to address X in act 240. Although shown in FIG. 2A as being performed subsequent to the write to cache in acts 215-225, acts 230-240 may be performed in parallel with acts 215-225, or at any other suitable time(s), as the invention is not limited to any particular implementation. Process 20A then completes.
Process 20B (FIG. 2B) includes operations performed to read the data stored to cache. In act 245, cache manager 100 receives a request to read the data stored at address X on cached volume 120. Using cache metadata (not shown), cache manager 100 determines that the data is stored at address Y on cache device 110 in act 250. In act 255, cache manager 100 issues a read request to cache device 110 to retrieve both the data stored at address Y and the associated representation. The request is received by cache device 110 in act 260 and processed in act 265, whereupon cache device 110 returns the results to cache manager 100.
In act 270, cache manager 100 determines whether the data retrieved from cache can be verified. In some embodiments, this is done by re-generating the representation of the data, and comparing the regenerated representation with the representation originally generated in act 215. For example, act 270 may include regenerating a MAC or CRC for the data, and comparing it to the representation retrieved from cache in act 265.
If it is determined in act 270 that the representation can be verified, the process proceeds to act 275, wherein the data retrieved from cache device 110 is served to satisfy the read request, and process 20B then completes. If it is determined in act 270 that the representation can not be verified, the process proceeds to act 280, wherein cache manager 100 issues a request to cache device 110 to evict (e.g., erase or otherwise make inaccessible) the data stored at address Y. Cache manager 100 then issues a request to cached volume 120 to read the data from address X on disk in act 285. This request is received in act 290 and processed in act 295, whereupon the data is returned to cache manager 100. The data read from address X is then served to satisfy the read request in act 299. Process 20B then completes.
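A condensed sketch of the read-path verification in acts 245-299 follows. The function and variable names are hypothetical, HMAC-SHA256 again stands in for the unspecified MAC, and dictionaries stand in for the cache device and cached volume.

```python
import hashlib
import hmac

def verify_and_read(disk_addr, cache_addr, cache, disk, key):
    """Serve a read from cache only if the stored representation verifies;
    otherwise evict the cache entry and fall back to disk (acts 270-299)."""
    entry = cache.get(cache_addr)
    if entry is not None:
        data, stored_rep = entry
        regenerated = hmac.new(key, data, hashlib.sha256).digest()
        if hmac.compare_digest(regenerated, stored_rep):
            return data                      # authentic: serve from cache (act 275)
        del cache[cache_addr]                # inauthentic: evict (act 280)
    return disk[disk_addr]                   # satisfy the request from disk (acts 285-299)

key = b"\x01" * 32
disk = {"X": b"authentic contents"}
good = hmac.new(key, disk["X"], hashlib.sha256).digest()
cache = {"Y": (b"tampered contents", good)}  # data modified offline; representation no longer matches

print(verify_and_read("X", "Y", cache, disk, key))   # b'authentic contents', served from disk
```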
If a cryptographic solution for verifying data authenticity is employed, any key(s) used to generate a representation may be written to locations other than the cache device for the duration of the power transition, to prevent a hacker from gaining access to the keys to regenerate representations for altered data items. For example, in some embodiments, keys may be stored in disk storage (e.g., when the computer is shut down) to prevent unauthorized access. However, the invention is not limited to such an implementation, as keys need not be stored, and if stored, may reside in any suitable location. For example, stored keys may be placed in any configuration store provided by the operating system that is available during system boot (e.g., the system registry in Windows), or re-generated based on some user input (e.g., a password) so that no key storage is necessary.
It should be appreciated that the above-described embodiments for verifying the authenticity of a data item stored on a cache device are merely examples, and that authenticity may be verified using any suitable technique. For example, data item authenticity need not be verified by generating a representation of at least a portion of the data item when the data item is written which is later re-generated when the data item is read. Any suitable technique which reliably ensures that a data item read from cache is authentic and matches the data item previously written to cache may be employed. The invention is not limited to any particular implementation.
II. Preventing Cache Data Staleness
As discussed above, conventional operating systems are incapable of detecting when write operations are performed to data items stored on disk during power transitions which render cache contents stale. Some embodiments of the invention provide mechanisms for detecting when these "offline writes" occur, thereby ensuring that cache contents accurately reflect data stored on disk after a power transition occurs.
With some operating systems (e.g., the Windows family of operating systems offered by Microsoft Corporation), the semantics of certain power transitions (e.g., standby and hibernate modes) are such that data on non-removable storage devices (e.g., disk storage) can not be modified during a power transition. As such, the cache contents corresponding to data on such non-removable media generally do not become stale. However, when the computer is shut down, a number of things can happen which make it possible for data on disk to be modified. For example, a user may boot the disk into another operating system on that computer, or connect the disk to another computer, and modify data stored on disk. In addition, as discussed above, the mechanics of shutdown of many conventional operating systems are such that at some point during the shutdown, a cache device is turned off and is no longer accessible by the operating system, but the operating system may continue to access the disk. As such, the operating system may update data items on disk which are cached on the cache device. Because the cache device has been turned off, the operating system has no way of also updating these cache contents, so that they are rendered stale.
To manage these and other occurrences, some embodiments of the invention provide techniques for detecting modifications to data stored on disk after a shutdown is initiated, so that cache contents which are rendered stale by such modifications may be updated, evicted from cache, or otherwise handled.
To detect writes which are performed to disk storage during shutdown operations occurring after a cache device is shut off, some embodiments of the invention employ a write recorder component. A write recorder component may, for example, be implemented as a driver in the operating system's I/O path, although the invention is not limited to such an implementation. For example, a write recorder component may be hardware-based. As an example, disk storage hardware might provide one or more interfaces that provide the capability to identify the set of modifications that occurred during a certain time period, or whether modifications occurred during a certain time period. For example, disk storage hardware may provide a spin-up/power up/boot counter which may be employed to deduce that at least some stored data items have been updated, in which case cache contents corresponding to the data stored on disk may be evicted (this should not occur frequently, so employing the cache device should still deliver substantial benefits). The invention is not limited to any particular implementation.
In some embodiments, the write recorder component is configured to become active when shutdown is initiated, and to keep track of all writes performed to disk storage until shutdown completes. As a result, when the computer is later restarted, these writes may be applied to cache contents. For example, when the computer is restarted and disk volumes come online, the cache manager may then be started, and may begin tracking writes to disk. The cache manager may query the write recorder component to determine the offline writes that occurred after the cache device was shut off, merge these writes with those which the cache manager tracked during startup, and apply the merged set of writes to cache contents. Applying writes to cache contents may include, for example, updating the cache contents corresponding to the data on disk to which the writes were directed (e.g., performing the same write operations to these cache contents), evicting these cache contents, a combination of the two (e.g., applying write operations to certain cache contents and evicting others), or performing some other operation(s). After offline writes are applied to cache contents, the write recorder component may be shut down, and the cache device may begin servicing I/O requests.
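A simplified model of the merge-and-apply step described above is sketched below. The helper names (merge_writes, apply_to_cache) are hypothetical, and the choice of which addresses to evict versus update is passed in explicitly rather than decided by any policy from the text.

```python
def merge_writes(offline_writes, startup_writes):
    """Combine per-address writes, keeping the latest update to each disk address.
    Each argument is an ordered list of (disk_addr, data) pairs."""
    merged = {}
    for addr, data in offline_writes + startup_writes:   # startup writes come later, so they win
        merged[addr] = data
    return merged

def apply_to_cache(merged, metadata, cache, evict_addrs=()):
    """Apply merged writes to cache contents: update mapped entries, evict where requested."""
    for addr, data in merged.items():
        cache_addr = metadata.get(addr)
        if cache_addr is None:
            continue                          # address not cached; nothing to do
        if addr in evict_addrs:
            cache.pop(cache_addr, None)       # evict stale contents
            metadata.pop(addr, None)
        else:
            cache[cache_addr] = data          # update cached copy to reflect the write

metadata = {10: 0, 20: 1}
cache = {0: b"old-10", 1: b"old-20"}
offline = [(10, b"written during shutdown")]  # logged by the write recorder
startup = [(20, b"written after restart")]    # tracked by the cache manager at boot
apply_to_cache(merge_writes(offline, startup), metadata, cache, evict_addrs={10})
print(cache, metadata)
```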
FIG. 3 depicts an example process 30 for tracking offline writes and applying these writes to cache contents. In particular, process 30 includes operations performed by cache manager 100, write recorder 300, cache device 110 and cached volume 120 during a computer's shutdown and subsequent reboot.
In act 305, which occurs during computer shutdown, cache manager 100 activates write recorder 300 and supplies to it a "persistence identifier" which identifies the set (i.e., generation) of write operations to be tracked by the write recorder. (Examples of the uses for a persistence identifier are described in detail below.) In act 310, cache manager 100 writes the persistence identifier, as well as cache metadata stored in memory, to cache device 110. At this point in the shutdown process, cache device 110 is turned off and becomes inaccessible to cache manager 100.
In act 315, write recorder 300 writes the persistence identifier passed to it in act 305 to cached volume 120, and begins tracking any write operations performed to cached volume 120 during shutdown. For example, write recorder 300 may create a log file, or one or more other data structures, on cached volume 120 or at some other location(s) to indicate the addresses on disk to which write operations are performed, and/or the data written to those addresses. At the completion of act 315, the computer's shutdown operations have finished. Thereafter, the computer is restarted. As part of the boot process, cached volume 120 is brought online, and write recorder 300 and cache manager 100 are restarted. Cache manager 100 may then begin tracking write operations performed to cached volume 120. For example, cache manager 100 may create a log file and store it on cache device 110, cached volume 120, and/or the computer's memory (not shown in FIG. 3). In act 320, write recorder 300 reads the volume changes logged in act 315, as well as the persistence identifier written to cached volume 120 in act 315. The volume changes and persistence identifier are then passed to cache manager 100 in act 325.
It should be appreciated that write recorder 300 may be incapable of tracking all writes to disk after cache device 110 has been turned off. For example, hardware data corruption, untimely power failures and/or problems in writing the log file may render write recorder 300 incapable of tracking all offline writes performed to a disk volume. In such cases, write recorder 300 may indicate to cache manager 100 in act 325 that it can not reliably determine that the log is a complete and accurate record of all offline writes performed. If this occurs, cache manager 100 may evict the entire cache contents, or a portion thereof (e.g., corresponding to a particular disk volume for which the write recorder could not track all write operations), as potentially being unreliable. The remainder of the description of FIG. 3 assumes that write recorder 300 is capable of tracking all offline writes.
In act 330, cache manager 100 reads the cache metadata and persistence identifier from cache device 110 into memory. Cache manager 100 determines whether the persistence identifier can be verified (this is described further below). If not, cache manager 100 may evict the entire contents of cache device 110, or a portion thereof (e.g., corresponding to a particular disk volume for which the persistence identifier could not be verified). If the persistence identifier can be verified, cache manager 100 merges any write operations performed to disk storage since the computer was restarted with any write operations tracked by write recorder 300. For example, if one or more logs indicate the data written to each address on disk, cache manager 100 may select the latest update performed to each address and write it to memory.
In some embodiments, write recorder 300 may be configured to continue recording writes after the computer is restarted, so that cache manager 100 need not record writes performed after that point and merge them with writes tracked by write recorder 300. Instead, write recorder 300 may simply provide a record of all writes to cache manager 100.
Using the cache metadata read in act 330, cache manager 100 then applies the set of writes to the contents of cache device 110 in act 335. As described above, applying the writes may include evicting cache contents, updating cache contents, doing both, or performing some other operation(s). For example, offline writes tracked by write recorder 300 in act 315 may be applied by evicting the corresponding cache contents, while the writes tracked by cache manager 100 since the computer was restarted may be applied by updating the corresponding cache contents to reflect the writes. Applying write operations to cache contents may be performed in any suitable way, as the invention is not limited to any particular implementation.
At the completion of act 335, the process of FIG. 3 completes. It should be appreciated that the invention is not limited to employing a write recorder component that is configured to become active when shutdown is initiated, as write operations not occurring during shutdown may also, or alternatively, be tracked. For example, in some implementations, a cache device may be susceptible to becoming inaccessible for periods of time. For example, if the cache device is accessed via one or more networks, connectivity could be lost, or if the cache device is removable from the computer, a surprise (e.g., unintentional) removal could occur. As a result, some embodiments may employ a write recorder to track all (or a portion of) writes performed to disk, not just those occurring during shutdown, and a cache device which is configured to periodically capture cache "snapshots" while still online. As such, if the cache becomes inaccessible for some period of time and is later reconnected, the latest cache snapshot can be updated using write operations tracked by the write recorder, rather than having to be completely rebuilt.
It should also be appreciated that while the example process 30 of FIG. 3 may detect offline writes performed by the operating system during shutdown, other measures may be needed to detect offline writes performed to disk after shutdown completes. Such writes may occur, for example, when a user boots the disk into another operating system after shutdown, or removes the disk from the computer after shutdown and connects it to another computer, and then modifies data stored on disk.
Recognizing the difficulties associated with attempting to track offline writes occurring after shutdown (e.g., by another operating system), some embodiments of the invention instead try to prevent them from occurring. For example, some embodiments attempt to make a particular disk volume inaccessible to operating systems that do not provide a write recorder component after shutdown. This may be accomplished in any of numerous ways.
In some embodiments, write recorder 300 may mark a disk volume in such a way that it becomes un-mountable by operating systems that do not provide a write recorder component to track offline writes. For example, write recorder 300 may modify the volume identifier that indicates the type of file system used on the volume. In this respect, those skilled in the art will recognize that a volume identifier enables an operating system to identify the type of file system used to store data on the volume, thereby enabling the operating system to understand the structure of data stored on the volume, where to find files, etc. For example, if a volume identifier indicates that an NT File System (NTFS) file system was used to store data on the volume, then another operating system attempting to mount the volume would understand that an NTFS file system would be needed to parse and access the data thereon. If the volume identifier provided no indication of the type of file system used to store data on the volume, most operating systems would fail to mount the volume, as there would be no reliable way to understand the structure of data stored thereon. As such, some embodiments of the invention modify the volume identifier of a disk volume to make it inaccessible, thereby preventing a user from booting the disk volume into another operating system and making offline changes to data stored on the volume. Recognizing that some operating systems may be capable of identifying the type of file system used to store data on a volume even if the volume identifier were modified, some embodiments of the invention provide a mechanism for detecting when an operating system mounts the volume. In this respect, to mount a disk volume, any operating system would need to update the volume identifier (e.g., to indicate that a NTFS file system was employed to store data on the volume) to allow data thereon to be accessed. Any such update would be easily detectable upon reboot. If such an update were detected, some embodiments of the invention may assume that the contents of the volume had been modified since the last shutdown, and evict the cache contents corresponding to data stored on the volume. Some embodiments of the invention provide a capability whereby a disk volume may be booted into another operating system which also employs a write recorder component. For example, if a disk were removed from one computer running an operating system that provides a write recorder component and booted into another operating system that provides a write recorder component, the other operating system might be configured to recognize that a changed volume identifier indicates that the volume may be cached. As a result, the other operating system may add to a log of offline writes (e.g., stored on the volume) created by the first operating system.
The above-described embodiments designed to make a disk volume un-mountable by certain operating systems may pose problems for certain applications which rely on the volume identifier to perform certain functions (e.g., backup applications). With these applications, if the volume identifier were changed, the volume may be unrecognizable and thus not backed up. Accordingly, some embodiments of the invention provide a mechanism for determining whether a file system was mounted after shutdown. If so, it is assumed that changes were made to data in the file system, and all cache contents corresponding to data in the file system may be evicted.
Some embodiments may detect the mounting of a file system after shutdown by placing the file system log at shutdown in a state which would require any operating system attempting to mount the file system to modify the log in some way (e.g., change its location, add a new entry, etc.). For example, write recorder 300 may note as part of the task of logging offline writes the location and/or content of the file system log when the file system is dismounted (e.g., in the log itself). Because any operating system attempting to mount the file system would have to change the log (e.g., if the file system were an NTFS file system, an operating system attempting to mount the file system would add an entry to the log), if the log has not changed upon reboot, it is assumed that the file system was not mounted by another operating system during the power transition, so that cache contents corresponding to data stored in the file system have not been rendered stale. Conversely, if the log has been changed in some way (e.g., its location has changed, an entry has been added, etc.) then it is assumed that the file system was mounted by another operating system, and that data stored therein has changed, rendering the cache contents corresponding to data stored in the file system stale. As such, these cache contents may be evicted. In addition to providing mechanisms to prevent offline writes, some embodiments of the invention provide a capability to manage inconsistent generations of cache contents. Inconsistent generations of cache contents may be created for any of numerous reasons. One example may occur when first and second computers, having first and second cache devices connected thereto, employ techniques described herein to persist cache contents across power transitions. If the second cache device were connected to the first computer (or the first cache device connected to the second computer) and the first computer were restarted, incorrect data could be served from the second cache device to satisfy I/O requests. This is because the first computer's operating system could deem the contents of the second cache device authentic (since a regenerated representation of the data returned from cache could match a representation originally generated) and not stale (since offline writes could be applied to cache contents). Another example could arise if a first cache device were connected to a computer, the computer was shut down (thereby persisting cache contents), the computer was then restarted, a second cache device was connected, and the computer was shut down again (thereby persisting cache contents again). If the computer was then restarted again and the first cache device connected, incorrect data could be served to satisfy I/O requests, since there would be no reliable way to determine that the first cache device does not store the latest generation of cache contents.
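The volume-identifier and file-system-log checks described above both reduce to recording a marker at shutdown and comparing it at reboot. The sketch below records a volume identifier and a digest of the file system log and treats any change as a signal to evict the volume's cache contents; the file format and names are hypothetical and only illustrate the comparison, not how a write recorder would actually capture this state.

```python
import hashlib
import json

def record_markers(volume_identifier: str, fs_log_bytes: bytes, path: str) -> None:
    """At shutdown, note the volume identifier and a digest of the file system log."""
    markers = {
        "volume_identifier": volume_identifier,
        "fs_log_digest": hashlib.sha256(fs_log_bytes).hexdigest(),
    }
    with open(path, "w") as f:
        json.dump(markers, f)

def volume_contents_trusted(volume_identifier: str, fs_log_bytes: bytes, path: str) -> bool:
    """At reboot, if either marker changed, assume the volume was mounted elsewhere
    and evict the cache contents corresponding to that volume."""
    with open(path) as f:
        recorded = json.load(f)
    return (recorded["volume_identifier"] == volume_identifier
            and recorded["fs_log_digest"] == hashlib.sha256(fs_log_bytes).hexdigest())

record_markers("NTFS-c0ffee", b"log state at dismount", "markers.json")
print(volume_contents_trusted("NTFS-c0ffee", b"log state at dismount", "markers.json"))       # True
print(volume_contents_trusted("NTFS-c0ffee", b"log modified by another OS", "markers.json"))  # False
```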
Some embodiments provide a capability to identify inconsistent generations of cache contents so that cache contents persisted previous to the latest shutdown are not erroneously used to satisfy I/O requests. In some embodiments, this capability is provided via a unique persistence identifier, which may be generated (as an example) as shutdown is initiated, in any of numerous ways. For example, GUIDs and/or cryptographic random number generators may be employed for this purpose. As described above with reference to FIG. 3, the persistence identifier may be stored on the cache device (e.g., in or with cache metadata) as well as on the computer (e.g., on disk and/or memory) and verified (e.g., by comparing the two versions) as the computer is started. If verification is unsuccessful, cache contents may be evicted as representing a previous persisted cache generation.
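A minimal sketch of persistence-identifier handling follows, using a GUID (uuid4) as one of the generation options mentioned above; the storage locations are modeled as dictionaries and the function names are hypothetical.

```python
import uuid

def persist_at_shutdown(cache_store: dict, disk_store: dict) -> None:
    """Generate a persistence identifier and store one copy with the cache metadata
    and one copy on the computer (e.g., on disk)."""
    pid = str(uuid.uuid4())
    cache_store["persistence_id"] = pid
    disk_store["persistence_id"] = pid

def verify_at_startup(cache_store: dict, disk_store: dict) -> bool:
    """If the two copies do not match, the cache holds a previous (or foreign)
    generation of contents and should be evicted."""
    return cache_store.get("persistence_id") == disk_store.get("persistence_id")

cache_a, disk_a = {}, {}
persist_at_shutdown(cache_a, disk_a)
print(verify_at_startup(cache_a, disk_a))        # True: same generation

cache_b = {}                                     # a different cache device's metadata
persist_at_shutdown(cache_b, {})
print(verify_at_startup(cache_b, disk_a))        # False: evict this cache's contents
```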
As with the authentication keys discussed above, any keys used to generate a persistence identifier may be written to a location other than the cache device for the duration of a power transition. For example, in some embodiments a write recorder component may write the keys as well as the persistence identifier to disk storage (e.g., at shutdown). However, the invention is not limited to such an implementation, as those skilled in the art may envision numerous alternative locations in which keys may be saved. Keys may, for example, be kept in any configuration store provided by the operating system which is available during system boot (e.g., the registry in Windows).
III. Cache Metadata
As described above, cache metadata may provide a mapping between disk addresses where data items are stored and the corresponding addresses on a cache device where those data items are cached. Some embodiments of the invention provide a capability for storing cache metadata which significantly reduces the amount of memory required to store cache metadata during system runtime operations.
In addition, some embodiments provide techniques which allow cache metadata to be relied upon across power transitions or any other event which takes the cache device offline (e.g., removal of a cache device from the computer, a network outage which makes a network cache device inaccessible, etc.), so that cache contents may be reliably accessed when the computer is restarted and/or the cache device is brought online. In this respect, it should be appreciated that with certain types of power transitions (e.g., standby and hibernate modes), simply storing cache metadata in memory (i.e., RAM) is acceptable since the contents of memory are preserved during standby and hibernate transitions. During reboot, however, the contents of system memory are not preserved. As such, some embodiments of the invention provide for storing cache metadata on some non-volatile medium/media during shutdown, and then restoring it upon reboot. For example, cache metadata may be stored on a cache device, and/or on one or more separate non-volatile media. Further, some embodiments may be capable of deriving some portions of cache metadata from others, so that storing all cache metadata is not required.
Some embodiments may employ the techniques described in Section I. above for verifying the authenticity of cache metadata, so as to detect and prevent inadvertent or malicious modifications to metadata when the cache device goes offline (e.g., during computer shutdown, removal of the cache device from the computer, a network outage which makes a network cache device inaccessible, etc.). For example, when the cache device comes online, the cache manager may verify the authenticity of metadata as it is loaded to memory, using the techniques described above with reference to FIGS. 2A-2B. If the authenticity of cache metadata can not be verified, the corresponding cache contents may be updated based on data stored on disk, evicted, or otherwise processed as described above. In some embodiments, cache metadata may be compressed to reduce the amount of metadata to save during shutdown and load at reboot. Because compression of metadata may require saving a separate piece of information (e.g., a header in the cache) containing information about the metadata, the techniques described above may be employed to verify the authenticity of this information as well at reboot.
Some embodiments of the invention provide techniques for storing cache metadata in a manner which greatly reduces the amount of cache metadata stored in memory at any one time, thereby reducing the amount of time required to load cache metadata to, and offload it from, memory (e.g., during runtime and startup/shutdown operations) and greatly reducing the memory "footprint" of cache metadata. In this respect, it should be appreciated that with cache devices having relatively large storage capacity, a significant amount of metadata may be required to manage cache contents. For example, a cache device having a sixteen gigabyte storage capacity may be capable of storing up to thirty-two gigabytes of compressed data. In some implementations, disk addresses may be reflected in cache metadata in "data units" representing four kilobytes of disk storage. As such, to track the location of thirty-two gigabytes of data, eight million distinct data units are needed. If each of the eight million data units is represented in cache metadata using a sixteen-byte mapping (i.e., from a disk address to a cache address), then these mappings require one hundred twenty-eight megabytes of storage. Applicants have appreciated that storing one hundred twenty-eight megabytes of cache metadata in memory would occupy an unnecessarily large portion of memory in many computers. In addition, the time required to write one hundred twenty-eight megabytes of cache metadata from memory to non-volatile media during shutdown, and to restore one hundred twenty-eight megabytes of cache metadata from non-volatile media to memory at reboot, would be prohibitively time-consuming and consume an excessive amount of processing resources.
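The figures in the preceding paragraph can be checked directly (assuming binary gigabytes and four-kilobyte data units):

```python
# Worked check of the figures above: thirty-two gigabytes of cached data tracked
# in four-kilobyte data units, with a sixteen-byte mapping per unit.
cached_bytes = 32 * 1024**3          # thirty-two gigabytes
data_unit = 4 * 1024                 # four-kilobyte data units
mapping_size = 16                    # bytes per disk-to-cache mapping

data_units = cached_bytes // data_unit
metadata_bytes = data_units * mapping_size

print(data_units)                    # 8,388,608 -- roughly the "eight million" units cited
print(metadata_bytes // 1024**2)     # 128 megabytes of cache metadata
```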
Recognizing that the amount of cache metadata cannot easily be reduced, some embodiments of the invention provide techniques designed to reduce the storage resources needed to store cache metadata, as well as the time and processing resources required to save and restore cache metadata at shutdown and startup. In some embodiments, this is accomplished by storing cache metadata in one or more hierarchical data structures (e.g., trees, multi-level arrays, etc.). Employing a hierarchical data structure may allow lower levels of the hierarchy to be stored on a non-volatile medium (e.g., the cache device) while only higher levels of the hierarchy are stored in memory. For example, in some embodiments, only higher levels of the hierarchy are stored in memory, so that the "footprint" occupied by cache metadata in memory may be greatly reduced, even while enough cache metadata is stored overall to support cache devices having significant storage capacity. Of course, the invention is not limited to storing only the higher levels of the hierarchy in memory, as some embodiments may provide for storing some information kept at lower levels of the hierarchy in memory as well, so as to reduce the I/O overhead associated with repeat accesses to this information. The invention is not limited to being implemented in any particular fashion. During system operation, as read requests are processed, the cache metadata that is read from the non-volatile medium (i.e., from lower levels of the hierarchy) to perform the read operation may be "paged in" to (i.e., read from a storage medium into) memory so that it may be more quickly accessed for subsequent read requests to the same disk/cache address. When the computer is later shut down and/or the cache device is brought offline, only the cache metadata stored at the higher levels of the hierarchy, and the cache metadata from the lower levels of the hierarchy which was paged in to memory, may need to be saved to the non-volatile medium. As such, the time required to move cache metadata from memory to non-volatile storage at shutdown, and to restore cache metadata from non-volatile storage to memory at reboot, may be significantly reduced.

Some embodiments of the invention employ a B+ tree to store at least a portion of cache metadata. As those skilled in the art will appreciate, B+ trees may employ large branching factors, and therefore reduce the number of levels in the hierarchy employed. Using the example given above, if eight million data units are to be represented in cache metadata and a B+ tree with a branching factor of two hundred were employed (so that each node in the hierarchy has two hundred "child" nodes), a data structure having only three levels would be sufficient to store the metadata: a single "root" node at the highest level, two hundred nodes at the second level, and forty thousand nodes at the third level, with each of the forty thousand nodes including pointers to two hundred data units (or eight million data units total). FIG. 4 depicts this example B+ tree, which includes root node 400, level two nodes
410₁₋₂₀₀ and level three nodes 420₁₋₂₀₀. Each node includes two hundred elements, each separated by pointers to nodes at a lower level in the hierarchy. For example, element 402 in root node 400 is delimited by pointers 401 and 403. A value (e.g., a cache address) associated with a given key (e.g., a disk address) may be determined by following the pointer to the left or right of an element in a node, with the pointer to the left of the element being followed if the key is less than the element, and the pointer to the right being followed if the key is greater than the element. For example, to determine a value for a key which is less than element 402, pointer 401 would be followed to level two node 410₁; to determine a value for a key greater than element 402 but less than element 404, pointer 403 would be followed to level two node 410₂ (not shown), and so on. Similarly, at the level two node, a pointer to the left or right of an element (depending on whether the key is less than or greater than elements in the node) is followed to a level three node. At level three, a final pointer is followed (again based on whether the key is less than or greater than elements in the node) to the value, with each pointer at level three referencing one of the eight million data units in cache metadata.
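By way of non-limiting illustration, the following Python sketch implements the lookup just described for FIG. 4: within each node, the pointer to the left of an element is followed for smaller keys and the pointer to the right for larger keys, until a leaf-level mapping is reached. The node classes and names are illustrative only and are not intended to depict the actual on-device layout.

```python
from bisect import bisect_right

class Interior:
    """Interior node: children[i] lies to the left of keys[i], children[i+1] to its right."""
    def __init__(self, keys, children):
        self.keys, self.children = keys, children

class Leaf:
    """Leaf node: parallel lists mapping disk addresses (keys) to cache addresses (values)."""
    def __init__(self, keys, values):
        self.keys, self.values = keys, values

def lookup(node, disk_addr):
    """Follow left/right pointers down the tree, then return the mapped cache address."""
    while isinstance(node, Interior):
        # Keys smaller than an element go to its left; larger keys go to its right.
        node = node.children[bisect_right(node.keys, disk_addr)]
    i = bisect_right(node.keys, disk_addr) - 1
    return node.values[i] if i >= 0 and node.keys[i] == disk_addr else None

# Tiny usage example (branching factor 2 rather than 200, for brevity):
root = Interior([100], [Leaf([10, 50], ["c0", "c1"]), Leaf([100, 200], ["c2", "c3"])])
assert lookup(root, 50) == "c1"
```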
It should be appreciated that a B+ tree with a large branching factor provides a relatively "flat" hierarchy, with almost all nodes being located at the bottom level of the hierarchy. That is, of the 40,201 total nodes in the tree, 40,000 are at the lowest level. Some embodiments of the invention take advantage of this by restoring only the top two levels of the hierarchy to memory at startup, while the cache metadata in the lowest level of the hierarchy is stored on the cache device until needed (e.g., it may be loaded into memory on demand as read requests are processed, loaded lazily, etc.). Because only a portion of the hierarchical data structure is stored in memory, the cache metadata may occupy a much smaller portion of memory than would otherwise be required if the entirety of, or a larger portion of, the cache metadata were maintained in memory. In addition, when the computer is shut down, only the data at the top two levels and the data loaded into memory during operation need to be stored on the cache device. As a result, both startup and shutdown operations may be performed quickly and efficiently.
Thus, some embodiments of the invention provide for pointers in nodes at one level of the hierarchy stored in memory (in the example above, level two of the hierarchy) which reference nodes at another level of the hierarchy stored on the cache device (in the example above, level three). For example, when a read request for a cached data item is received, embodiments of the invention follow pointers through one or more levels of the hierarchy stored in memory, and then to metadata at lower levels of the hierarchy stored in cache, to determine the address at which the data item is stored in cache. In some embodiments, once the cache address is determined for the data item, it may be stored in memory so that subsequent requests to read the item may be performed without having to read cache metadata from the cache device.
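Continuing the sketch above, and purely for illustration, the class below keeps only the interior levels in memory and treats the children of the lowest in-memory level as references to leaf nodes stored on the cache device; `read_leaf` is a hypothetical helper that pages one such leaf in from the device. Once a mapping has been resolved, it is remembered in memory, so a repeat read of the same data item requires no metadata I/O.

```python
class MetadataIndex:
    """Upper hierarchy levels held in memory; leaf nodes remain on the cache device."""

    def __init__(self, in_memory_root, read_leaf):
        self.root = in_memory_root    # e.g., levels one and two of the example hierarchy
        self.read_leaf = read_leaf    # hypothetical: leaf reference -> Leaf (see sketch above)
        self.resolved = {}            # disk address -> cache address, paged in so far

    def cache_address(self, disk_addr):
        # 1. Mappings already paged in are answered from memory with no metadata I/O.
        if disk_addr in self.resolved:
            return self.resolved[disk_addr]
        # 2. Follow pointers through the in-memory interior levels...
        node = self.root
        while isinstance(node, Interior):
            node = node.children[bisect_right(node.keys, disk_addr)]
        # 3. ...then read the referenced leaf from the cache device and search it.
        leaf = self.read_leaf(node)
        addr = lookup(leaf, disk_addr)
        # 4. Remember the mapping so later reads of the same item skip steps 2 and 3.
        if addr is not None:
            self.resolved[disk_addr] = addr
        return addr
```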
FIG. 5 depicts an example system 50 for managing cache metadata in accordance with some embodiments of the invention. FIG. 5 depicts memory 500 and cache device 110, both accessible to a computer (not shown). When the computer is started, cache metadata comprising one or more levels of a hierarchical data structure such as a B+ tree is loaded to memory 500 in operation 505. Using the example above to illustrate, if there are eight million data units represented in cache metadata, such that a three-level hierarchical data structure may be used to store the cache metadata, then the top two levels of the hierarchy may be loaded to memory 500. Of course, if more or fewer than eight million data units are to be represented in metadata, and a hierarchical data structure having more or fewer than three levels is to be used, then a different number of levels of the hierarchy may be loaded to memory 500. Thereafter, when a read request is directed to a data item maintained in cache, the cache address at which the data item is stored is determined by accessing cache metadata stored in the level(s) of the hierarchy kept on cache device 110. This cache metadata is then stored in memory in operation 510, so that subsequent reads or writes to the data item may be performed without having to read cache metadata stored on the cache device to determine the cache address at which the data item is stored. Instead, the cache address may be read from memory, which may be performed more quickly than a read to cache.
Later, when the computer is shut down, the cache metadata stored in memory (i.e., the metadata stored in the levels of the hierarchy loaded to memory in operation 505, and any metadata used to satisfy read requests written to memory in operation 510) is stored to cache device 110 in act 515. As a result of the relatively small amount of cache metadata stored in memory, shutdown may be performed quickly, without requiring substantial processing resources.
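A minimal sketch of the startup and shutdown paths of FIG. 5 follows; the `read_top_levels`, `write_top_levels`, and `write_paged_in_mappings` calls are hypothetical placeholders for whatever persistence interface the cache device exposes.

```python
def on_startup(cache_dev, build_index):
    # Operation 505: load only the top hierarchy levels into memory and rebuild
    # the in-memory index from them (e.g., a MetadataIndex as sketched above).
    top_levels = cache_dev.read_top_levels()           # hypothetical device call
    return build_index(top_levels)

def on_shutdown(cache_dev, index):
    # Act 515: only the top levels plus the mappings paged in during operation 510
    # are written back, so shutdown moves megabytes of metadata rather than the
    # full mapping for the whole cache.
    cache_dev.write_top_levels(index.root)             # hypothetical device call
    cache_dev.write_paged_in_mappings(index.resolved)  # hypothetical device call
```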
It should be appreciated that a B+ tree is but one of numerous types of data structures which may be employed to store cache metadata, and that other types of data structures (e.g., hierarchical structures such as AVL trees, red-black trees, binary search trees, B-trees and/or other hierarchical and non-hierarchical data structures) may be employed. The invention is not limited to employing any one data structure or combination of data structures to store cache metadata.
Some embodiments may provide for a "target amount" of cache metadata to be kept in memory at any one time. The target amount may be determined in any suitable fashion. For example, the target amount may be a percentage of the amount of physical memory available to the computer: if the computer has one gigabyte of memory, then (as an example) two megabytes of cache metadata may be stored in memory at any one time. Thus, when the computer is shut down, only two megabytes of cache metadata need to be written to the cache device.
In some embodiments, cache metadata may be cycled in and out of memory. For example, if a target amount of cache metadata is already stored in memory, and a read is performed which requires cache metadata to be read from the cache device, that metadata may be "paged in" to memory, and other cache metadata (e.g., that which was accessed least recently) may be erased. For example, cache metadata may be erased after being written to the cache device. Alternatively, the system may determine whether the cache metadata has changed since the last time it was written out, and if not, it may simply be erased, thus eliminating the time and processing resources otherwise required to write the cache metadata. Using the techniques described above, the small "footprint" occupied by cache metadata in memory may be maintained.
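As a non-limiting illustration of the cycling just described, the sketch below bounds paged-in metadata to a target derived from the physical memory size (two megabytes per gigabyte, matching the earlier example), evicts the least recently accessed pages first, and writes a page back to the cache device only if it changed since it was paged in. The least-recently-used policy and the `write_back` callable are assumptions made for the example.

```python
from collections import OrderedDict

class MetadataPageCache:
    """Keeps at most a target amount of paged-in cache metadata in memory."""

    def __init__(self, physical_memory_bytes, fraction=2 / 1024, write_back=None):
        # e.g., roughly two megabytes of metadata for a computer with one gigabyte of memory
        self.target_bytes = int(physical_memory_bytes * fraction)
        self.pages = OrderedDict()     # page id -> (data, dirty flag), kept in LRU order
        self.used = 0
        self.write_back = write_back   # hypothetical callable writing a page to the cache device

    def page_in(self, page_id, data, dirty=False):
        # Evict least recently used pages until the incoming page fits under the target.
        while self.pages and self.used + len(data) > self.target_bytes:
            victim_id, (victim_data, victim_dirty) = self.pages.popitem(last=False)
            if victim_dirty and self.write_back:
                self.write_back(victim_id, victim_data)  # unchanged pages are simply erased
            self.used -= len(victim_data)
        self.pages[page_id] = (data, dirty)
        self.used += len(data)

    def touch(self, page_id):
        # Mark a page as most recently used when its metadata is accessed again.
        self.pages.move_to_end(page_id)
```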
FIG. 6 depicts an example. Specifically, process 60 shown in FIG. 6 includes operations which may be performed by cache manager 100 to read cache metadata using the techniques described above.
At the start of process 60, a request is received in act 605 to read data stored at disk address X. In act 610, a determination is made whether the cache address at which the data is stored can be identified from cache metadata stored in memory. If so, the process proceeds to act 615, wherein the cache address is identified, and then used to issue a read request to cache device 110 in act 620. Process 60 then completes. If the cache address cannot be identified using cache metadata stored in memory, then the process proceeds to act 625, wherein cache metadata is read from cache device 110 to determine the cache address at which the data is stored. Using the cache address identified in act 625, a read request is issued to cache device 110 in act 620, and process 60 then completes.
It should be appreciated that storing cache metadata on the cache device may not only speed up the process of loading and restoring cache metadata during startup and shutdown, but may also speed up the system operations performed during startup and shutdown. In this respect, shutdown and startup often involve multiple accesses to certain data items, and performing two read operations to a cache device is typically faster than performing one read operation to disk storage. As a result, if a data item accessed during shutdown and/or startup and the metadata which specifies its location were both stored in cache, then the data item might be accessed more quickly than if the data item were stored on disk, since the two reads to cache (i.e., one to access cache metadata to determine the item's location, and a second to access the item itself) can typically be performed more quickly than a single read to disk. As such, individual operations performed during shutdown and startup may be expedited. Even further, if during a first read of cache metadata from cache the address at which the item is stored is paged into memory, then subsequent reads of the data item could be performed even more quickly, since a read to memory can typically be performed more quickly than a read to cache.

Various aspects of the systems and methods for practicing features of the invention may be implemented on one or more computer systems, such as the exemplary computer system 700 shown in FIG. 7. Computer system 700 includes input device(s) 702, output device(s) 701, processor 703, memory system 704 and storage 706, all of which are coupled, directly or indirectly, via interconnection mechanism 705, which may comprise one or more buses, switches, networks and/or any other suitable interconnection. The input device(s) 702 receive(s) input from a user or machine (e.g., a human operator), and the output device(s) 701 display(s) or transmit(s) information to a user or machine (e.g., a liquid crystal display). The processor 703 typically executes a computer program called an operating system (e.g., a Microsoft Windows-family operating system, or any other suitable operating system) which controls the execution of other computer programs, and provides scheduling, input/output and other device control, accounting, compilation, storage assignment, data management, memory management, communication and dataflow control. Collectively, the processor and operating system define the computer platform for which application programs and other computer programs are written.
The processor 703 may also execute one or more computer programs to implement various functions. These computer programs may be written in any type of computer program language, including a procedural programming language, object-oriented programming language, macro language, or combination thereof. These computer programs may be stored in storage system 706. Storage system 706 may hold information on a volatile or non-volatile medium, and may be fixed or removable. Storage system 706 is shown in greater detail in FIG. 8. Storage system 706 typically includes a computer-readable and writable nonvolatile recording medium 801, on which signals are stored that define a computer program or information to be used by the program. A medium may, for example, be a disk or flash memory. Typically, in operation, the processor 703 causes data to be read from the nonvolatile recording medium 801 into a volatile memory 802 (e.g., a random access memory, or RAM) that allows for faster access to the information by the processor 703 than does the medium 801. The memory 802 may be located in the storage system 706, as shown in FIG. 8, or in memory system 704, as shown in FIG. 7. The processor 703 generally manipulates the data within the integrated circuit memory 704, 802 and then copies the data to the medium 801 after processing is completed. A variety of mechanisms are known for managing data movement between the medium 801 and the integrated circuit memory element 704, 802, and the invention is not limited thereto. The invention is also not limited to a particular memory system 704 or storage system 706.
Further, embodiments of the invention are also not limited to employing a cache manager component which is implemented as a driver in the I/O stack of an operating system. Any suitable component or combination of components, each of which may be implemented by an operating system or one or more standalone components, may alternatively or additionally be employed. The invention is not limited to any particular implementation.
The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the above-discussed functionality can be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. In this respect, it should be appreciated that any component or collection of components that perform the functions described herein can be generically considered as one or more controllers that control the above-discussed functions. The one or more controllers can be implemented in numerous ways, such as with dedicated hardware, or by employing one or more processors that are programmed using microcode or software to perform the functions recited above. Where a controller stores or provides data for system operation, such data may be stored in a central repository, in a plurality of repositories, or a combination thereof. Further, it should be appreciated that a (client or server) computer may be embodied in any of a number of forms, such as a rack-mounted computer, desktop computer, laptop computer, tablet computer, or other type of computer. Additionally, a (client or server) computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
Also, a (client or server) computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks. Also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms.
Additionally, software may be written using any of a number of suitable programming languages and/or conventional programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. In this respect, the invention may be embodied as a storage medium (or multiple storage media) (e.g., a computer memory, one or more floppy disks, compact disks, optical disks, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other computer storage media) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above. The storage medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above.
The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention. Computer-executable instructions may be provided in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing, and the invention is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
Use of ordinal terms such as "first," "second," "third," etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term). Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," or "having," "containing," "involving," and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description and drawings are by way of example only.
What is claimed is:

Claims

1. A method for operating a computer (700) comprising a memory and having coupled thereto a storage medium (706) and a cache device (110), the storage medium (706) storing a plurality of data items each at respective addresses, each of the plurality of data items also being stored at a corresponding address on the cache device (110), cache metadata accessible to the computer providing a mapping between the address on the storage medium and the corresponding address on the cache device at which each data item is stored, the method comprising acts of:
(A) storing the cache metadata in a hierarchical data structure comprising a plurality of hierarchy levels; and
(B) loading only a subset of the plurality of hierarchy levels to the memory.
2. The method of claim 1, wherein the act (B) further comprises storing at least a portion of the remainder of the plurality of hierarchy levels on the cache device.
3. The method of claim 1, further comprising an act, performed after the act (A), comprising initiating a reboot of the computer and/or bringing the cache device offline, and wherein the act (B) is performed upon initiating the reboot of the computer and/or bringing the cache device online.
4. The method of claim 1, wherein the hierarchical data structure has a branching factor of at least one hundred.
5. The method of claim 1, wherein the act (A) further comprises storing cache metadata on the cache device in response to a command to take the cache device offline.
6. The method of claim 1, further comprising acts of:
(D) receiving a request to read a data item stored at an address on the storage medium; (E) accessing a first portion of the cache metadata to identify a corresponding address at which the data item is stored on the cache device; and
(F) storing at least some of the first portion of the cache metadata in the memory.
7. The method of claim 6, wherein the act (D) further comprises receiving a request to read a data item as part of an operation performed to boot an operating system and/or to bring the cache device online.
8. The method of claim 6, wherein: the act (D) further comprises receiving requests to read a plurality of data items each stored at a respective address on the storage medium; the act (E) further comprises, for each request received in (D), accessing the cache metadata to identify a corresponding address at which a data item is stored on the cache device; the act (F) further comprises, for each access in (E), storing an indication of the corresponding address in the memory; and wherein the method further comprises an act of:
(G) upon receiving a command to shut down the computer, storing the subset of the plurality of hierarchy levels loaded to the memory in (C) and/or the indications stored in (F) to the cache device.
9. The method of claim 6, wherein the act (F) further comprises:
(Fl) determining whether a target amount of cache metadata is already stored in the memory; (F2) if it is determined that the target amount of cache metadata is already stored in the memory: identifying a second portion of cache metadata to be erased from the memory; erasing the second portion of cache metadata; and storing the first portion of cache metadata to the memory; and
(F3) if it is determined that the target amount of cache metadata is not already stored in the memory, storing the first portion of cache metadata to the memory.
10. The method of claim 9, wherein the memory has a storage capacity, and wherein the determining in the act (Fl) is performed with reference to the storage capacity of the memory.
11. The method of claim 6, wherein the act (E) further comprises verifying that the cache metadata was not modified after completion of the act (A).
12. The method of claim 11, wherein the act (A) further comprises generating a representation of at least a portion of the cache metadata and writing the representation to the cache device, and wherein the act of verifying in (E) comprises: (El) retrieving the representation written to the cache device;
(E2) re-generating the representation; and
(E3) comparing the representation retrieved in (El) to the representation regenerated in (E2) to determine whether the cache metadata can reliably be employed to identify the corresponding address at which the data item is stored on the cache device.
13. The method of claim 12, further comprising acts of:
(E4) if it is determined that the cache metadata can reliably be employed to identify the corresponding address, reading the data item at the corresponding address on the cache device; and (E5) if it is determined that the cache metadata can not reliably be employed to identify the corresponding address, evicting the cache metadata and reading the data item from the address on the storage medium.
14. At least one computer-readable storage medium having instructions encoded thereon which, when executed by a computer (700) comprising a memory and having coupled thereto disk storage (706) and a cache device (110), the disk storage storing a plurality of data items each at respective addresses, each of the plurality of data items also being stored at a corresponding address on the cache device, cache metadata accessible to the computer providing a mapping between the address on the disk storage and the corresponding address on the cache device at which each data item is stored, perform a method comprising acts of:
(A) storing the cache metadata, in the cache device, in a hierarchical data structure comprising a plurality of hierarchy levels;
(B) initiating a reboot of the computer;
(C) upon initiating the reboot of the computer, loading only a subset of the plurality of hierarchy levels to the memory;
(D) receiving a request to read a data item stored at an address on the storage medium;
(E) accessing a first portion of the cache metadata to identify a corresponding address at which the data item is stored on the cache device; and
(F) storing the first portion of the cache metadata in the memory.
15. A computer system (700), comprising: a memory (704); a storage medium (706) storing a plurality of data items at respective addresses; a cache device (110) also storing the plurality of data items at corresponding addresses and cache metadata providing a mapping between the address on the storage medium and the corresponding address on the cache device at which each data item is stored, the cache metadata being stored in a hierarchical data structure comprising a plurality of hierarchy levels; at least one processor (703) programmed to: upon initiating a reboot of the computer, load only a subset of the plurality of hierarchy levels to the memory; process requests to read data items stored at respective addresses on the storage medium by using the cache metadata to identify corresponding addresses at which the data items are stored in the cache device and by storing identified corresponding addresses in the memory; and process a command to shut down the computer by transferring the subset of the plurality of hierarchy levels and the identified corresponding addresses from the memory to the cache device.
PCT/US2009/063127 2008-11-14 2009-11-03 Managing cache data and metadata WO2010056571A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2011536387A JP2012508932A (en) 2008-11-14 2009-11-03 Manage cache data and metadata
ES09826570.5T ES2663701T3 (en) 2008-11-14 2009-11-03 Data management and cache metadata
EP09826570.5A EP2353081B1 (en) 2008-11-14 2009-11-03 Managing cache data and metadata
CN200980145878.1A CN102216899B (en) 2008-11-14 2009-11-03 Managing cache data and metadata

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/271,400 2008-11-14
US12/271,400 US8032707B2 (en) 2008-09-15 2008-11-14 Managing cache data and metadata

Publications (2)

Publication Number Publication Date
WO2010056571A2 true WO2010056571A2 (en) 2010-05-20
WO2010056571A3 WO2010056571A3 (en) 2010-07-29

Family

ID=42008275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/063127 WO2010056571A2 (en) 2008-11-14 2009-11-03 Managing cache data and metadata

Country Status (7)

Country Link
US (3) US8032707B2 (en)
EP (1) EP2353081B1 (en)
JP (1) JP2012508932A (en)
CN (1) CN102216899B (en)
ES (1) ES2663701T3 (en)
TW (1) TWI471726B (en)
WO (1) WO2010056571A2 (en)

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7644239B2 (en) 2004-05-03 2010-01-05 Microsoft Corporation Non-volatile memory cache performance improvement
US7490197B2 (en) 2004-10-21 2009-02-10 Microsoft Corporation Using external memory devices to improve system performance
US8914557B2 (en) 2005-12-16 2014-12-16 Microsoft Corporation Optimizing write and wear performance for a memory
IL176685A (en) * 2006-07-03 2011-02-28 Eci Telecom Dnd Inc Method for performing a system shutdown
US8631203B2 (en) 2007-12-10 2014-01-14 Microsoft Corporation Management of external memory functioning as virtual cache
US8032707B2 (en) 2008-09-15 2011-10-04 Microsoft Corporation Managing cache data and metadata
US9032151B2 (en) 2008-09-15 2015-05-12 Microsoft Technology Licensing, Llc Method and system for ensuring reliability of cache data and metadata subsequent to a reboot
US7953774B2 (en) 2008-09-19 2011-05-31 Microsoft Corporation Aggregation of write traffic to a data store
US8200895B2 (en) * 2009-05-04 2012-06-12 Microsoft Corporation File system recognition structure
US9015733B2 (en) 2012-08-31 2015-04-21 Facebook, Inc. API version testing based on query schema
KR101113894B1 (en) * 2010-05-18 2012-02-29 주식회사 노바칩스 Semiconductor memory system and controlling method thereof
JP5520747B2 (en) * 2010-08-25 2014-06-11 株式会社日立製作所 Information device equipped with cache and computer-readable storage medium
US8793309B2 (en) * 2010-09-07 2014-07-29 Sap Ag (Th) Systems and methods for the efficient exchange of hierarchical data
US10114847B2 (en) * 2010-10-04 2018-10-30 Ca, Inc. Change capture prior to shutdown for later backup
US9003104B2 (en) 2011-02-15 2015-04-07 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a file-level cache
US8996807B2 (en) 2011-02-15 2015-03-31 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a multi-level cache
US8874823B2 (en) 2011-02-15 2014-10-28 Intellectual Property Holdings 2 Llc Systems and methods for managing data input/output operations
US9201677B2 (en) 2011-05-23 2015-12-01 Intelligent Intellectual Property Holdings 2 Llc Managing data input/output operations
US9286079B1 (en) * 2011-06-30 2016-03-15 Western Digital Technologies, Inc. Cache optimization of a data storage device based on progress of boot commands
CN102567490B (en) * 2011-12-21 2013-12-04 华为技术有限公司 Method and apparatus for recovering description information and caching data in database
US9286303B1 (en) * 2011-12-22 2016-03-15 Emc Corporation Unified catalog service
US9116812B2 (en) 2012-01-27 2015-08-25 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a de-duplication cache
US8756538B2 (en) * 2012-02-20 2014-06-17 International Business Machines Corporation Parsing data representative of a hardware design into commands of a hardware design environment
US9612966B2 (en) 2012-07-03 2017-04-04 Sandisk Technologies Llc Systems, methods and apparatus for a virtual machine cache
US10339056B2 (en) 2012-07-03 2019-07-02 Sandisk Technologies Llc Systems, methods and apparatus for cache transfers
WO2014030249A1 (en) * 2012-08-24 2014-02-27 株式会社日立製作所 Verification system and verification method for i/o performance of volume
US9646028B2 (en) * 2012-08-31 2017-05-09 Facebook, Inc. Graph query logic
US20140067781A1 (en) * 2012-08-31 2014-03-06 Scott W. Wolchok Graph Query Language API Querying and Parsing
US9047238B2 (en) 2012-11-28 2015-06-02 Red Hat Israel, Ltd. Creating a virtual machine from a snapshot
US10713183B2 (en) * 2012-11-28 2020-07-14 Red Hat Israel, Ltd. Virtual machine backup using snapshots and current configuration
US9317435B1 (en) * 2012-12-18 2016-04-19 Netapp, Inc. System and method for an efficient cache warm-up
US9842053B2 (en) 2013-03-15 2017-12-12 Sandisk Technologies Llc Systems and methods for persistent cache logging
US10089192B2 (en) 2013-06-13 2018-10-02 Hytrust, Inc. Live restore for a data intelligent storage system
US9213706B2 (en) 2013-06-13 2015-12-15 DataGravity, Inc. Live restore for a data intelligent storage system
US8849764B1 (en) 2013-06-13 2014-09-30 DataGravity, Inc. System and method of data intelligent storage
US10102079B2 (en) 2013-06-13 2018-10-16 Hytrust, Inc. Triggering discovery points based on change
US9684607B2 (en) 2015-02-25 2017-06-20 Microsoft Technology Licensing, Llc Automatic recovery of application cache warmth
US9582160B2 (en) 2013-11-14 2017-02-28 Apple Inc. Semi-automatic organic layout for media streams
US9489104B2 (en) 2013-11-14 2016-11-08 Apple Inc. Viewable frame identification
US20150134661A1 (en) * 2013-11-14 2015-05-14 Apple Inc. Multi-Source Media Aggregation
US10108686B2 (en) 2014-02-19 2018-10-23 Snowflake Computing Inc. Implementation of semi-structured data as a first-class database element
US10545917B2 (en) 2014-02-19 2020-01-28 Snowflake Inc. Multi-range and runtime pruning
WO2016122491A1 (en) * 2015-01-28 2016-08-04 Hewlett-Packard Development Company, L.P. Page cache in a non-volatile memory
US9684596B2 (en) 2015-02-25 2017-06-20 Microsoft Technology Licensing, Llc Application cache replication to secondary application(s)
US9600417B2 (en) * 2015-04-29 2017-03-21 Google Inc. Data caching
CN106484691B (en) 2015-08-24 2019-12-10 阿里巴巴集团控股有限公司 data storage method and device of mobile terminal
CN106681649A (en) * 2015-11-06 2017-05-17 湖南百里目科技有限责任公司 Distributed storage metadata accelerating method
US10678578B2 (en) * 2016-06-30 2020-06-09 Microsoft Technology Licensing, Llc Systems and methods for live migration of a virtual machine based on heat map and access pattern
US10437780B2 (en) * 2016-07-14 2019-10-08 Snowflake Inc. Data pruning based on metadata
CN108733507B (en) * 2017-04-17 2021-10-08 伊姆西Ip控股有限责任公司 Method and device for file backup and recovery
US10733044B2 (en) * 2018-07-09 2020-08-04 Microsoft Technology Licensing, Llc Use of cache for content validation and error remediation
US11354415B2 (en) * 2019-06-29 2022-06-07 Intel Corporation Warm boot attack mitigations for non-volatile memory modules
US10992743B1 (en) * 2019-09-23 2021-04-27 Amazon Technologies, Inc. Dynamic cache fleet management
US11221776B2 (en) * 2019-12-30 2022-01-11 Micron Technology, Inc. Metadata indication for a memory device
CN113031864B (en) * 2021-03-19 2024-02-02 上海众源网络有限公司 Data processing method and device, electronic equipment and storage medium
US11366765B1 (en) 2021-04-21 2022-06-21 International Business Machines Corporation Optimize metadata management to boost overall system performance
US20230124622A1 (en) * 2021-10-14 2023-04-20 Arm Limited Alarm Systems and Circuits
KR20230072886A (en) * 2021-11-18 2023-05-25 에스케이하이닉스 주식회사 Apparatus and method for improving data input/output performance of storage
US11860780B2 (en) 2022-01-28 2024-01-02 Pure Storage, Inc. Storage cache management

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6128627A (en) 1998-04-15 2000-10-03 Inktomi Corporation Consistent data storage in an object cache
US20070061511A1 (en) 2005-09-15 2007-03-15 Faber Robert W Distributed and packed metadata structure for disk cache

Family Cites Families (282)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4476526A (en) 1981-11-27 1984-10-09 Storage Technology Corporation Cache buffered memory subsystem
US4612612A (en) 1983-08-30 1986-09-16 Amdahl Corporation Virtually addressed cache
US4979108A (en) 1985-12-20 1990-12-18 Ag Communication Systems Corporation Task synchronization arrangement and method for remote duplex processors
US4972316A (en) 1987-03-30 1990-11-20 International Business Machines Corporation Method of handling disk sector errors in DASD cache
US4945474A (en) 1988-04-08 1990-07-31 Internatinal Business Machines Corporation Method for restoring a database after I/O error employing write-ahead logging protocols
US5394531A (en) 1989-04-03 1995-02-28 International Business Machines Corporation Dynamic storage allocation system for a prioritized cache
EP0617363B1 (en) 1989-04-13 2000-01-26 SanDisk Corporation Defective cell substitution in EEprom array
JPH02273843A (en) 1989-04-14 1990-11-08 Nec Corp Swapping device
US5900870A (en) 1989-06-30 1999-05-04 Massachusetts Institute Of Technology Object-oriented computer user interface
US5088026A (en) 1990-02-09 1992-02-11 International Business Machines Corporation Method for managing a data cache using virtual external storage addresses as arguments
US5307497A (en) 1990-06-25 1994-04-26 International Business Machines Corp. Disk operating system loadable from read only memory using installable file system interface
RU2010317C1 (en) 1990-07-20 1994-03-30 Институт точной механики и вычислительной техники им.С.А.Лебедева РАН Buffer memory control unit
US5263136A (en) 1991-04-30 1993-11-16 Optigraphics Corporation System for managing tiled images using multiple resolutions
US5764877A (en) 1991-06-25 1998-06-09 Digital Equipment Corporation Media recovery with time-split B-trees
JP2582487B2 (en) 1991-07-12 1997-02-19 インターナショナル・ビジネス・マシーンズ・コーポレイション External storage system using semiconductor memory and control method thereof
US6230233B1 (en) 1991-09-13 2001-05-08 Sandisk Corporation Wear leveling techniques for flash EEPROM systems
US5297258A (en) 1991-11-21 1994-03-22 Ast Research, Inc. Data logging for hard disk data storage systems
JP3451099B2 (en) 1991-12-06 2003-09-29 株式会社日立製作所 External storage subsystem
EP0547992A3 (en) 1991-12-17 1993-12-01 Ibm Method and system for enhanced efficiency of data recovery in balanced tree memory structures
JP3485938B2 (en) 1992-03-31 2004-01-13 株式会社東芝 Nonvolatile semiconductor memory device
US5420998A (en) 1992-04-10 1995-05-30 Fujitsu Limited Dual memory disk drive
US5398325A (en) 1992-05-07 1995-03-14 Sun Microsystems, Inc. Methods and apparatus for improving cache consistency using a single copy of a cache tag memory in multiple processor computer systems
US5574877A (en) 1992-09-25 1996-11-12 Silicon Graphics, Inc. TLB with two physical pages per virtual tag
US5454098A (en) 1992-09-28 1995-09-26 Conner Peripherals, Inc. Method of emulating access to a sequential access data storage device while actually using a random access storage device
US5561783A (en) 1992-11-16 1996-10-01 Intel Corporation Dynamic cache coherency method and apparatus using both write-back and write-through operations
US5751932A (en) 1992-12-17 1998-05-12 Tandem Computers Incorporated Fail-fast, fail-functional, fault-tolerant multiprocessor system
US5463739A (en) 1992-12-22 1995-10-31 International Business Machines Corporation Apparatus for vetoing reallocation requests during a data transfer based on data bus latency and the number of received reallocation requests below a threshold
US5557770A (en) 1993-03-24 1996-09-17 International Business Machines Corporation Disk storage apparatus and method for converting random writes to sequential writes while retaining physical clustering on disk
KR970008188B1 (en) 1993-04-08 1997-05-21 가부시끼가이샤 히다찌세이사꾸쇼 Control method of flash memory and information processing apparatus using the same
US5551002A (en) 1993-07-01 1996-08-27 Digital Equipment Corporation System for controlling a write cache and merging adjacent data blocks for write operations
US5572660A (en) 1993-10-27 1996-11-05 Dell Usa, L.P. System and method for selective write-back caching within a disk array subsystem
JPH086854A (en) 1993-12-23 1996-01-12 Unisys Corp Outboard-file-cache external processing complex
US6026027A (en) 1994-01-31 2000-02-15 Norand Corporation Flash memory system having memory cache
EP0668565B1 (en) 1994-02-22 2002-07-17 Advanced Micro Devices, Inc. Virtual memory system
US6185629B1 (en) 1994-03-08 2001-02-06 Texas Instruments Incorporated Data transfer controller employing differing memory interface protocols dependent upon external input at predetermined time
US5603001A (en) 1994-05-09 1997-02-11 Kabushiki Kaisha Toshiba Semiconductor disk system having a plurality of flash memories
US5642501A (en) 1994-07-26 1997-06-24 Novell, Inc. Computer method and apparatus for asynchronous ordered operations
US5845293A (en) 1994-08-08 1998-12-01 Microsoft Corporation Method and system of associating, synchronizing and reconciling computer files in an operating system
DE69520753T2 (en) 1994-10-18 2001-11-22 Iomega Corp DISK CASSETTE DETECTION METHOD AND DEVICE
JPH08137634A (en) 1994-11-09 1996-05-31 Mitsubishi Electric Corp Flash disk card
JPH11500548A (en) 1995-01-23 1999-01-12 タンデム コンピューターズ インコーポレイテッド Database integrity maintenance system
JP3426385B2 (en) 1995-03-09 2003-07-14 富士通株式会社 Disk controller
US5897660A (en) 1995-04-07 1999-04-27 Intel Corporation Method for managing free physical pages that reduces trashing to improve system performance
US6078925A (en) 1995-05-01 2000-06-20 International Business Machines Corporation Computer program product for database relational extenders
US5917723A (en) 1995-05-22 1999-06-29 Lsi Logic Corporation Method and apparatus for transferring data between two devices with reduced microprocessor overhead
US5608892A (en) 1995-06-09 1997-03-04 Alantec Corporation Active cache for a microprocessor
US5720029A (en) 1995-07-25 1998-02-17 International Business Machines Corporation Asynchronously shadowing record updates in a remote copy session using track arrays
US5717954A (en) 1995-10-13 1998-02-10 Compaq Computer Corporation Locked exchange FIFO
US5809280A (en) 1995-10-13 1998-09-15 Compaq Computer Corporation Adaptive ahead FIFO with LRU replacement
US5754782A (en) 1995-12-04 1998-05-19 International Business Machines Corporation System and method for backing up and restoring groupware documents
US5754888A (en) 1996-01-18 1998-05-19 The Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations System for destaging data during idle time by transferring to destage buffer, marking segment blank , reodering data in buffer, and transferring to beginning of segment
US5806074A (en) 1996-03-19 1998-09-08 Oracle Corporation Configurable conflict resolution in a computer implemented distributed database
US6247026B1 (en) 1996-10-11 2001-06-12 Sun Microsystems, Inc. Method, apparatus, and product for leasing of delegation certificates in a distributed system
US5832515A (en) 1996-09-12 1998-11-03 Veritas Software Log device layered transparently within a filesystem paradigm
US5996054A (en) 1996-09-12 1999-11-30 Veritas Software Corp. Efficient virtualized mapping space for log device data storage system
US6321234B1 (en) 1996-09-18 2001-11-20 Sybase, Inc. Database server system with improved methods for logging transactions
GB2317722B (en) 1996-09-30 2001-07-18 Nokia Mobile Phones Ltd Memory device
GB2317720A (en) 1996-09-30 1998-04-01 Nokia Mobile Phones Ltd Managing Flash memory
US6112024A (en) 1996-10-02 2000-08-29 Sybase, Inc. Development system providing methods for managing different versions of objects with a meta model
US5832529A (en) 1996-10-11 1998-11-03 Sun Microsystems, Inc. Methods, apparatus, and product for distributed garbage collection
JPH10177563A (en) 1996-12-17 1998-06-30 Mitsubishi Electric Corp Microcomputer with built-in flash memory
US6073232A (en) 1997-02-25 2000-06-06 International Business Machines Corporation Method for minimizing a computer's initial program load time after a system reset or a power-on using non-volatile storage
US6345000B1 (en) 1997-04-16 2002-02-05 Sandisk Corporation Flash memory permitting simultaneous read/write and erase operations in a single memory array
US5943692A (en) 1997-04-30 1999-08-24 International Business Machines Corporation Mobile client computer system with flash memory management utilizing a virtual address map and variable length data
US5897638A (en) 1997-06-16 1999-04-27 Ab Initio Software Corporation Parallel virtual file system
US6148368A (en) 1997-07-31 2000-11-14 Lsi Logic Corporation Method for accelerating disk array write operations using segmented cache memory and data logging
US6879266B1 (en) 1997-08-08 2005-04-12 Quickshift, Inc. Memory module including scalable embedded parallel data compression and decompression engines
US6000006A (en) 1997-08-25 1999-12-07 Bit Microsystems, Inc. Unified re-map and cache-index table with dual write-counters for wear-leveling of non-volatile flash RAM mass storage
US6240414B1 (en) 1997-09-28 2001-05-29 Eisolutions, Inc. Method of resolving data conflicts in a shared data environment
US6189071B1 (en) 1997-10-06 2001-02-13 Emc Corporation Method for maximizing sequential output in a disk array storage device
US6108004A (en) 1997-10-21 2000-08-22 International Business Machines Corporation GUI guide for data mining
FR2770952B1 (en) 1997-11-12 2000-01-21 Adl Systeme Sa TELE-WRITING DEVICE
US6560702B1 (en) 1997-12-10 2003-05-06 Phoenix Technologies Ltd. Method and apparatus for execution of an application during computer pre-boot operation
US6098075A (en) 1997-12-16 2000-08-01 International Business Machines Corporation Deferred referential integrity checking based on determining whether row at-a-time referential integrity checking would yield the same results as deferred integrity checking
US6567889B1 (en) 1997-12-19 2003-05-20 Lsi Logic Corporation Apparatus and method to provide virtual solid state disk in cache memory in a storage controller
US6018746A (en) 1997-12-23 2000-01-25 Unisys Corporation System and method for managing recovery information in a transaction processing system
US6006291A (en) 1997-12-31 1999-12-21 Intel Corporation High-throughput interface between a system memory controller and a peripheral device
US6205527B1 (en) 1998-02-24 2001-03-20 Adaptec, Inc. Intelligent backup and restoring system and method for implementing the same
US7007072B1 (en) 1999-07-27 2006-02-28 Storage Technology Corporation Method and system for efficiently storing web pages for quick downloading at a remote device
US6272534B1 (en) 1998-03-04 2001-08-07 Storage Technology Corporation Method and system for efficiently storing web pages for quick downloading at a remote device
US6959318B1 (en) 1998-03-06 2005-10-25 Intel Corporation Method of proxy-assisted predictive pre-fetching with transcoding
US6298428B1 (en) 1998-03-30 2001-10-02 International Business Machines Corporation Method and apparatus for shared persistent virtual storage on existing operating systems
US6138125A (en) 1998-03-31 2000-10-24 Lsi Logic Corporation Block coding method and system for failure recovery in disk arrays
US6360330B1 (en) 1998-03-31 2002-03-19 Emc Corporation System and method for backing up data stored in multiple mirrors on a mass storage subsystem under control of a backup server
US6263342B1 (en) 1998-04-01 2001-07-17 International Business Machines Corp. Federated searching of heterogeneous datastores using a federated datastore object
US6101601A (en) 1998-04-20 2000-08-08 International Business Machines Corporation Method and apparatus for hibernation within a distributed data processing system
US6122685A (en) 1998-05-06 2000-09-19 Emc Corporation System for improving the performance of a disk storage device by reconfiguring a logical volume of data in response to the type of operations being performed
US6314433B1 (en) 1998-06-12 2001-11-06 Hewlett-Packard Company Frame-based heroic data recovery
FR2780178B1 (en) 1998-06-18 2001-08-10 Inst Nat Rech Inf Automat METHOD FOR TRANSFORMING AND TRANSPORTING DATA BETWEEN AGENT SERVERS PRESENT ON MACHINES AND A CENTRAL AGENT SERVER PRESENT ON ANOTHER MACHINE
US6425057B1 (en) 1998-08-27 2002-07-23 Hewlett-Packard Company Caching protocol method and system based on request frequency and relative storage duration
US6209088B1 (en) 1998-09-21 2001-03-27 Microsoft Corporation Computer hibernation implemented by a computer operating system
US6714935B1 (en) 1998-09-21 2004-03-30 Microsoft Corporation Management of non-persistent data in a persistent database
US6519597B1 (en) 1998-10-08 2003-02-11 International Business Machines Corporation Method and apparatus for indexing structured documents with rich data types
US6249841B1 (en) 1998-12-03 2001-06-19 Ramtron International Corporation Integrated circuit memory device and method incorporating flash and ferroelectric random access memory arrays
US6338056B1 (en) 1998-12-14 2002-01-08 International Business Machines Corporation Relational database extender that supports user-defined index types and user-defined search
US6378043B1 (en) 1998-12-31 2002-04-23 Oracle Corporation Reward based cache management
US6640278B1 (en) 1999-03-25 2003-10-28 Dell Products L.P. Method for configuration and management of storage resources in a storage network
US6401093B1 (en) 1999-03-31 2002-06-04 International Business Machines Corporation Cross file system caching and synchronization
US6295578B1 (en) 1999-04-09 2001-09-25 Compaq Computer Corporation Cascaded removable media data storage system
US6535949B1 (en) * 1999-04-19 2003-03-18 Research In Motion Limited Portable electronic device having a log-structured file system in flash memory
US6237065B1 (en) 1999-05-14 2001-05-22 Hewlett-Packard Company Preemptive replacement strategy for a caching dynamic translator
US6317806B1 (en) 1999-05-20 2001-11-13 International Business Machines Corporation Static queue and index queue for storing values identifying static queue locations
US6381605B1 (en) 1999-05-29 2002-04-30 Oracle Corporation Heirarchical indexing of multi-attribute data by sorting, dividing and storing subsets
US6370534B1 (en) 1999-06-01 2002-04-09 Pliant Technologies, Inc. Blocking techniques for data storage
US6438750B1 (en) 1999-06-18 2002-08-20 Phoenix Technologies Ltd. Determining loading time of an operating system
TW479194B (en) 1999-06-18 2002-03-11 Phoenix Tech Ltd Method and apparatus for execution of an application during computer pre-boot operation
US6199195B1 (en) 1999-07-08 2001-03-06 Science Application International Corporation Automatically generated objects within extensible object frameworks and links to enterprise resources
JP3812928B2 (en) 1999-07-14 2006-08-23 株式会社日立製作所 External storage device and information processing system
US6513051B1 (en) 1999-07-16 2003-01-28 Microsoft Corporation Method and system for backing up and restoring files stored in a single instance store
US6311232B1 (en) 1999-07-29 2001-10-30 Compaq Computer Corporation Method and apparatus for configuring storage devices
US6542904B2 (en) 1999-07-30 2003-04-01 International Business Machines Corporation Method and system for efficiently providing maintenance activity on a relational database that is utilized within a processing system
JP3239335B2 (en) 1999-08-18 2001-12-17 インターナショナル・ビジネス・マシーンズ・コーポレーション Method for forming structure for electrical connection and substrate for solder transfer
JP2001067258A (en) 1999-08-25 2001-03-16 Mitsubishi Electric Corp Semiconductor device with built-in flash memory and flash memory address converting method
US6370541B1 (en) 1999-09-21 2002-04-09 International Business Machines Corporation Design and implementation of a client/server framework for federated multi-search and update across heterogeneous datastores
US6539456B2 (en) * 1999-10-13 2003-03-25 Intel Corporation Hardware acceleration of boot-up utilizing a non-volatile disk cache
US6751658B1 (en) 1999-10-18 2004-06-15 Apple Computer, Inc. Providing a reliable operating system for clients of a net-booted environment
US6338126B1 (en) 1999-12-06 2002-01-08 Legato Systems, Inc. Crash recovery without complete remirror
CN1206594C (en) 1999-12-17 2005-06-15 皇家菲利浦电子有限公司 Data processor with cache
JP3562419B2 (en) 2000-02-01 2004-09-08 日本電気株式会社 Electronic exchange
US6556983B1 (en) 2000-01-12 2003-04-29 Microsoft Corporation Methods and apparatus for finding semantic information, such as usage logs, similar to a query using a pattern lattice data space
US6366996B1 (en) 2000-01-24 2002-04-02 Pmc-Sierra, Inc. Page memory management in non time critical data buffering applications
US6671757B1 (en) 2000-01-26 2003-12-30 Fusionone, Inc. Data transfer and synchronization system
US6694336B1 (en) 2000-01-25 2004-02-17 Fusionone, Inc. Data transfer and synchronization system
WO2001057675A1 (en) 2000-02-02 2001-08-09 Sony Electronics Inc. System and method for effectively utilizing a cache memory in an electronic device
JP4131894B2 (en) 2000-02-29 2008-08-13 株式会社東芝 Disk control mechanism suitable for random disk write
JP4078010B2 (en) 2000-03-03 2008-04-23 株式会社日立グローバルストレージテクノロジーズ Magnetic disk apparatus and information recording method
US6633978B1 (en) 2000-03-31 2003-10-14 Hewlett-Packard Development Company, L.P. Method and apparatus for restoring computer resources
US6718361B1 (en) 2000-04-07 2004-04-06 Network Appliance Inc. Method and apparatus for reliable and scalable distribution of data files in distributed networks
US6820088B1 (en) 2000-04-10 2004-11-16 Research In Motion Limited System and method for synchronizing data records between multiple databases
US7421541B2 (en) * 2000-05-12 2008-09-02 Oracle International Corporation Version management of cached permissions metadata
US6629201B2 (en) 2000-05-15 2003-09-30 Superspeed Software, Inc. System and method for high-speed substitute cache
US6671699B1 (en) 2000-05-20 2003-12-30 Equipe Communications Corporation Shared database usage in network devices
US6715016B1 (en) 2000-06-01 2004-03-30 Hitachi, Ltd. Multiple operating system control method
JP3705731B2 (en) 2000-06-05 2005-10-12 富士通株式会社 I / O controller
US7412369B1 (en) 2000-06-09 2008-08-12 Stmicroelectronics, Inc. System and method for designing and optimizing the memory of an embedded processing system
TW576966B (en) 2000-06-23 2004-02-21 Intel Corp Non-volatile cache integrated with mass storage device
US6557077B1 (en) 2000-07-07 2003-04-29 Lsi Logic Corporation Transportable memory apparatus and associated methods of initializing a computer system having the same
US6928521B1 (en) * 2000-08-01 2005-08-09 International Business Machines Corporation Method, system, and data structures for using metadata in updating data in a storage device
US6418510B1 (en) 2000-09-14 2002-07-09 International Business Machines Corporation Cooperative cache and rotational positioning optimization (RPO) scheme for a direct access storage device (DASD)
US6725342B1 (en) 2000-09-26 2004-04-20 Intel Corporation Non-volatile mass storage cache coherency apparatus
US6434682B1 (en) 2000-09-28 2002-08-13 International Business Machines Corporation Data management system with shortcut migration via efficient automatic reconnection to previously migrated copy
US7043524B2 (en) 2000-11-06 2006-05-09 Omnishift Technologies, Inc. Network caching system for streamed applications
US6999956B2 (en) 2000-11-16 2006-02-14 Ward Mullins Dynamic object-driven database manipulation and mapping system
US6629198B2 (en) 2000-12-08 2003-09-30 Sun Microsystems, Inc. Data storage system and method employing a write-ahead hash log
US7178100B2 (en) 2000-12-15 2007-02-13 Call Charles G Methods and apparatus for storing and manipulating variable length and fixed length data elements as a sequence of fixed length integers
US6871271B2 (en) 2000-12-21 2005-03-22 Emc Corporation Incrementally restoring a mass storage device to a prior state
JP2002197073A (en) * 2000-12-25 2002-07-12 Hitachi Ltd Cache coherence controller
US6651141B2 (en) 2000-12-29 2003-11-18 Intel Corporation System and method for populating cache servers with popular media contents
US6546472B2 (en) 2000-12-29 2003-04-08 Hewlett-Packard Development Company, L.P. Fast suspend to disk
US6516380B2 (en) * 2001-02-05 2003-02-04 International Business Machines Corporation System and method for a log-based non-volatile write cache in a storage controller
US6877081B2 (en) 2001-02-13 2005-04-05 International Business Machines Corporation System and method for managing memory compression transparent to an operating system
US6918022B2 (en) 2001-02-28 2005-07-12 Intel Corporation Memory space organization
JP2002259186A (en) 2001-03-06 2002-09-13 Hitachi Ltd Method, program and device for checking and processing compatibility of tree structured index
US6877111B2 (en) 2001-03-26 2005-04-05 Sun Microsystems, Inc. Method and apparatus for managing replicated and migration capable session state for a Java platform
US6996660B1 (en) 2001-04-09 2006-02-07 Matrix Semiconductor, Inc. Memory device and method for storing and reading data in a write-once memory array
US6584034B1 (en) 2001-04-23 2003-06-24 Aplus Flash Technology Inc. Flash memory array structure suitable for multiple simultaneous operations
US6961723B2 (en) 2001-05-04 2005-11-01 Sun Microsystems, Inc. System and method for determining relevancy of query responses in a distributed network search mechanism
US6717763B2 (en) 2001-05-16 2004-04-06 Hitachi Global Storage Technologies, Netherlands B.V. Power savings method and apparatus for disk drives
JP2002342037A (en) 2001-05-22 2002-11-29 Fujitsu Ltd Disk device
KR100389867B1 (en) 2001-06-04 2003-07-04 Samsung Electronics Co., Ltd. Flash memory management method
US6697818B2 (en) 2001-06-14 2004-02-24 International Business Machines Corporation Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database
US6922765B2 (en) 2001-06-21 2005-07-26 International Business Machines Corporation Method of allocating physical memory space having pinned and non-pinned regions
US6772178B2 (en) 2001-07-27 2004-08-03 Sun Microsystems, Inc. Method and apparatus for managing remote data replication in a distributed computer system
US6742097B2 (en) 2001-07-30 2004-05-25 Rambus Inc. Consolidation of allocated memory to reduce power consumption
US6769050B1 (en) 2001-09-10 2004-07-27 Rambus Inc. Techniques for increasing bandwidth in port-per-module memory systems having mismatched memory modules
JP3822081B2 (en) 2001-09-28 2006-09-13 Tokyo Electron Device Ltd. Data writing apparatus, data writing control method, and program
JP4093741B2 (en) 2001-10-03 2008-06-04 Sharp Corporation External memory control device and data driven information processing device including the same
US6636942B2 (en) 2001-10-05 2003-10-21 International Business Machines Corporation Storage structure for storing formatted data on a random access medium
US6944757B2 (en) 2001-10-16 2005-09-13 Dell Products L.P. Method for allowing CD removal when booting embedded OS from a CD-ROM device
EP1304620A1 (en) 2001-10-17 2003-04-23 Texas Instruments Incorporated Cache with selective write allocation
US20030110357A1 (en) 2001-11-14 2003-06-12 Nguyen Phillip V. Weight based disk cache replacement method
US6687158B2 (en) 2001-12-21 2004-02-03 Fujitsu Limited Gapless programming for a NAND type flash memory
JP2003196032A (en) 2001-12-26 2003-07-11 Nec Corp Write cache control method of storage device, and storage device
US20030154314A1 (en) 2002-02-08 2003-08-14 I/O Integrity, Inc. Redirecting local disk traffic to network attached storage
US6782453B2 (en) 2002-02-12 2004-08-24 Hewlett-Packard Development Company, L.P. Storing data in memory
US6901499B2 (en) 2002-02-27 2005-05-31 Microsoft Corp. System and method for tracking data stored in a flash memory device
US6771536B2 (en) 2002-02-27 2004-08-03 Sandisk Corporation Operating techniques for reducing program and read disturbs of a non-volatile memory
JP4299555B2 (en) * 2002-03-15 2009-07-22 Fujitsu Limited Cache control program
US7136966B2 (en) 2002-03-18 2006-11-14 Lsi Logic Corporation Method and apparatus for using a solid state disk device as a storage controller cache
US20040044776A1 (en) 2002-03-22 2004-03-04 International Business Machines Corporation Peer to peer file sharing system using common protocols
US7065627B2 (en) 2002-03-25 2006-06-20 International Business Machines Corporation Method and system for providing an event driven image for a boot record
JP4229626B2 (en) 2002-03-26 2009-02-25 Fujitsu Limited File management system
US6820180B2 (en) 2002-04-04 2004-11-16 International Business Machines Corporation Apparatus and method of cascading backup logical volume mirrors
US6966006B2 (en) 2002-05-09 2005-11-15 International Business Machines Corporation Adaptive startup policy for accelerating multi-disk array spin-up
US6898609B2 (en) 2002-05-10 2005-05-24 Douglas W. Kerwin Database scattering system
US7062675B1 (en) 2002-06-25 2006-06-13 Emc Corporation Data storage cache system shutdown scheme
US7065527B2 (en) 2002-06-26 2006-06-20 Microsoft Corporation Systems and methods of optimizing metadata publishing system updates by alternating databases
US7082495B2 (en) 2002-06-27 2006-07-25 Microsoft Corporation Method and apparatus to reduce power consumption and improve read/write performance of hard disk drives using non-volatile memory
US7017037B2 (en) 2002-06-27 2006-03-21 Microsoft Corporation Apparatus and method to decrease boot time and hibernate awaken time of a computer system utilizing disk spin-up-time
US6941310B2 (en) 2002-07-17 2005-09-06 Oracle International Corp. System and method for caching data for a mobile application
AU2003250670A1 (en) 2002-07-23 2004-02-09 Research In Motion Limited Data store management system and method for wireless devices
JP2004054845A (en) 2002-07-24 2004-02-19 Sony Corp Data management device
JP4026753B2 (en) 2002-07-25 2007-12-26 Hitachi, Ltd. Semiconductor integrated circuit
NZ520786A (en) 2002-08-14 2005-06-24 Daniel James Oaeconnell Method of booting a computer system using a memory image of the post boot content of the system RAM memory
US7043610B2 (en) 2002-08-19 2006-05-09 Aristos Logic Corporation System and method for maintaining cache coherency without external controller intervention
FI20021620A (en) 2002-09-10 2004-03-11 Nokia Corp Memory structure, system and electronics device and method of a memory circuit
US20040078508A1 (en) * 2002-10-02 2004-04-22 Rivard William G. System and method for high performance data storage and retrieval
US6910106B2 (en) 2002-10-04 2005-06-21 Microsoft Corporation Methods and mechanisms for proactive memory management
US7284149B1 (en) 2002-10-16 2007-10-16 Ken Scott Fisher Intermittent connection protection for external computer devices
US20040088481A1 (en) 2002-11-04 2004-05-06 Garney John I. Using non-volatile memories for disk caching
US7035974B2 (en) 2002-11-06 2006-04-25 Synology Inc. RAID-5 disk having cache memory implemented using non-volatile RAM
US7036040B2 (en) 2002-11-26 2006-04-25 Microsoft Corporation Reliability of diskless network-bootable computers using non-volatile memory cache
US7502791B2 (en) 2002-11-26 2009-03-10 Norsync Technology A/S Database constraint enforcer
US7003620B2 (en) 2002-11-26 2006-02-21 M-Systems Flash Disk Pioneers Ltd. Appliance, including a flash memory, that is robust under power failure
US7039765B1 (en) 2002-12-19 2006-05-02 Hewlett-Packard Development Company, L.P. Techniques for cache memory management using read and write operations
US7010645B2 (en) 2002-12-27 2006-03-07 International Business Machines Corporation System and method for sequentially staging received data to a write cache in advance of storing the received data
KR100504696B1 (en) 2003-02-26 2005-08-03 Samsung Electronics Co., Ltd. NAND-type flash memory device having array of status cells for storing block erase/program information
JP2004272324A (en) 2003-03-05 2004-09-30 Nec Corp Disk array device
US7505958B2 (en) 2004-09-30 2009-03-17 International Business Machines Corporation Metadata management for a data abstraction model
US20050286855A1 (en) 2003-04-25 2005-12-29 Matsushita Electric Industrial Co., Ltd. Data recording apparatus
US7296043B2 (en) 2003-05-30 2007-11-13 Microsoft Corporation Memory file size adjustment
US7139933B2 (en) 2003-06-20 2006-11-21 International Business Machines Corporation Preserving cache data against cluster reboot
US7299379B2 (en) 2003-06-27 2007-11-20 Intel Corporation Maintaining cache integrity by recording write addresses in a log
JP4090400B2 (en) 2003-07-24 2008-05-28 Hitachi, Ltd. Storage system
US7068575B2 (en) 2003-07-30 2006-06-27 Microsoft Corporation High speed optical disc recording
US6977842B2 (en) 2003-09-16 2005-12-20 Micron Technology, Inc. Boosted substrate/tub programming for flash memories
US7366866B2 (en) 2003-10-30 2008-04-29 Hewlett-Packard Development Company, L.P. Block size allocation in copy operations
EP1538525A1 (en) 2003-12-04 2005-06-08 Texas Instruments Incorporated ECC computation simultaneously performed while reading or programming a flash memory
CN100437456C (en) 2003-12-09 2008-11-26 Matsushita Electric Industrial Co., Ltd. Electronic device, control method thereof, host device, and control method thereof
US7130962B2 (en) 2003-12-18 2006-10-31 Intel Corporation Writing cache lines on a disk drive
JP2005191413A (en) 2003-12-26 2005-07-14 Toshiba Corp Nonvolatile semiconductor memory
US20050251617A1 (en) 2004-05-07 2005-11-10 Sinclair Alan W Hybrid non-volatile memory system
US8458488B2 (en) 2003-12-31 2013-06-04 International Business Machines Corporation Method and system for diagnosing operation of tamper-resistant software
US20050145923A1 (en) 2004-01-06 2005-07-07 Chiou-Feng Chen NAND flash memory with enhanced program and erase performance, and fabrication process
US6993618B2 (en) 2004-01-15 2006-01-31 Super Talent Electronics, Inc. Dual-mode flash storage exchanger that transfers flash-card data to a removable USB flash key-drive with or without a PC host
US7127549B2 (en) 2004-02-04 2006-10-24 Sandisk Corporation Disk acceleration using first and second storage devices
KR100564613B1 (en) 2004-02-25 2006-03-29 Samsung Electronics Co., Ltd. Flash memory and method for operating firmware module dynamic loading of optical drive
US7421562B2 (en) 2004-03-01 2008-09-02 Sybase, Inc. Database system providing methodology for extended memory support
US20050204091A1 (en) 2004-03-11 2005-09-15 Kilbuck Kevin M. Non-volatile memory with synchronous DRAM interface
US7143120B2 (en) 2004-05-03 2006-11-28 Microsoft Corporation Systems and methods for automated maintenance and repair of database and file systems
US7644239B2 (en) 2004-05-03 2010-01-05 Microsoft Corporation Non-volatile memory cache performance improvement
US7366740B2 (en) 2004-05-03 2008-04-29 Microsoft Corporation Systems and methods for automatic maintenance and repair of enitites in a data model
JP4392601B2 (en) 2004-05-07 2010-01-06 Panasonic Corporation Data access device and recording medium
US7231497B2 (en) 2004-06-15 2007-06-12 Intel Corporation Merging write-back and write-through cache policies
US20060010293A1 (en) * 2004-07-09 2006-01-12 Schnapp Michael G Cache for file system used in storage system
CN1266229C (en) 2004-08-10 2006-07-26 Shantou Longhua Pearl Lustre Pigments Co., Ltd. Pigment of multi-gradation discolour at different directions and production process thereof
US7171532B2 (en) 2004-08-30 2007-01-30 Hitachi, Ltd. Method and system for data lifecycle management in an external storage linkage environment
JP4192129B2 (en) 2004-09-13 2008-12-03 Toshiba Corporation Memory management device
US20060075185A1 (en) 2004-10-06 2006-04-06 Dell Products L.P. Method for caching data and power conservation in an information handling system
US7657756B2 (en) * 2004-10-08 2010-02-02 International Business Machines Corporation Secure memory caching structures for data, integrity and version values
US7490197B2 (en) 2004-10-21 2009-02-10 Microsoft Corporation Using external memory devices to improve system performance
JP4956922B2 (en) 2004-10-27 2012-06-20 Sony Corporation Storage device
US20060106889A1 (en) 2004-11-12 2006-05-18 Mannby Claes-Fredrik U Method, system, and program for managing revisions to a file
JP4689247B2 (en) 2004-11-19 2011-05-25 Canon Inc. Camera and control method thereof
KR100643287B1 (en) 2004-11-19 2006-11-10 Samsung Electronics Co., Ltd. Data processing device and method for flash memory
US20060136664A1 (en) 2004-12-16 2006-06-22 Trika Sanjeev N Method, apparatus and system for disk caching in a dual boot environment
US7480654B2 (en) 2004-12-20 2009-01-20 International Business Machines Corporation Achieving cache consistency while allowing concurrent changes to metadata
US7480761B2 (en) 2005-01-10 2009-01-20 Microsoft Corporation System and methods for an overlay disk and cache using portable flash memory
KR100670010B1 (en) 2005-02-03 2007-01-19 Samsung Electronics Co., Ltd. The hybrid broadcast encryption method
US7620773B2 (en) 2005-04-15 2009-11-17 Microsoft Corporation In-line non volatile memory disk read cache and write buffer
US8812781B2 (en) 2005-04-19 2014-08-19 Hewlett-Packard Development Company, L.P. External state cache for computer processor
US8452929B2 (en) 2005-04-21 2013-05-28 Violin Memory Inc. Method and system for storage of data in non-volatile media
US7516277B2 (en) 2005-04-28 2009-04-07 Sap Ag Cache monitoring using shared memory
US20060277359A1 (en) 2005-06-06 2006-12-07 Faber Robert W Blank memory location detection mechanism
US7523256B2 (en) 2005-06-15 2009-04-21 Bea Systems, Inc. System and method for scheduling disk writes in an application server of transactional environment
JP4833595B2 (en) 2005-06-30 2011-12-07 Daiwa House Industry Co., Ltd. Shoe box with deodorizing function
US7640398B2 (en) 2005-07-11 2009-12-29 Atmel Corporation High-speed interface for high-density flash with two levels of pipelined cache
US7634516B2 (en) 2005-08-17 2009-12-15 International Business Machines Corporation Maintaining an aggregate including active files in a storage pool in a random access medium
US7409524B2 (en) 2005-08-17 2008-08-05 Hewlett-Packard Development Company, L.P. System and method for responding to TLB misses
US7395401B2 (en) 2005-09-30 2008-07-01 Sigmatel, Inc. System and methods for accessing solid-state memory devices
US7409537B2 (en) 2005-10-06 2008-08-05 Microsoft Corporation Fast booting an operating system from an off state
US8914557B2 (en) 2005-12-16 2014-12-16 Microsoft Corporation Optimizing write and wear performance for a memory
US20070150966A1 (en) 2005-12-22 2007-06-28 Kirschner Wesley A Method and apparatus for maintaining a secure software boundary
US7451353B2 (en) 2005-12-23 2008-11-11 Intel Corporation Cache disassociation detection
US7627713B2 (en) 2005-12-29 2009-12-01 Intel Corporation Method and apparatus to maintain data integrity in disk cache memory during and after periods of cache inaccessibility
US20070207800A1 (en) 2006-02-17 2007-09-06 Daley Robert C Diagnostics And Monitoring Services In A Mobile Network For A Mobile Device
JP2007233896A (en) 2006-03-03 2007-09-13 Hitachi Ltd Storage device and control method thereof
ES2498096T3 (en) 2006-03-31 2014-09-24 Mosaid Technologies Incorporated Flash memory system control scheme
US7558913B2 (en) 2006-06-20 2009-07-07 Microsoft Corporation Atomic commit of cache transfer with staging area
US7512739B2 (en) * 2006-07-05 2009-03-31 International Business Machines Corporation Updating a node-based cache LRU tree
US7818701B1 (en) 2006-12-22 2010-10-19 Cypress Semiconductor Corporation Memory controller with variable zone size
US20080172519A1 (en) 2007-01-11 2008-07-17 Sandisk Il Ltd. Methods For Supporting Readydrive And Readyboost Accelerators In A Single Flash-Memory Storage Device
TWI499909B (en) * 2007-01-26 2015-09-11 Cheriton David Hierarchical immutable content-addressable memory processor
US7945734B2 (en) 2007-08-10 2011-05-17 Eastman Kodak Company Removable storage device with code to allow change detection
US8190652B2 (en) 2007-12-06 2012-05-29 Intel Corporation Achieving coherence between dynamically optimized code and original code
US8631203B2 (en) 2007-12-10 2014-01-14 Microsoft Corporation Management of external memory functioning as virtual cache
US8082384B2 (en) 2008-03-26 2011-12-20 Microsoft Corporation Booting an electronic device using flash memory and a limited function memory controller
US8275970B2 (en) 2008-05-15 2012-09-25 Microsoft Corp. Optimizing write traffic to a disk
US9032151B2 (en) 2008-09-15 2015-05-12 Microsoft Technology Licensing, Llc Method and system for ensuring reliability of cache data and metadata subsequent to a reboot
US8032707B2 (en) 2008-09-15 2011-10-04 Microsoft Corporation Managing cache data and metadata
US7953774B2 (en) 2008-09-19 2011-05-31 Microsoft Corporation Aggregation of write traffic to a data store
ES2928456T3 (en) 2014-06-02 2022-11-18 Tr Holdings Inc covered container

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6128627A (en) 1998-04-15 2000-10-03 Inktomi Corporation Consistent data storage in an object cache
US20070061511A1 (en) 2005-09-15 2007-03-15 Faber Robert W Distributed and packed metadata structure for disk cache

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
OHN ET AL.: "Path conscious caching of B+ tree indexes in a shared disks cluster", Journal of Parallel and Distributed Computing, Elsevier, vol. 67, 27 January 2007, pages 266-301
See also references of EP2353081A4

Also Published As

Publication number Publication date
CN102216899A (en) 2011-10-12
CN102216899B (en) 2016-12-07
US8135914B2 (en) 2012-03-13
EP2353081A4 (en) 2012-06-27
TWI471726B (en) 2015-02-01
JP2012508932A (en) 2012-04-12
US8032707B2 (en) 2011-10-04
US8489815B2 (en) 2013-07-16
ES2663701T3 (en) 2018-04-16
WO2010056571A3 (en) 2010-07-29
TW201019110A (en) 2010-05-16
US20120173824A1 (en) 2012-07-05
EP2353081A2 (en) 2011-08-10
EP2353081B1 (en) 2017-12-27
US20100070747A1 (en) 2010-03-18
US20110314202A1 (en) 2011-12-22

Similar Documents

Publication Publication Date Title
US10387313B2 (en) Method and system for ensuring reliability of cache data and metadata subsequent to a reboot
US8032707B2 (en) Managing cache data and metadata
US7882386B1 (en) System and method for recovering a logical volume during failover or reboot of a file server in a data storage environment
TWI546818B (en) Green nand device (gnd) driver with dram data persistence for enhanced flash endurance and performance
US7769945B2 (en) Method and system for facilitating fast wake-up of a flash memory system
US20150039837A1 (en) System and method for tiered caching and storage allocation
US20090327608A1 (en) Accelerated resume from hibernation in a cached disk system
US20110167049A1 (en) File system management techniques for computing environments and systems
TW201017405A (en) Improved hybrid drive
KR101651204B1 (en) Apparatus and Method for synchronization of snapshot image
US20070043968A1 (en) Disk array rebuild disruption resumption handling method and system
WO2008087634A1 (en) A method and system for facilitating fast wake-up of a flash memory system
CN114880293A (en) Software starting acceleration method and device and computing equipment
CN113722131A (en) Method and system for facilitating fast crash recovery in a storage device
JP2021022213A (en) Storage system, storage control device and program
CN107704198B (en) Information processing method and electronic equipment
McObject Database Persistence, Without the Performance Penalty

Legal Events

Date Code Title Description
WWE WIPO information: entry into national phase
Ref document number: 200980145878.1
Country of ref document: CN

121 EP: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 09826570
Country of ref document: EP
Kind code of ref document: A2

WWE WIPO information: entry into national phase
Ref document number: 2009826570
Country of ref document: EP

WWE WIPO information: entry into national phase
Ref document number: 878/MUMNP/2011
Country of ref document: IN

WWE WIPO information: entry into national phase
Ref document number: 2011536387
Country of ref document: JP

NENP Non-entry into the national phase
Ref country code: DE