CN102760101A - SSD-based (Solid State Disk) cache management method and system - Google Patents


Info

Publication number
CN102760101A
Authority
CN
China
Prior art keywords
data
ssd
linked list
cache
lru
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101603505A
Other languages
Chinese (zh)
Other versions
CN102760101B (en)
Inventor
车玉坤
熊劲
马久跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201210160350.5A priority Critical patent/CN102760101B/en
Publication of CN102760101A publication Critical patent/CN102760101A/en
Application granted granted Critical
Publication of CN102760101B publication Critical patent/CN102760101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an SSD-based (Solid State Disk) cache management method and system. The SSD-based cache management method comprises the following steps: step 1, on receiving a read/write request, checking whether the data hits in the DRAM (Dynamic Random Access Memory) cache by searching a hash table to judge whether the data exists; if it exists, reading the data from the DRAM cache and completing the request; if it is absent from the DRAM cache, reading the data from the HDD (Hard Disk Drive) into the DRAM cache and proceeding to step 2; step 2, screening the data with a two-level LRU (Least Recently Used) linked list and a ghost buffer to identify how hot the data is; and step 3, adaptively computing the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopting page-cluster granularity: the last C pages at the second-level LRU end are evicted from the DRAM cache as a whole and written to the SSD in one large-granularity write, where a page cluster is C pages and C is an integral multiple of the number of pages per SSD block.

Description

An SSD-based cache management method and system
Technical field
The present invention relates to the storage organization and policy of caches, and in particular to an SSD-based cache management method and system.
Background art
With the progress of modern society, ever more data needs to be processed, and the volume of data is growing explosively. This poses many problems for traditional storage systems. A traditional storage system generally consists of main memory (DRAM) and hard disks (HDD), with the DRAM acting as the cache of the HDD. Such a system faces the following challenges:
First, the total amount of data is growing rapidly. A joint report by IDC and EMC points out that the data of today's society shows an explosive growth trend: before 2005 the global data volume was only tens of EB (1 EB = 10^18 bytes), by 2010 it had reached thousands of EB, and by 2015 it is expected to approach 8000 EB, i.e. 8 ZB (1 ZB = 10^21 bytes). Facing such a data scale, the traditional DRAM+HDD storage architecture will send ever more I/O requests to the disk, so performance suffers as request response times lengthen.
Second, the I/O gap is widening, and the HDD is gradually becoming the performance bottleneck. Reports show that CPU performance grows at about 60% per year, doubling roughly every 18 months, while HDD performance grows by less than 10% per year (around 8%), because it is limited by the physical structure of the disk: the seek speed of the disk arm and the rotational speed of the platters double only about every 10 years. Meanwhile the latency gap between DRAM and HDD is also growing. All of this makes the HDD the I/O bottleneck: if requests are frequently sent to the disk, system performance is inevitably and seriously degraded.
Third, the performance requirements of data processing keep rising. In recent years high-performance computing has gradually shifted from CPU-intensive to I/O-intensive, so the I/O efficiency of the system has a major impact on performance, which places very high demands on the I/O operations of the storage system. The rapid development of Internet services likewise raises the I/O performance requirements of mass storage systems: Internet applications such as search engines, e-commerce and online social networks must serve the requests of a large number of users simultaneously, and the response time experienced by the user must stay within an acceptable range (sub-second). Such application characteristics require the underlying data storage system to provide good I/O performance, which the traditional DRAM+HDD architecture finds increasingly hard to deliver.
The SSD is a new storage medium that has emerged in recent years, and it is likely to help solve the above challenges. Both the performance and the price of the SSD lie between DRAM and HDD. Adding it to the caching system as a second-level cache is likely to improve system performance: its capacity is larger than DRAM's, while its performance is several orders of magnitude better than the HDD's, so it can be expected to effectively reduce the number of requests sent to the HDD. However, the SSD has many unique characteristics, so directly inserting it between DRAM and HDD as a cache raises many problems and prevents its performance from being fully exploited. These characteristics are as follows:
First, the read/write performance of the SSD is asymmetric: reads are far faster than writes, and performance is even worse for random operations of small granularity. Yet in a traditional caching system, data entering the SSD from the HDD, or evicted from the DRAM into the SSD, always arrives as small-granularity random writes, which seriously hurts SSD performance.
Second, the SSD has a limited lifetime, bounded by the number of erase cycles, and the volume of write operations directly determines the number of erasures. Yet under the traditional cache notion, whatever data appears is admitted into the cache, even data accessed only once; this causes unnecessary erase operations and shortens the SSD's life. Unnecessary write operations should therefore be reduced as much as possible.
Third, the capacity of the SSD is limited: although larger than DRAM's, it is still small compared with the HDD and with the amount of data to be stored. Its space should therefore be used to the full by making it store frequently accessed data.
From the above it can be seen that designing a caching system and cache management policy that improve system performance while making the most of the SSD's performance and space is a challenge. Traditional cache management schemes are not well suited to the SSD; they fall mainly into the following categories:
The first category is cache management algorithms based on temporal locality, typified by the LRU (Least Recently Used) algorithm. This algorithm, however, cannot assess how hot the data is: even data accessed only once travels from the head of the list to the tail before being evicted.
The second category is cache management algorithms based on access frequency, typified by the LFU (Least Frequently Used) algorithm. This algorithm does not take time into account: data that was accessed often in the past but has not been accessed for a long time still remains in the cache because of its high accumulated frequency, which causes cache pollution.
The third category considers both factors together, typified by the LIRS (Low Inter-reference Recency Set) algorithm. Its drawback is that it is relatively complex to implement and brings a certain overhead.
Whichever of the above is used, none of them resolves the problems described, because whether data enters the SSD or is evicted from it, small-granularity random writes to the SSD are inevitably generated; and if data enters indiscriminately, the cache pollution problem shows itself even more severely on the SSD. See Fig. 1 for details.
A technique is therefore needed that starts from controlling which data enters the SSD and from optimizing how data in the SSD is replaced, so as to overcome the above problems and exploit the SSD's performance to the maximum.
Summary of the invention
To solve the above problems and address the performance characteristics of the SSD, the object of the present invention is to overcome the prior-art problems of excessive small-granularity random writes to the SSD and serious cache pollution. A new cache architecture is proposed, consisting of two levels, DRAM and SSD: the DRAM is the first-level cache and the SSD serves as the second-level cache.
The present invention discloses an SSD-based cache management method, comprising:
Step 1: on receiving a read/write request, check whether the data hits in the DRAM cache by searching the hash table to judge whether said data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD into the DRAM cache and then execute step 2;
Step 2: screen the data with a two-level LRU linked list and a ghost buffer, identifying how hot the data is;
Step 3: adaptively compute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopt page-cluster granularity: the last C pages at the second-level LRU end are evicted from the DRAM cache together as one aggregate and written to the SSD at large granularity, where a page cluster is assumed to be C pages and C is an integral multiple of the number of pages per SSD block.
In the described SSD-based cache management method, said step 1 comprises:
Step 21: if the data exists, i.e. it hits in the DRAM cache, the data in the DRAM cache can be returned directly and the request is complete;
Step 22: if it is absent from the hash table, the hash table of the SSD must be queried next to judge whether the data is stored in the SSD;
Step 23: if it hits in the SSD, the data is read out from the SSD and the request is complete.
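The lookup in steps 1 and 21-23 can be sketched as follows. This is an illustrative Python model, not the patented implementation; all names (`TwoLevelCache`, `read_page`) are assumptions, and Python dicts stand in for the hash tables. Note that on an HDD read the data is cached in DRAM only, matching the exclusive two-level design described later.

```python
# Hypothetical sketch: DRAM hash table first, then SSD hash table,
# then fall back to the HDD on a double miss.
class TwoLevelCache:
    def __init__(self):
        self.dram = {}   # page id -> data cached in DRAM (first level)
        self.ssd = {}    # page id -> data cached on the SSD (second level)
        self.hdd = {}    # stand-in for the backing hard disk

    def read_page(self, page_id):
        if page_id in self.dram:          # hit in the DRAM hash table
            return self.dram[page_id], "dram"
        if page_id in self.ssd:           # hit in the SSD hash table
            return self.ssd[page_id], "ssd"
        data = self.hdd[page_id]          # double miss: read from the HDD
        self.dram[page_id] = data         # cache in DRAM only (exclusive)
        return data, "hdd"
```

A first read of a page is served by the HDD and cached in DRAM; a repeat read then hits in DRAM.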
The described SSD-based cache management method comprises:
Data read from the HDD is copied directly into the DRAM cache, and only the portion of the data that passes the DRAM screening is evicted into the SSD cache; the contents of the DRAM cache and the SSD do not overlap, so the cache space is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management method, said step 2 comprises:
Step 41: when data enters the DRAM cache for the first time, it is first placed at the MRU end of the first-level LRU list; both levels of the LRU list reside in the DRAM cache;
Step 42: the size of the first-level LRU list is set to a proportion p1 of the whole DRAM cache size, with 0 < p1 < 1;
Step 43: when the first-level list is full, replacement follows the LRU policy, and the information about the evicted page is saved in the ghost buffer so that its access history is preserved; such records describe data whose access heat is not high;
Step 44: when data in the first-level LRU list is hit a second time, it is promoted into the second-level list;
Step 45: when the second-level LRU list is full, the contents of its second-level LRU list are evicted to the SSD; these are the data with higher access heat.
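Steps 41-45 can be sketched as a simplified Python model. The class and attribute names (`TwoLevelLRU`, `to_ssd`, etc.) are invented, the proportion p1 is replaced by fixed capacities for brevity, and `OrderedDict` stands in for the linked lists.

```python
from collections import OrderedDict

# Illustrative sketch: new pages enter the first-level (L1) LRU list;
# a second hit promotes a page to the second-level (L2) list; an L1
# eviction records only the page's access history in the ghost buffer,
# and a full L2 demotes its LRU-end page toward the SSD cache.
class TwoLevelLRU:
    def __init__(self, l1_cap, l2_cap):
        self.l1 = OrderedDict()          # first-level LRU list
        self.l2 = OrderedDict()          # second-level (hot) LRU list
        self.ghost = OrderedDict()       # access records of evicted L1 pages
        self.l1_cap, self.l2_cap = l1_cap, l2_cap
        self.to_ssd = []                 # pages demoted toward the SSD

    def access(self, page):
        if page in self.l2:              # already hot: refresh recency
            self.l2.move_to_end(page)
        elif page in self.l1:            # second hit: promote to L2
            del self.l1[page]
            self.l2[page] = True
            if len(self.l2) > self.l2_cap:
                victim, _ = self.l2.popitem(last=False)
                self.to_ssd.append(victim)   # hot data moves to the SSD
        else:                            # first touch: insert at L1 MRU end
            self.l1[page] = True
            if len(self.l1) > self.l1_cap:
                victim, _ = self.l1.popitem(last=False)
                self.ghost[victim] = True    # keep only the access record
```

A page touched only once thus never reaches the SSD: it drifts down L1 and leaves only a ghost-buffer record behind.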
In the described SSD-based cache management method, the adaptive-change calculation in said step 3 comprises:
Step 51: a corresponding shadow buffer is added for each of the two LRU lists; each stores the access information of the pages recently evicted from the corresponding list, and together the two shadow buffers store as many access-information records as the DRAM cache holds;
Step 52: the two shadow buffers dynamically adjust the sizes of the two-level LRU lists through a target value TargetSize, which is the target size of the first-level LRU list; its initial value is set to half the DRAM cache size, and it changes with the subsequent load.
In the described SSD-based cache management method, the change procedure comprises:
Step 61: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level shadow buffer; likewise, data evicted from the second-level LRU list is saved in the second-level shadow buffer;
Step 62: when data hits in the first-level shadow buffer, the first-level LRU list needs to grow: TargetSize++;
Step 63: when data hits in the second-level shadow buffer, the second-level LRU list needs to grow: TargetSize--.
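A minimal sketch of steps 61-63 follows. The names (`AdaptiveTarget`, `on_miss`) are hypothetical, Python sets stand in for the shadow buffers, and clamping TargetSize to the cache bounds is an added assumption the text does not spell out.

```python
# Sketch of the adaptive length calculation: a miss that hits in the
# L1 shadow buffer suggests L1 was too short (TargetSize++); a miss
# that hits in the L2 shadow buffer suggests L2 was too short
# (TargetSize--, since L2's share is the remainder of the cache).
class AdaptiveTarget:
    def __init__(self, cache_pages):
        self.target = cache_pages // 2   # initial L1 target: half of DRAM
        self.cache_pages = cache_pages
        self.shadow_l1 = set()           # recently evicted L1 page ids
        self.shadow_l2 = set()           # recently evicted L2 page ids

    def on_miss(self, page):
        if page in self.shadow_l1:       # L1 was too small: grow it
            self.target = min(self.target + 1, self.cache_pages)
        elif page in self.shadow_l2:     # L2 was too small: shrink L1
            self.target = max(self.target - 1, 0)
```

One integer thus encodes the L1:L2 split, and each shadow-buffer hit nudges it by a single page.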
In the described SSD-based cache management method, said step 3 further comprises:
Step 71: after the screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the relatively hot data, with heat decreasing from the MRU end to the LRU end;
Step 72: when the second-level LRU list is full, the page at its LRU end is evicted into a staging buffer;
Step 73: each eviction from the second-level list adds one page to the staging buffer; after a while the buffer reaches 64 pages, the size of a cluster, and this cluster is then in the ready state;
Step 74: when the next evicted page enters the staging buffer, the buffer is full, so the cluster is flushed to the SSD and the buffer is emptied; steps 71-74 then repeat.
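Steps 71-74 might be sketched as below. This simplified model flushes as soon as a cluster of C pages has accumulated, whereas the text flushes when the next page arrives after the cluster is ready; C = 64 follows the example above, and all names (`ClusterWriter`, `evict`) are assumptions.

```python
# Sketch of page-cluster aggregation: pages evicted from the L2 LRU
# end accumulate in a staging buffer; once C pages (one cluster, an
# integral multiple of the pages per SSD block) have collected, the
# whole cluster goes to the SSD in a single large-granularity write.
CLUSTER_PAGES = 64   # C: assumed cluster size in pages

class ClusterWriter:
    def __init__(self, ssd_writes):
        self.staging = []                # pages waiting to form a cluster
        self.ssd_writes = ssd_writes     # record of clusters written to SSD

    def evict(self, page):
        self.staging.append(page)
        if len(self.staging) == CLUSTER_PAGES:
            self.ssd_writes.append(tuple(self.staging))  # one large write
            self.staging.clear()
```

The SSD thus sees one sequential 64-page write instead of 64 small random writes.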
The present invention also discloses an SSD-based cache management system, comprising:
a DRAM-cache checking module, used to receive a read/write request and check whether the data hits in the DRAM cache by searching the hash table to judge whether said data exists; if it exists, the data is read from the DRAM cache and the request is returned; if it does not exist in the DRAM cache, the data is read from the HDD into the DRAM cache;
a data screening module, used to screen the data with a two-level LRU linked list and a ghost buffer, identifying how hot the data is;
an adaptive-change and aggregation module, used to adaptively compute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, page-cluster granularity is adopted: the last C pages at the second-level LRU end are evicted from the DRAM cache together as one aggregate and written to the SSD at large granularity, where a page cluster is assumed to be C pages and C is an integral multiple of the number of pages per SSD block.
In the described SSD-based cache management system, said DRAM-cache checking module comprises:
a data-hit module: if the data exists, i.e. it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request is complete;
an SSD query module: if the data is absent from the hash table, the hash table of the SSD is queried next to judge whether the data is stored in the SSD;
an SSD read module: if the data hits in the SSD, it is read out from the SSD and the request is complete.
The described SSD-based cache management system comprises:
Data read from the HDD is copied directly into the DRAM, and only the portion of the data that passes the DRAM screening is evicted into the SSD cache; the contents of the DRAM cache and the SSD do not overlap, so the cache space is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management system, said data screening module comprises:
an MRU placement module, used to place data entering the DRAM cache for the first time at the MRU end of the first-level LRU list; both levels of the LRU list reside in the DRAM cache;
a ratio setting module, used to set the size of the first-level LRU list to a proportion p1 of the whole DRAM cache size, with 0 < p1 < 1;
a replacement module, used to replace pages by the LRU policy when the first-level list is full and to save the information about the evicted page in the ghost buffer so that its access history is preserved; such records describe data whose access heat is not high;
a second-hit module, used to promote data in the first-level LRU list into the second-level list when it is hit a second time;
a heat module, used to evict the contents of the second-level LRU list to the SSD when that list is full; these are the data with higher access heat.
In the described SSD-based cache management system, said adaptive-change and aggregation module comprises:
an adaptive-change module, used to add a corresponding shadow buffer for each of the two LRU lists, each storing the access information of the pages recently evicted from the corresponding list, the two shadow buffers together storing as many access-information records as the DRAM cache holds; the two shadow buffers dynamically adjust the sizes of the two-level LRU lists through a target value TargetSize, which is the target size of the first-level LRU list; its initial value is set to half the DRAM cache size, and it changes with the subsequent load.
In the described SSD-based cache management system, the adaptive-change module further comprises:
a history saving module, used to keep the history information in the first-level shadow buffer after a data page is evicted from the first-level LRU list, and likewise to save data evicted from the second-level LRU list in the second-level shadow buffer;
a first-level growth module, used to grow the first-level LRU list when data hits in the first-level shadow buffer: TargetSize++;
a second-level growth module, used to grow the second-level LRU list when data hits in the second-level shadow buffer: TargetSize--.
In the described SSD-based cache management system, said adaptive-change and aggregation module further comprises:
an aggregation module, used as follows: after the screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the relatively hot data, with heat decreasing from the MRU end to the LRU end; when the second-level LRU list is full, the page at its LRU end is evicted into a staging buffer; each eviction from the second-level list adds one page to the staging buffer; after a while the buffer reaches 64 pages, the size of a cluster, and the cluster is then in the ready state; when the next evicted page enters the staging buffer, the buffer is full, so the cluster is flushed to the SSD and the buffer is emptied.
The beneficial effects of the present invention are:
1. In the present invention, the flow of data differs from that of a traditional cache. Data read from the HDD does not enter the SSD directly; it is first screened in the DRAM and only then selectively enters the SSD. The technical effect is that data enters the SSD only after the DRAM's screening and filtering, i.e. the 'heat' of the data is first observed in the DRAM. This greatly reduces the write operations entering the SSD and at the same time reduces SSD cache pollution.
2. For the above cache structure, the present invention designs a data screening method. The technical effect is that the data entering the SSD in this way is data that is accessed frequently, so the SSD cache space can be used to the full; at the same time the contents of the DRAM and SSD caches differ, which makes the space utilization of the two-level cache more effective.
3. Against the problem that data traditionally enters the SSD in a small-granularity random-write I/O pattern, the present invention designs a data aggregation technique. Several pages are first aggregated in the DRAM into a large-granularity page cluster, and data is then written to the SSD at cluster granularity; replacement also proceeds at cluster granularity. The technical effect is to avoid the small-granularity random writes that would otherwise land on the SSD. Correspondingly, when the cache is full, replacement proceeds at large granularity, evicting a number of contiguous pages at once; this matches the way data enters and so improves the performance of the caching system.
In summary, by adopting the three key techniques above, the present invention can optimize the SSD caching system to a considerable degree and thus exploit the SSD's performance more effectively. Regarding system response time, since requests sent to the HDD are reduced, system performance increases; second, cache pollution on the SSD is avoided; finally, small-granularity random writes to the SSD are effectively reduced, so the life of the SSD is also prolonged.
Description of drawings
Fig. 1 shows the traditional cache organization;
Fig. 2 is the data-flow diagram of the new caching system of the present invention;
Fig. 3 shows the two-level LRU lists managing the DRAM cache in the present invention;
Fig. 4 shows the dynamic length-change algorithm of the two-level LRU lists of the present invention;
Fig. 5 shows the aggregation technique of the present invention for entering the SSD;
Fig. 6 is a schematic diagram of the SSD-based cache management method of the present invention;
Fig. 7 is a schematic diagram of the SSD-based cache management system of the present invention.
Embodiments
Embodiments of the present invention are given below and described in detail with reference to the accompanying drawings.
New caching system
The new caching system consists of DRAM, SSD and HDD; see Fig. 2. The SSD sits between the DRAM and the HDD as a cache of the HDD; data is persistently stored in the HDD, the DRAM serves as the first-level cache and the SSD as the second-level cache, so the DRAM and SSD together form the two-level cache of the HDD. The DRAM records the contents it itself caches: in order to locate quickly whether a given page is in the DRAM cache, the cached pages are managed with a hash table. The DRAM must also record the contents of the SSD: the relevant information about the data in the SSD cache is recorded in the DRAM, likewise with a hash table. The following information therefore needs to be stored in the DRAM:
the LRU (Least Recently Used) list information: this list has a head pointer and a tail pointer and is used for operations such as insertion and replacement;
the DRAM content hash table, which stores the hashes of the DRAM contents and is used to look up whether data hits in the DRAM cache;
the LRU list of SSD pages: like the LRU list in the DRAM, it has a head pointer and a tail pointer and is used for operations such as insertion and replacement;
the SSD content hash table, which stores the hashes of the SSD contents and is used to look up whether data hits in the SSD.
The processing of a request is as follows. Whenever the user CPU/cache issues a read request, the first-level DRAM cache is checked first: the hash table is searched to see whether the page exists in it. If it exists, i.e. it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request is complete. If it is absent from that hash table, the hash table of the SSD is queried next to see whether the data is stored in the SSD. If it hits in the SSD, the data is read out from the SSD and the request is complete. Note that the data is not copied into the first-level DRAM cache: the contents of the two levels do not overlap, i.e. they are exclusive, and the benefit is that the cache space is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read. When that request completes, the data is saved in the DRAM cache, i.e. in the first-level cache; this is based on temporal locality, since the data may well be accessed again later.
Note the replacement operation when the cache is full. Since a request may be a write operation, if the write hits in the DRAM cache the page must be marked 'dirty'; and if that page is later selected for replacement, its contents must first be written back to the HDD before the page can be removed from the DRAM cache.
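The dirty-page rule above can be sketched as follows; this is a minimal illustrative model with invented names (`WriteBackCache`, `evict`) that omits the hash tables and LRU lists.

```python
# Sketch of write-back on eviction: a write hit marks the DRAM page
# dirty, and a dirty page chosen for replacement must be written back
# to the HDD before it can be removed from the DRAM cache.
class WriteBackCache:
    def __init__(self):
        self.pages = {}   # page id -> (data, dirty flag)
        self.hdd = {}     # stand-in for the backing disk

    def write(self, page_id, data):
        self.pages[page_id] = (data, True)      # mark the page dirty

    def evict(self, page_id):
        data, dirty = self.pages.pop(page_id)
        if dirty:                               # write back before dropping
            self.hdd[page_id] = data
```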
The data screening policy
To reduce cache pollution and to reduce small-granularity random writes to the SSD, a data screening policy is necessary. In a traditional caching system, any data brought in from the HDD must be placed in the cache; the present invention instead screens data in the DRAM using a two-level LRU list plus a ghost buffer. See Fig. 3 for details. If a single LRU list were used for management, then, as mentioned above, the heat of the data could not be assessed and the cache would be seriously polluted. The two-level LRU list effectively solves this problem; the concrete procedure is as follows:
When data enters the DRAM cache for the first time, it is placed at the MRU (Most Recently Used) end of the first-level LRU list; both levels of the LRU list reside in the DRAM cache. The size of the first-level list is set to a certain proportion of the whole DRAM cache size (denoted p1, 0 < p1 < 1), precisely in order to reduce cache pollution. When the list is full, replacement follows the LRU policy and the information about the evicted page is saved in the ghost buffer. Note that the ghost buffer preserves only the access information of the page, not its contents, so one page (4 KB) needs only 16 bytes. The ghost buffer can hold access information for the combined capacity of the DRAM cache and the SSD cache while occupying little memory: for a 1 GB SSD plus a 128 MB DRAM cache, only about 4 MB of space is needed to store the access history. When data in the first-level LRU list is hit a second time, it is promoted into the second-level list. When the second-level LRU list is full, the contents at its LRU end are evicted to the SSD; these are data with higher access heat, and their access records are preserved as well. Viewed as a whole, the SSD and this second-level LRU list form one large LRU list.
When a requested page misses in the cache, this policy requires an additional query of the ghost buffer: if access information is found in the ghost buffer, the data is considered relatively hot, so after it is read from disk it is placed directly into the second-level LRU list. The following information therefore also needs to be kept in the DRAM cache:
the access history in the ghost buffer, managed in LRU fashion; when it is full, the oldest information is replaced;
the hash table of the ghost buffer, used to query the access history records.
With the above algorithm running in the DRAM to screen data, the data entering the SSD is comparatively 'hot'. However, when the DRAM is full, a victim page must be selected for replacement. Should the victim come from the first-level LRU list or from the second-level LRU list? If victims were always drawn from only one of them, the page count of the other list would gradually shrink. And if each list were given a fixed size, then, since the two lists encode the two parameters Recency and Frequency respectively, the algorithm could not respond to changes in the load. The two parameters therefore need to adapt to the load.
The adaptive length-change algorithm of the two-level lists:
In other words, a parameter must be designed to settle the proportional allocation of the list sizes L1:L2. To this end a dynamic-change algorithm is designed, as shown in Fig. 4:
The shadow buffers in Fig. 4 are in fact functionally identical to the ghost buffer: they store only the access information of recently accessed pages, not the data itself, so their overhead is very small. To distinguish them from the ghost buffer above, they are called shadow buffers. Here, a corresponding shadow buffer is added for each level of the LRU list, each storing the access information of the pages recently evicted from the corresponding list. Together the two shadow buffers store as many access-information records as the DRAM cache holds.
The size that we adopt these two Shadowbuffer to come dynamic change two-stage LRU chained list; We set a desired value TargetSize for this reason; This value is the desired value of first order LRU chained list, that is to say, the target sizes of one-level LRU chained list is TargetSize.Initial value is made as the half the of DRAM size.Thereafter the dynamic change along with the variation of load.Its change procedure is following:
When a data page is evicted from the first-level LRU list, its history information is kept in the L1 ShadowBuffer; likewise, data evicted from the second-level LRU list is saved in the L2 Shadow buffer.
When data hits in the L1 Shadowbuffer, we conclude that the first-level LRU list should grow: TargetSize++;
When data hits in the L2 Shadowbuffer, we conclude that the second-level LRU list should grow: TargetSize--;
The reason for adjusting in this way is that each ShadowBuffer stores the pages evicted from its own list. If a page request hits in the L1 Shadow buffer, the request missed in the DRAM cache but hit in L1's history, which means the L1 list is too short: had it been long enough, the current request would not have missed. We therefore make TargetSize larger. By the same reasoning, a hit in the L2 Shadow buffer means L2 should grow, i.e. TargetSize shrinks. This completes the dynamic adjustment of the two-level LRU lists.
When the DRAM cache is full, the victim page is selected from L1 if the current size of the first-level list exceeds TargetSize; otherwise it is selected from L2.
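The TargetSize adaptation and victim selection just described can be condensed into a short Python sketch. The class and method names are illustrative (the patent gives no code); the logic is: a shadow-buffer hit at level 1 grows the L1 target, a hit at level 2 shrinks it, and the victim list is chosen by comparing L1's current length against the target.

```python
class AdaptiveSplit:
    """Adapts the L1:L2 split of a DRAM cache via shadow-buffer hits."""

    def __init__(self, dram_pages):
        self.dram_pages = dram_pages
        # Initial value: half of the DRAM size, as in the patent.
        self.target_size = dram_pages // 2

    def on_shadow_hit(self, level):
        if level == 1:
            # Miss in DRAM but hit in L1's history: L1 was too short.
            self.target_size = min(self.target_size + 1, self.dram_pages)
        else:
            # Hit in L2's history: L2 should grow, so the L1 target shrinks.
            self.target_size = max(self.target_size - 1, 0)

    def victim_list(self, l1_len):
        # Evict from L1 if it exceeds its target, otherwise from L2.
        return 1 if l1_len > self.target_size else 2
```

A single integer target is enough because the two list sizes sum to the DRAM capacity, so growing one target implicitly shrinks the other.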
In summary, screening the data before it enters the SSD brings the following benefits:
It effectively controls which data enters the SSD, so that the data entering the SSD is frequently accessed "hot" data, and it blocks data accessed only once from entering the SSD. This markedly improves the utilization of the SSD cache.
It reduces the number of write operations sent to the SSD. With data screening, no longer does all data read from the HDD pass through the SSD cache, so the number of writes issued to the SSD drops greatly, which prolongs the service life of the SSD.
The data stored in DRAM and in the SSD are disjoint, i.e. mutually exclusive, so the usable space of the two-level cache is the sum of the two cache spaces. This lets the whole caching system use the full capacity of both DRAM and SSD more efficiently.
Aggregated writes to the SSD and coarse-grained replacement
Tests show that coarse-grained I/O operations on an SSD perform much better than fine-grained ones. This is caused by the SSD's inability to write in place: fine-grained operations fragment the SSD internally and degrade performance. The present invention therefore adopts an aggregated-write technique. When the second-level LRU list in the DRAM cache is full, replacement is done not at page granularity but at page-cluster granularity. A page cluster (assumed to be C pages) is the SSD's erase granularity (erase block) or a multiple of it; that is, the last C pages at the LRU end are evicted from the DRAM cache together as a whole, aggregated into one large block, and then written to the SSD. The flow is shown in Figure 5, and the algorithm is as follows:
After screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end;
When the second-level LRU list is full, pages at its LRU end are evicted into a staging buffer;
Each eviction from the second-level list adds a page to the staging buffer, so after a while the buffer reaches 64 pages. It has then reached the cluster size, and the cluster is in the ready state;
When further evicted pages arrive and the staging buffer is full, the cluster is flushed to the SSD and the buffer is emptied; steps 1-4 then repeat.
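The aggregation flow above can be sketched in a few lines of Python. This is an illustrative model only (the names and the write callback are ours): evicted pages accumulate in a staging buffer, and once the buffer holds C pages the whole cluster is issued as one coarse-grained write.

```python
class ClusterWriter:
    """Aggregates pages evicted from the second-level LRU list into a
    staging buffer and flushes whole clusters (C pages) to the SSD."""

    def __init__(self, cluster_pages, ssd_write):
        # C should be the SSD erase-block page count or a multiple of it.
        self.cluster_pages = cluster_pages
        self.staging = []
        self.ssd_write = ssd_write  # callback performing the large write

    def evict(self, page):
        self.staging.append(page)
        if len(self.staging) >= self.cluster_pages:
            # One coarse-grained write instead of C small random writes.
            self.ssd_write(list(self.staging))
            self.staging.clear()
```

Issuing erase-block-aligned writes is what avoids the internal fragmentation that fine-grained random writes would cause in the SSD.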
After adding the aggregation technique, we also need to manage the information of all page clusters inside the SSD, i.e. a cluster-level LRU, so the DRAM cache additionally needs the following information:
The cluster-level LRU needs to record the weight of each page cluster.
When the SSD is full, replacement is likewise performed at page-cluster granularity. This avoids the fine-grained random writes otherwise introduced when data enters and is evicted from the SSD.
As shown in Figure 6, the present invention discloses an SSD-based cache management method, comprising:
Step 1: issue a read/write request and check whether the data hits in the DRAM cache by looking up the hash table to determine whether the data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD into the DRAM cache and then perform step 2;
Step 2: use the two-level LRU lists and the Ghost buffer to screen the data and identify its hotness;
Step 3: adaptively compute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopt page-cluster granularity: evict the last C pages at the second-level LRU end from the DRAM cache together as one aggregated block and write it to the SSD at coarse granularity, where the page-cluster size is assumed to be C pages and C is an integral multiple of the number of pages per Block in the SSD.
In the described SSD-based cache management method, step 1 comprises:
Step 21: if the data exists, i.e. it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
Step 22: if it is not in the hash table, the SSD's hash table is queried next to determine whether the data is stored in the SSD;
Step 23: if it hits in the SSD, the data is read from the SSD and the request completes.
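The lookup order of steps 21-23, together with the rule that HDD data is copied into DRAM only, can be sketched as follows. The function signature and the dict-based hash tables are our simplification, not the patent's implementation:

```python
def read(page_id, dram, ssd, hdd_read):
    """DRAM hash table first, then the SSD hash table, finally the HDD.
    A total miss loads the page into the DRAM cache only (DRAM and SSD
    contents stay disjoint)."""
    if page_id in dram:        # step 21: DRAM hit, return directly
        return dram[page_id]
    if page_id in ssd:         # steps 22-23: query SSD hash table, SSD hit
        return ssd[page_id]
    data = hdd_read(page_id)   # miss everywhere: read from the HDD
    dram[page_id] = data       # copy directly into the DRAM cache
    return data
```

Note that the SSD is never populated on this path; data reaches the SSD only later, after surviving the two-level LRU screening in DRAM.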
The described SSD-based cache management method comprises:
Data read from the HDD is copied directly into the DRAM cache; only part of the data, after screening in the DRAM cache, is evicted into the SSD cache. The contents of the DRAM cache and the SSD do not overlap, and the cache space is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management method, step 2 comprises:
Step 41: when data enters the DRAM cache for the first time, it is first placed at the MRU end of the first-level LRU list; both LRU lists reside in the DRAM cache;
Step 42: the size of the first-level LRU list is set as a proportion p1 of the whole DRAM cache, where 0 < p1 < 1;
Step 43: when the first-level list is full, replacement follows the LRU policy; the information of the evicted page is saved in the ghost buffer so that its access history is preserved, this history describing data whose access hotness is not high;
Step 44: when data in the first-level LRU list is hit a second time, it is promoted into the second-level list;
Step 45: when the second-level LRU list is full, its contents are evicted to the SSD, yielding the data with higher access hotness.
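Steps 41-45 amount to a two-level LRU with promotion on a second hit. A minimal Python sketch (illustrative names; the ghost buffer is reduced to a set of evicted page IDs for brevity):

```python
from collections import OrderedDict

class TwoLevelLRU:
    """A new page enters L1; a second hit promotes it to L2, so L2
    holds only pages accessed at least twice (the 'hot' data)."""

    def __init__(self, l1_cap, l2_cap):
        self.l1 = OrderedDict()  # pages accessed once
        self.l2 = OrderedDict()  # pages accessed twice or more
        self.l1_cap, self.l2_cap = l1_cap, l2_cap
        self.ghost = set()       # history of pages evicted from L1

    def access(self, page_id):
        if page_id in self.l2:            # already hot: refresh MRU position
            self.l2.move_to_end(page_id)
        elif page_id in self.l1:          # second hit: promote to L2
            del self.l1[page_id]
            self.l2[page_id] = True
            if len(self.l2) > self.l2_cap:
                self.l2.popitem(last=False)   # candidate for the SSD path
        else:                             # first access: insert at L1 MRU end
            self.l1[page_id] = True
            if len(self.l1) > self.l1_cap:
                victim, _ = self.l1.popitem(last=False)
                self.ghost.add(victim)    # keep only its history record
```

In the full scheme of the patent, the page popped from L2 would go to the cluster staging buffer rather than being discarded, and the L1/L2 capacities would track TargetSize rather than being fixed.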
In the described SSD-based cache management method, the adaptive-change computation in step 3 comprises:
Step 51: a corresponding Shadow buffer is added for each of the two LRU lists, each storing the access information of pages recently evicted from its list; together the two Shadow buffers store as many access records as the DRAM cache;
Step 52: the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set, which is the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
In the described SSD-based cache management method, the adjustment process comprises:
Step 61: after a data page is evicted from the first-level LRU list, its history information is kept in the first-level ShadowBuffer; likewise, data evicted from the second-level LRU list is saved in the second-level ShadowBuffer;
Step 62: when data hits in the first-level ShadowBuffer, the first-level LRU list length should grow: TargetSize++;
Step 63: when data hits in the second-level ShadowBuffer, the second-level LRU list length should grow: TargetSize--.
In the described SSD-based cache management method, step 3 further comprises:
Step 71: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end;
Step 72: when the second-level LRU list is full, pages at its LRU end are evicted into the staging buffer;
Step 73: each eviction from the second-level list adds a page to the staging buffer; after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state;
Step 74: when further evicted pages arrive and the staging buffer is full, the cluster is flushed to the SSD and the buffer is emptied; steps 71-74 then repeat.
As shown in Figure 7, the present invention also discloses an SSD-based cache management system, comprising:
a DRAM-cache checking module, configured to issue read/write requests and check whether the data hits in the DRAM cache by looking up the hash table to determine whether the data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD into the DRAM cache;
a data-screening module, configured to use the two-level LRU lists and the Ghost buffer to screen the data and identify its hotness;
an adaptive-change and aggregation module, configured to adaptively compute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopt page-cluster granularity: evict the last C pages at the second-level LRU end from the DRAM cache together as one aggregated block and write it to the SSD at coarse granularity, where the page-cluster size is assumed to be C pages and C is an integral multiple of the number of pages per Block in the SSD.
In the described SSD-based cache management system, the DRAM-cache checking module comprises:
a data-hit module, configured so that if the data exists, i.e. it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
an SSD query module, configured so that if the data is not in the hash table, the SSD's hash table is queried next to determine whether the data is stored in the SSD;
an SSD read module, configured so that if the data hits in the SSD, it is read from the SSD and the request completes.
The described SSD-based cache management system comprises:
Data read from the HDD is copied directly into the DRAM; only part of the data, after screening in the DRAM, is evicted into the SSD cache. The contents of the DRAM cache and the SSD do not overlap, and the cache space is the sum of the two spaces. If the request also misses in the SSD, it must be sent to the HDD to be read.
In the described SSD-based cache management system, the data-screening module comprises:
an MRU-placement module, configured so that when data enters the DRAM cache for the first time, it is first placed at the MRU end of the first-level LRU list, both LRU lists residing in the DRAM cache;
a ratio-setting module, configured to set the size of the first-level LRU list as a proportion p1 of the whole DRAM cache, where 0 < p1 < 1;
a replacement module, configured so that when the first-level list is full, replacement follows the LRU policy and the information of the evicted page is saved in the ghost buffer so that its access history is preserved, this history describing data whose access hotness is not high;
a second-hit module, configured to promote data in the first-level LRU list into the second-level list when it is hit a second time;
a hotness module, configured to evict the contents of the second-level LRU list to the SSD when that list is full, yielding the data with higher access hotness.
In the described SSD-based cache management system, the adaptive-change and aggregation module comprises:
an adaptive-change module, configured to add a corresponding Shadow buffer for each of the two LRU lists, each storing the access information of pages recently evicted from its list, the two Shadow buffers together storing as many access records as the DRAM cache; the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set, which is the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
In the described SSD-based cache management system, the adaptive-change module further comprises:
a history-saving module, configured so that after a data page is evicted from the first-level LRU list, its history information is kept in the first-level ShadowBuffer, and likewise data evicted from the second-level LRU list is saved in the second-level ShadowBuffer;
a first-level length-growth module, configured so that when data hits in the first-level ShadowBuffer, the first-level LRU list length grows: TargetSize++;
a second-level length-growth module, configured so that when data hits in the second-level ShadowBuffer, the second-level LRU list length grows: TargetSize--.
In the described SSD-based cache management system, the adaptive-change and aggregation module further comprises:
an aggregation module, configured so that after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end; when the second-level LRU list is full, pages at its LRU end are evicted into the staging buffer; each eviction from the second-level list adds a page to the staging buffer, so after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state; when further evicted pages arrive and the staging buffer is full, the cluster is flushed to the SSD and the buffer is emptied.
In summary, we have invented a new caching system: a hybrid cache composed of DRAM and SSD that aims to exploit the performance of the SSD to the fullest. Its main techniques are the following:
1. An SSD-based caching system with an architecture that controls which data enters the SSD. To avoid cache pollution, we observe data hotness in DRAM so that data enters the SSD selectively. Unlike traditional cache management policies, this prevents data accessed only once from entering the cache and polluting it, and therefore uses the SSD cache space more efficiently.
2. A data-screening algorithm. We use two-level LRU lists to screen the data entering the SSD: the first-level LRU list stores data accessed only once, while the second-level list stores data accessed at least twice, i.e. the comparatively "hot" data. The data stored in the SSD thus becomes more effective; at the same time the algorithm's overhead is very small, and it adapts dynamically to changes in the workload.
3. A scheme, designed for the performance characteristics of the SSD, that aggregates the data entering the SSD and replaces at coarse granularity. On top of the in-DRAM data screening described here, a staging buffer is added to aggregate the data entering the SSD. Fine-grained random writes perform very poorly on the SSD, and this scheme eliminates them as data enters the SSD, improving the SSD's usability. Meanwhile, when the cache is full we replace at coarse granularity, evicting a run of contiguous pages at once; this matches the way data is written and improves the performance of the caching system.
Those skilled in the art may make various modifications to the above without departing from the spirit and scope of the present invention as defined by the claims. The scope of the present invention is therefore not limited by the description above, but is determined by the scope of the claims.

Claims (14)

1. An SSD-based cache management method, characterized in that it comprises:
step 1: issuing a read/write request and checking whether the data hits in the DRAM cache by looking up the hash table to determine whether the data exists; if it exists, reading the data from the DRAM cache and completing the request; if it does not exist in the DRAM cache, reading the data from the HDD into the DRAM cache and then performing step 2;
step 2: using the two-level LRU lists and the Ghost buffer to screen the data and identify its hotness;
step 3: adaptively computing the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopting page-cluster granularity: evicting the last C pages at the second-level LRU end from the DRAM cache together as one aggregated block and writing it to the SSD at coarse granularity, where the page-cluster size is assumed to be C pages and C is an integral multiple of the number of pages per Block in the SSD.
2. The SSD-based cache management method of claim 1, characterized in that said step 1 comprises:
step 21: if the data exists, i.e. it hits in the DRAM cache, returning the data in the DRAM cache directly, whereupon the request completes;
step 22: if it is not in the hash table, querying the SSD's hash table next to determine whether the data is stored in the SSD;
step 23: if it hits in the SSD, reading the data from the SSD, whereupon the request completes.
3. The SSD-based cache management method of claim 2, characterized in that it comprises:
copying data read from the HDD directly into the DRAM cache, only part of the data being evicted into the SSD cache after screening in the DRAM cache, the contents of the DRAM cache and the SSD not overlapping, and the cache space being the sum of the two spaces; if the request also misses in the SSD, sending the request to the HDD to be read.
4. The SSD-based cache management method of claim 1, characterized in that said step 2 comprises:
step 41: when data enters the DRAM cache for the first time, first placing it at the MRU end of the first-level LRU list, both LRU lists residing in the DRAM cache;
step 42: setting the size of the first-level LRU list as a proportion p1 of the whole DRAM cache, where 0 < p1 < 1;
step 43: when the first-level list is full, replacing according to the LRU policy and saving the information of the evicted page in the ghost buffer so that its access history is preserved, this history describing data whose access hotness is not high;
step 44: when data in the first-level LRU list is hit a second time, promoting it into the second-level list;
step 45: when the second-level LRU list is full, evicting its contents to the SSD, yielding the data with higher access hotness.
5. The SSD-based cache management method of claim 1, characterized in that the adaptive-change computation in said step 3 comprises:
step 51: adding a corresponding Shadow buffer for each of the two LRU lists, each storing the access information of pages recently evicted from its list, the two Shadow buffers together storing as many access records as the DRAM cache;
step 52: dynamically resizing the two LRU lists with the two Shadow buffers and setting a target value TargetSize, which is the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
6. The SSD-based cache management method of claim 5, characterized in that the adjustment process comprises:
step 61: after a data page is evicted from the first-level LRU list, keeping its history information in the first-level ShadowBuffer, and likewise saving data evicted from the second-level LRU list in the second-level ShadowBuffer;
step 62: when data hits in the first-level ShadowBuffer, growing the first-level LRU list length: TargetSize++;
step 63: when data hits in the second-level ShadowBuffer, growing the second-level LRU list length: TargetSize--.
7. The SSD-based cache management method of claim 1, characterized in that said step 3 further comprises:
step 71: after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list storing the comparatively hot data, with hotness decreasing from the MRU end to the LRU end;
step 72: when the second-level LRU list is full, evicting pages at its LRU end into the staging buffer;
step 73: each eviction from the second-level list adding a page to the staging buffer, such that after a while the buffer reaches 64 pages, the cluster size, the cluster then being in the ready state;
step 74: when further evicted pages arrive and the staging buffer is full, flushing the cluster to the SSD and emptying the buffer; steps 71-74 then repeat.
8. An SSD-based cache management system, characterized in that it comprises:
a DRAM-cache checking module, configured to issue read/write requests and check whether the data hits in the DRAM cache by looking up the hash table to determine whether the data exists; if it exists, read the data from the DRAM cache and complete the request; if it does not exist in the DRAM cache, read the data from the HDD into the DRAM cache;
a data-screening module, configured to use the two-level LRU lists and the Ghost buffer to screen the data and identify its hotness;
an adaptive-change and aggregation module, configured to adaptively compute the lengths of the two-level LRU lists; when the second-level LRU list in the DRAM cache is full, adopt page-cluster granularity: evict the last C pages at the second-level LRU end from the DRAM cache together as one aggregated block and write it to the SSD at coarse granularity, where the page-cluster size is assumed to be C pages and C is an integral multiple of the number of pages per Block in the SSD.
9. The SSD-based cache management system of claim 8, characterized in that said DRAM-cache checking module comprises:
a data-hit module, configured so that if the data exists, i.e. it hits in the DRAM cache, the data in the DRAM cache is returned directly and the request completes;
an SSD query module, configured so that if the data is not in the hash table, the SSD's hash table is queried next to determine whether the data is stored in the SSD;
an SSD read module, configured so that if the data hits in the SSD, it is read from the SSD and the request completes.
10. The SSD-based cache management system of claim 9, characterized in that it comprises:
copying data read from the HDD directly into the DRAM, only part of the data being evicted into the SSD cache after screening in the DRAM, the contents of the DRAM cache and the SSD not overlapping, and the cache space being the sum of the two spaces; if the request also misses in the SSD, sending the request to the HDD to be read.
11. The SSD-based cache management system of claim 8, characterized in that said data-screening module comprises:
an MRU-placement module, configured so that when data enters the DRAM cache for the first time, it is first placed at the MRU end of the first-level LRU list, both LRU lists residing in the DRAM cache;
a ratio-setting module, configured to set the size of the first-level LRU list as a proportion p1 of the whole DRAM cache, where 0 < p1 < 1;
a replacement module, configured so that when the first-level list is full, replacement follows the LRU policy and the information of the evicted page is saved in the ghost buffer so that its access history is preserved, this history describing data whose access hotness is not high;
a second-hit module, configured to promote data in the first-level LRU list into the second-level list when it is hit a second time;
a hotness module, configured to evict the contents of the second-level LRU list to the SSD when that list is full, yielding the data with higher access hotness.
12. The SSD-based cache management system of claim 8, characterized in that said adaptive-change and aggregation module comprises:
an adaptive-change module, configured to add a corresponding Shadow buffer for each of the two LRU lists, each storing the access information of pages recently evicted from its list, the two Shadow buffers together storing as many access records as the DRAM cache; the two Shadow buffers dynamically resize the two LRU lists; a target value TargetSize is set, which is the target size of the first-level LRU list; its initial value is half the DRAM cache size, and it then changes with the workload.
13. The SSD-based cache management system of claim 12, characterized in that the adaptive-change module further comprises:
a history-saving module, configured so that after a data page is evicted from the first-level LRU list, its history information is kept in the first-level ShadowBuffer, and likewise data evicted from the second-level LRU list is saved in the second-level ShadowBuffer;
a first-level length-growth module, configured so that when data hits in the first-level ShadowBuffer, the first-level LRU list length grows: TargetSize++;
a second-level length-growth module, configured so that when data hits in the second-level ShadowBuffer, the second-level LRU list length grows: TargetSize--.
14. The SSD-based cache management system of claim 8, characterized in that said adaptive-change and aggregation module further comprises:
an aggregation module, configured so that after screening by the two-level LRU lists in the DRAM cache, the second-level LRU list stores the comparatively hot data, with hotness decreasing from the MRU end to the LRU end; when the second-level LRU list is full, pages at its LRU end are evicted into the staging buffer; each eviction from the second-level list adds a page to the staging buffer, so after a while the buffer reaches 64 pages, the cluster size, and the cluster is then in the ready state; when further evicted pages arrive and the staging buffer is full, the cluster is flushed to the SSD and the buffer is emptied.
CN201210160350.5A 2012-05-22 2012-05-22 SSD-based (Solid State Disk) cache management method and system Active CN102760101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210160350.5A CN102760101B (en) 2012-05-22 2012-05-22 SSD-based (Solid State Disk) cache management method and system

Publications (2)

Publication Number Publication Date
CN102760101A true CN102760101A (en) 2012-10-31
CN102760101B CN102760101B (en) 2015-03-18

CN109324759A (en) * 2018-09-17 2019-02-12 山东浪潮云投信息科技有限公司 The processing terminal of big data platform, the method read data and write data
CN110309015A (en) * 2019-03-25 2019-10-08 深圳市德名利电子有限公司 A kind of method for writing data and device and equipment based on Ssd apparatus
CN111796757A (en) * 2019-04-08 2020-10-20 中移(苏州)软件技术有限公司 Solid state disk cache region management method and device
CN111880739A (en) * 2020-07-29 2020-11-03 北京计算机技术及应用研究所 Near data processing system for super fusion equipment
CN111880900A (en) * 2020-07-29 2020-11-03 北京计算机技术及应用研究所 Design method of near data processing system for super fusion equipment
CN112015678A (en) * 2019-05-30 2020-12-01 北京京东尚科信息技术有限公司 Log caching method and device
CN112559452A (en) * 2020-12-11 2021-03-26 北京云宽志业网络技术有限公司 Data deduplication processing method, device, equipment and storage medium
CN113050894A (en) * 2021-04-20 2021-06-29 南京理工大学 Agricultural spectrum hybrid storage system cache replacement algorithm based on cuckoo algorithm
US11074189B2 (en) 2019-06-20 2021-07-27 International Business Machines Corporation FlatFlash system for byte granularity accessibility of memory in a unified memory-storage hierarchy
US11586629B2 (en) 2017-08-17 2023-02-21 Samsung Electronics Co., Ltd. Method and device of storing data object
CN116561020A (en) * 2023-05-15 2023-08-08 合芯科技(苏州)有限公司 Request processing method, device and storage medium under mixed cache granularity

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI553478B (en) * 2015-09-23 2016-10-11 瑞昱半導體股份有限公司 Device capable of using external volatile memory and device capable of releasing internal volatile memory

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0114944B1 (en) * 1982-12-28 1989-09-27 International Business Machines Corporation Method and apparatus for controlling a single physical cache memory to provide multiple virtual caches
US5608890A (en) * 1992-07-02 1997-03-04 International Business Machines Corporation Data set level cache optimization
US20060265568A1 (en) * 2003-05-16 2006-11-23 Burton David A Methods and systems of cache memory management and snapshot operations
CN102118309A (en) * 2010-12-31 2011-07-06 中国科学院计算技术研究所 Method and system for double-machine hot backup
CN102156753A (en) * 2011-04-29 2011-08-17 中国人民解放军国防科学技术大学 Data page caching method for file system of solid-state hard disc
CN102362464A (en) * 2011-04-19 2012-02-22 华为技术有限公司 Memory access monitoring method and device
CN102364474A (en) * 2011-11-17 2012-02-29 中国科学院计算技术研究所 Metadata storage system for cluster file system and metadata management method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TANG XIAN et al.: "FClock: An Adaptive Buffer Management Algorithm for SSD", Chinese Journal of Computers (计算机学报), vol. 33, no. 8, 31 August 2010 (2010-08-31) *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198027A (en) * 2013-02-27 2013-07-10 天脉聚源(北京)传媒科技有限公司 Method and device for storing and providing files
CN103150136A (en) * 2013-03-25 2013-06-12 中国人民解放军国防科学技术大学 Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache
CN103150136B (en) * 2013-03-25 2014-07-23 中国人民解放军国防科学技术大学 Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache
CN104077242A (en) * 2013-03-25 2014-10-01 华为技术有限公司 Cache management method and device
CN104077242B (en) * 2013-03-25 2017-03-29 华为技术有限公司 A kind of buffer memory management method and device
CN103744624A (en) * 2014-01-10 2014-04-23 浪潮电子信息产业股份有限公司 System architecture for realizing selective upgrade of data cached in SSD (Solid State Disk) of storage system
CN103744624B (en) * 2014-01-10 2017-09-22 浪潮电子信息产业股份有限公司 A kind of system architecture for realizing the data cached selectivity upgradings of storage system SSD
US10241919B2 (en) 2014-05-09 2019-03-26 Huawei Technologies Co., Ltd. Data caching method and computer system
CN105094686A (en) * 2014-05-09 2015-11-25 华为技术有限公司 Data caching method, cache and computer system
CN105094686B (en) * 2014-05-09 2018-04-10 华为技术有限公司 Data cache method, caching and computer system
CN103984736A (en) * 2014-05-21 2014-08-13 西安交通大学 Efficient buffer management method for NAND flash memory database system
CN103984736B (en) * 2014-05-21 2017-04-12 西安交通大学 Efficient buffer management method for NAND flash memory database system
CN104317958A (en) * 2014-11-12 2015-01-28 北京国双科技有限公司 Method and system for processing data in real time
CN104317958B (en) * 2014-11-12 2018-01-16 北京国双科技有限公司 A kind of real-time data processing method and system
CN104462388A (en) * 2014-12-10 2015-03-25 上海爱数软件有限公司 Redundant data cleaning method based on cascade storage media
CN104462388B (en) * 2014-12-10 2017-12-29 上海爱数信息技术股份有限公司 A kind of redundant data method for cleaning based on tandem type storage medium
CN106020714B (en) * 2015-03-27 2019-06-04 英特尔公司 Cache operations and hierarchical operations for cloud storage
CN106020714A (en) * 2015-03-27 2016-10-12 英特尔公司 Caching and tiering for cloud storage
US10423535B2 (en) 2015-03-27 2019-09-24 Intel Corporation Caching and tiering for cloud storage
CN104991743B (en) * 2015-07-02 2018-01-19 西安交通大学 Loss equalizing method applied to solid state hard disc resistance-variable storing device caching
CN104991743A (en) * 2015-07-02 2015-10-21 西安交通大学 Wear-leveling method applied to cache of resistive random access memory of solid-state hard disk
CN105117174A (en) * 2015-08-31 2015-12-02 北京神州云科数据技术有限公司 Data hotness and data density based cache back-writing method and system
CN105488157A (en) * 2015-11-27 2016-04-13 浪潮软件股份有限公司 Data transmission method and device
CN105573669A (en) * 2015-12-11 2016-05-11 上海爱数信息技术股份有限公司 IO read speeding cache method and system of storage system
WO2017211247A1 (en) * 2016-06-05 2017-12-14 华为技术有限公司 Cache management method, cache controller, and computer system
CN107463509A (en) * 2016-06-05 2017-12-12 华为技术有限公司 Buffer memory management method, cache controller and computer system
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN106294197B (en) * 2016-08-05 2019-12-13 华中科技大学 Page replacement method for NAND flash memory
CN106294197A (en) * 2016-08-05 2017-01-04 华中科技大学 A kind of page frame replacement method towards nand flash memory
CN106527988A (en) * 2016-11-04 2017-03-22 郑州云海信息技术有限公司 SSD (Solid State Drive) data migration method and device
CN106527988B (en) * 2016-11-04 2019-07-26 郑州云海信息技术有限公司 A kind of method and device of solid state hard disk Data Migration
CN107015865A (en) * 2017-03-17 2017-08-04 华中科技大学 A kind of DRAM cache management method and system based on temporal locality
CN107015865B (en) * 2017-03-17 2019-12-17 华中科技大学 DRAM cache management method and system based on time locality
CN107133183A (en) * 2017-04-11 2017-09-05 深圳市云舒网络技术有限公司 A kind of cache data access method and system based on TCMU Virtual Block Devices
CN107133183B (en) * 2017-04-11 2020-06-30 深圳市联云港科技有限公司 Cache data access method and system based on TCMU virtual block device
US11586629B2 (en) 2017-08-17 2023-02-21 Samsung Electronics Co., Ltd. Method and device of storing data object
CN109032969A (en) * 2018-06-16 2018-12-18 温州职业技术学院 A kind of caching method of the LRU-K algorithm based on K value dynamic monitoring
CN109324759A (en) * 2018-09-17 2019-02-12 山东浪潮云投信息科技有限公司 The processing terminal of big data platform, the method read data and write data
CN110309015A (en) * 2019-03-25 2019-10-08 深圳市德名利电子有限公司 A kind of method for writing data and device and equipment based on Ssd apparatus
CN111796757A (en) * 2019-04-08 2020-10-20 中移(苏州)软件技术有限公司 Solid state disk cache region management method and device
CN111796757B (en) * 2019-04-08 2022-12-13 中移(苏州)软件技术有限公司 Solid state disk cache region management method and device
CN112015678A (en) * 2019-05-30 2020-12-01 北京京东尚科信息技术有限公司 Log caching method and device
US11074189B2 (en) 2019-06-20 2021-07-27 International Business Machines Corporation FlatFlash system for byte granularity accessibility of memory in a unified memory-storage hierarchy
CN111880900A (en) * 2020-07-29 2020-11-03 北京计算机技术及应用研究所 Design method of near data processing system for super fusion equipment
CN111880739A (en) * 2020-07-29 2020-11-03 北京计算机技术及应用研究所 Near data processing system for super fusion equipment
CN112559452A (en) * 2020-12-11 2021-03-26 北京云宽志业网络技术有限公司 Data deduplication processing method, device, equipment and storage medium
CN112559452B (en) * 2020-12-11 2021-12-17 北京云宽志业网络技术有限公司 Data deduplication processing method, device, equipment and storage medium
CN113050894A (en) * 2021-04-20 2021-06-29 南京理工大学 Agricultural spectrum hybrid storage system cache replacement algorithm based on cuckoo algorithm
CN116561020A (en) * 2023-05-15 2023-08-08 合芯科技(苏州)有限公司 Request processing method, device and storage medium under mixed cache granularity
CN116561020B (en) * 2023-05-15 2024-04-09 合芯科技(苏州)有限公司 Request processing method, device and storage medium under mixed cache granularity

Also Published As

Publication number Publication date
CN102760101B (en) 2015-03-18

Similar Documents

Publication Publication Date Title
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
EP3210121B1 (en) Cache optimization technique for large working data sets
US10241919B2 (en) Data caching method and computer system
US8650362B2 (en) System for increasing utilization of storage media
KR101726824B1 (en) Efficient Use of Hybrid Media in Cache Architectures
Eisenman et al. Flashield: a hybrid key-value cache that controls flash write amplification
CN106547476B (en) Method and apparatus for data storage system
Jiang et al. S-FTL: An efficient address translation for flash memory by exploiting spatial locality
JP6613375B2 (en) Profiling cache replacement
Zhou et al. An efficient page-level FTL to optimize address translation in flash memory
CN107391398B (en) Management method and system for flash memory cache region
CN108762671A (en) Mixing memory system and its management method based on PCM and DRAM
Liu et al. PCM-based durable write cache for fast disk I/O
KR101297442B1 (en) Nand flash memory including demand-based flash translation layer considering spatial locality
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
US20090094391A1 (en) Storage device including write buffer and method for controlling the same
Wu et al. APP-LRU: A new page replacement method for PCM/DRAM-based hybrid memory systems
CN113254358A (en) Method and system for address table cache management
CN109002400B (en) Content-aware computer cache management system and method
Cheng et al. AMC: an adaptive multi‐level cache algorithm in hybrid storage systems
Hu et al. GC-ARM: Garbage collection-aware RAM management for flash based solid state drives
Fan et al. Extending SSD lifespan with comprehensive non-volatile memory-based write buffers
US20140359228A1 (en) Cache allocation in a computerized system
CN115344201A (en) Data storage method, data query method and device
KR101020781B1 (en) A method for log management in flash memory-based database systems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: HUAWEI TECHNOLOGY CO., LTD.

Effective date: 20130116

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 100190 HAIDIAN, BEIJING

TA01 Transfer of patent application right

Effective date of registration: 20130116

Address after: No. 6 South Road, Zhongguancun Academy of Sciences, Haidian District, Beijing 100190

Applicant after: Institute of Computing Technology, Chinese Academy of Sciences

Applicant after: Huawei Technologies Co., Ltd.

Address before: No. 6 South Road, Zhongguancun Academy of Sciences, Haidian District, Beijing 100080

Applicant before: Institute of Computing Technology, Chinese Academy of Sciences

C14 Grant of patent or utility model
GR01 Patent grant