US20170308461A1 - Method and apparatus for memory management

Method and apparatus for memory management

Info

Publication number
US20170308461A1
Authority
US
United States
Prior art keywords
memory
memory space
cpu
space
pinned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/627,001
Inventor
Gongbiao NIU
Zhen Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to US15/627,001
Publication of US20170308461A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/128 Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1041 Resource optimization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1041 Resource optimization
    • G06F 2212/1044 Space efficiency improvement

Definitions

  • the disclosure relates generally to the field of memory management and, more particularly, to a method and apparatus for managing computer memory.
  • a memory's size cannot be dynamically expanded when memory usage needs to scale.
  • a CPU in a server system tends to utilize a relatively large amount of memory to process these protocols, causing delay and decreasing the processing efficiency of the server system.
  • Embodiments of the disclosure provide methods and apparatus of memory management. Also, embodiments of the disclosure provide techniques to address the shortcomings of conventional solutions by providing better memory management and CPU processing allocation.
  • the server system includes a plurality of memory chips that define a memory space that includes a plurality of pinned memory spaces and an unallocated memory space.
  • the server system also includes a plurality of memory controllers. Each memory controller has a plurality of channels, and each channel is coupled to a number of memory chips of the plurality of memory chips.
  • the server system includes a plurality of CPUs coupled to the plurality of memory controllers in a one-to-one correspondence.
  • the plurality of CPUs is pinned to the plurality of pinned memory spaces in a one-to-one correspondence.
  • a CPU of the plurality of CPUs has a corresponding memory controller and a corresponding pinned memory space. The CPU is configured to determine a utilization value of the corresponding pinned memory space and, when the utilization value exceeds an upper threshold, determine if a channel of the corresponding memory controller is coupled to a number of memory chips that include both the corresponding pinned memory space and a portion of the unallocated memory space.
  • a method of managing a memory includes pooling a plurality of memory chips to form a memory pool.
  • the plurality of memory chips is coupled to a plurality of memory controllers.
  • the plurality of memory controllers are coupled to a plurality of CPUs in a one-to-one correspondence.
  • the method also includes dividing the memory pool to form a plurality of pinned memory spaces and an unallocated memory space.
  • the method includes pinning each pinned memory space to a corresponding CPU of the plurality of CPUs.
  • the method further includes adding memory space from the unallocated memory space to a pinned memory space to form an increased memory space when a utilization of the pinned memory space exceeds an upper threshold.
  • another embodiment of the disclosure provides a method of operating a memory space that has a plurality of pinned memory spaces and an unallocated memory space.
  • the method includes obtaining a utilization value that represents usage of a pinned memory space of the plurality of pinned memory spaces by a CPU during operation.
  • the CPU is coupled to a memory controller.
  • the memory controller has a plurality of channels, and one or more channels are coupled to one or more memory chips that include the pinned memory space.
  • the method includes determining if the utilization value exceeds an upper threshold and, when the utilization value exceeds the upper threshold, determining if a channel of the memory controller is coupled to a number of memory chips that include both the pinned memory space and a portion of the unallocated memory space.
  • FIG. 1A is a flow diagram of a method of memory management according to an embodiment.
  • FIG. 1B is a flow diagram of a method of memory management according to an embodiment.
  • FIG. 2 is a block diagram of an apparatus of memory management according to an embodiment.
  • FIG. 1A shows a method of memory management.
  • step S 101 at least one memory is pooled to generate a memory pool.
  • the memory mentioned in step S 101 may be a memory managed by a memory controller configured by a server in a server system.
  • Pooling includes, but is not limited to, setting each memory managed by a memory controller configured by the server in the server system as a node, and connecting the nodes in the server system through a logic chip or a logic device to create a memory pool in units of the nodes.
  • At least one memory is pooled. There are additional steps after the pooling operation to create the memory pool.
  • Each CPU configured for each server in the server system is set as a CPU node.
  • the CPU nodes are interconnected through a logic chip or a logic device to create a CPU pool that includes the CPU nodes.
  • the pooling of the memories in the server system and the pooling of the CPUs realize a physical decoupling of the memory and the CPU in the server system. As a result of this physical decoupling, unconstrained configurations between the memories and CPUs in the server system can be accomplished. That is, the number of the memories in the server system can be increased or decreased individually, and the number of the CPUs may likewise be increased or decreased individually, avoiding needless underutilization of memory and CPU resources.
  • step S 102 the memory pool is divided to generate at least one memory space, each of the memory spaces being allocated respectively to a plurality of CPUs in a one-to-one correspondence manner.
  • the memory space allocated to a CPU is set as a pinned memory of the CPU, and the unallocated memory space is set as a shared memory pool of the memory pool. That is, the unallocated memory space includes memory space that remains unallocated in the memory pool.
  • “n” memory controllers are randomly selected as the “n” memory controllers corresponding to the “n” CPUs to establish a one-to-one mapping relationship.
  • the relationships between the memory controllers and the CPUs are one-to-one correspondence relationships.
  • other correspondence relationships between the memory controllers and the CPUs, such as one-to-many or many-to-one, can be easily configured but will not be addressed in detail herein.
  • the Quick Path Interconnect “QPI” is a packet-transmission-based, high-speed, serial point-to-point connection protocol that is used by a CPU to access data.
  • the DIMM or dual in-line memory module is a module that includes one or more random-access memory chips on a small circuit board with pins that connect to the computer motherboard.
  • a memory controller manages the exchange of data between a selected memory and the CPU.
  • each memory controller can support two to four DDR (double data rate synchronous dynamic random-access memory) channels, and each channel can support one to three DIMM slots, i.e., one to three memory chips. The memory space of each memory chip corresponds to the memory space of its DIMM slot.
  • the memory space which corresponds to at least one DIMM slot in at least one channel that is controlled by each memory controller is allocated to the CPU.
  • the allocation of the memory space is performed in a one-to-one corresponding relationship as the pinned memory of the CPU.
  • the corresponding memory spaces of 4 DIMM slots in 2 channels controlled by each memory controller are allocated to the CPU that is in a one-to-one corresponding relationship with the memory controller, and the corresponding memory spaces being set as the pinned memory of the CPU, which is shown in Table 1 including:
  • the corresponding memory space of 3 DIMM slots (DIMM 1, DIMM 2 and DIMM 3) in the first channel (channel 01) under each memory controller and the corresponding memory space of 1 DIMM slot (DIMM 1) in the second channel (channel 02) are allocated respectively to each CPU of “n” CPUs as the pinned memory of each CPU in “n” CPUs.
  • the unallocated memory space which remains in the memory pool, is set as the shared memory pool.
  • the parts in the Table 1 including the corresponding memory spaces of DIMM 2 and DIMM 3 in channel 02, and the corresponding memory spaces of DIMM 1, DIMM 2, and DIMM 3 in channel 03 and channel 04 controlled by each memory controller are set as the shared memory pool.
  • the address of the memory space of the memory chip is a fixed address. That is to say, the address of the corresponding memory space of the DIMM slot is a fixed address.
  • step S 103 a memory value that represents usage of the respective memory space by the respective CPU during operation is obtained.
  • the memory values that represent usage of the respective memory space by each CPU during operation are obtained for each of the above-described “n” CPUs.
  • the memory value used by each CPU of the “n” CPUs during operation, as obtained by the operating system in the server system, includes the memory value used by the operating system and the memory value used by application programs.
  • the memory values that represent usage of the respective memory space by the respective CPU during operation in “n” CPUs are obtained. These memory values are obtained by setting each CPU of “n” CPUs as a determining unit to determine if the memory value used by the CPU exceeds a preset threshold range. According to the determination, further operation is taken such as whether to allocate additional memory space from the shared memory pool to the CPU, or to free part of original memory space that has been allocated to the CPU.
  • all the memory values used by at least one CPU during operation of “n” CPUs can be summed up to obtain an overall memory value used by at least one CPU during operation. Then, it is determined if the overall memory value used by at least one CPU during operation exceeds a preset threshold range. According to the determination, either additional memory space is selected from the shared memory pool and allocated to each of the CPUs, or the memory space is freed.
  • step S 104 a determination is made as to whether the memory value exceeds a preset threshold range.
  • the memory values of each CPU during operation of “n” CPUs are used to determine if each memory value exceeds a preset threshold range of each CPU.
  • one CPU of “n” CPUs is used in the process to determine if the memory value exceeds a preset threshold range of the CPU, including:
  • step S 105 is initiated to apply for additional memory space from the shared memory pool according to a certain proportion of the memory value, and to allocate the additional memory space to the corresponding CPU.
  • a next step is initiated to determine if the memory value used by the CPU during processing is less than a preset second threshold.
  • step S 105 the memory space allocated to the CPU is partially released according to a certain proportion of the memory value. The process of releasing memory continues until the memory space allocated to the CPU reaches the size of the pinned memory.
  • if the memory value used by the CPU during operation is not less than the preset second threshold, the process returns to step S 103 to obtain the memory values used by each of the CPUs of “n” CPUs.
  • step S 103 to obtain the memory values used by each CPU during operation of “n” CPUs, all the memory values used by at least one CPU during operation of “n” CPUs are summed up to obtain an overall memory value used by at least one CPU during processing. Next, it is determined if the overall memory value used by at least one CPU during operation exceeds a preset threshold range. According to the determination, either additional memory space is selected from the shared memory pool, which is allocated to each of the CPUs, or the memory space is freed or reallocated.
  • step S 104 the determining of whether the overall memory value used by at least one CPU during operation exceeds a preset threshold range can be implemented as described next.
  • At least one additional memory space is applied from the memory pool according to a certain proportion of the memory value, to be allocated to each CPU in the at least one CPU in a one-to-one corresponding manner.
  • a next step is initiated to determine if the overall memory value used by at least one CPU during operation is less than a preset second threshold.
  • the memory space allocated to each CPU in the at least one CPU is partially released according to a certain proportion of the memory value. The process of releasing memory continues until the memory space allocated to each CPU in the at least one CPU reaches the size of the pinned memory.
  • if the overall memory value used by at least one CPU during operation is not less than the preset second threshold, the process returns to step S 103 to obtain the memory values used by each CPU of “n” CPUs.
  • step S 105 additional memory space is applied from the memory pool to allocate to the CPU, or the memory space allocated to the CPU is reallocated or freed.
  • a precondition for implementing step S 105 is that, in step S 104 , it is determined that the memory value used by a CPU during processing exceeds a preset first threshold, or that the memory value used by a CPU during processing is less than a preset second threshold.
  • one additional memory space is applied from the memory pool according to a certain proportion of the memory value to be allocated to the CPU.
  • the memory space allocated to the CPU is partially released until the memory space allocated to the CPU reaches the size of the pinned memory.
  • the corresponding memory space of the DIMM slot is allocated to the CPU according to a certain proportion of the memory value.
  • the size of the memory space applied from the memory space of the corresponding DIMM slot will be equivalent to 20% of the memory value used by the CPU during operation. This is the amount to be allocated to the CPU.
  • a next step is initiated to determine if there exists a DIMM slot in the memory pool located in the same channel under the same memory controller as the DIMM slot that corresponds to the pinned memory, which is described in the following.
  • it is determined if there is a DIMM slot in the memory pool located under the same memory controller as the DIMM slot that corresponds to the pinned memory. If there is such a DIMM slot, the corresponding memory space of the DIMM slot is allocated to the CPU according to a certain proportion of the memory value.
  • the size of the memory space applied from the memory space of the corresponding DIMM slot will be equivalent to 20% of the memory value used by the CPU during operation, which is allocated to the CPU.
  • the memory space in the shared memory pool is allocated to the CPU according to a certain proportion of the memory value.
  • the memory space applied from the shared memory pool will be equivalent in size to 20% of the memory value used by the CPU during operation, which is allocated to the CPU.
  • the step of releasing or reallocating memory spaces from the memory pool according to a certain proportion of the memory value may be implemented in other ways corresponding to the above-mentioned method.
  • the size of the memory space released will be equivalent to 20% of the memory value used by the CPU during operation.
  • steps S 103 , S 104 , and S 105 are in a cyclic process of determination.
  • the memory values used by each CPU during operation of “n” CPUs are obtained in step S 103 .
  • Whether the memory value used by each CPU during operation of “n” CPUs exceeds a preset threshold range is determined in step S 104 .
  • step S 105 additional memory space is selected from the shared memory pool to be allocated to the CPU or the memory space allocated to the CPU is reallocated or freed.
  • the cyclic determination process of steps S 103 , S 104 , and S 105 implements real-time monitoring of the memory values used by the CPU during processing. According to the result of the monitoring, certain memory space is allocated to the CPU or certain memory space of the CPU is released, such that the memory is utilized properly and the functionality of the server system is enhanced.
  • the corresponding memory space of the DIMM slot is then allocated to the CPU according to a certain proportion of the memory value.
  • a next step is initiated to determine if there exists a DIMM slot in the shared memory pool located in the same memory controller as the DIMM slot that corresponds to a pinned memory of each CPU in the at least one CPU.
  • it is determined if there is a DIMM slot in the shared memory pool located under the same memory controller as the DIMM slot that corresponds to a pinned memory of each CPU of the set of CPUs.
  • the corresponding memory space of the DIMM slot is allocated to the CPU according to a certain proportion of the memory value.
  • the memory space in the shared memory pool is allocated to the CPU according to a certain proportion of the memory value.
  • FIG. 1B shows a flow diagram of a method of memory management according to another embodiment of the disclosure.
  • step S 110 at least one memory is pooled to generate a memory pool.
  • step S 130 the memory pool is divided to generate at least one memory space.
  • step S 140 each of the memory spaces is allocated respectively to a plurality of CPUs in a one-to-one correspondence manner.
  • step S 141 the respective memory space allocated one-to-one is set as a pinned memory of the respective CPU.
  • unallocated memory space is set as a shared memory pool of the memory pool. That is, the unallocated memory space includes memory space that remains unallocated in the memory pool.
  • step S 150 a memory value that represents usage of the respective memory space by the respective CPU during operation is obtained.
  • step S 160 it is determined whether the memory value obtained in step S 150 exceeds a preset threshold range. If the memory value used by the CPU during operation exceeds a preset first threshold, then step S 171 is initiated to apply for additional memory space from the shared memory pool according to a certain proportion of the memory value and to allocate the additional memory space to the corresponding CPU. If the memory value used by the CPU during operation is less than the preset second threshold, in step S 172 , the memory space allocated to the CPU is partially released according to a certain proportion of the memory value.
  • FIG. 2 there is shown another embodiment of the disclosure of a device 200 of memory management that includes the following units.
  • a memory pooling unit 210 configured to pool at least one memory to generate a memory pool.
  • a memory divider 220 divides the memory pool to generate at least one memory space and allocates a respective memory space to a respective CPU in a one-to-one correspondence manner.
  • the respective memory space allocated to a respective CPU is set as a pinned memory of the respective CPU.
  • An unallocated memory space is set as a shared memory pool of the memory pool. That is, the unallocated memory space includes memory space that remains unallocated or is not pinned memory in the memory pool.
  • a memory value obtaining unit 230 obtains a memory value that represents usage of the respective memory space by the respective CPU during operation.
  • a memory value determination unit 240 determines if the memory value exceeds a preset threshold range and whether to invoke a memory manager 250 if the memory value exceeds the preset threshold range.
  • the memory manager 250 is configured for selecting additional memory space from the shared memory pool to allocate to the CPU and/or for releasing or reallocating the memory space allocated to the CPU if the threshold is exceeded.
  • the memory divider 220 includes a memory controller selecting sub-module configured for selecting a set of memory controllers, where the number of memory controllers equals the number of CPUs in the set of CPUs. It also includes an allocator sub-module configured for matching a QPI port address of the CPU with a port address of the memory controller in a one-to-one correspondence manner and for allocating the corresponding memory space of at least one DIMM slot in at least one channel under the memory controller to the CPU.
  • the corresponding memory is set as a pinned memory of the CPU and the corresponding memory address of the DIMM slot is a fixed address.
  • the device 200 also includes a CPU pooling unit configured for setting each CPU in a set of CPUs as a node of the set of CPUs and for connecting all the nodes of the set of CPUs to generate a CPU pool.
  • a computing device includes one or more CPUs, an input/output port, an Internet port, and a memory.
  • the computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • the computer-readable media disclosed include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash RAM and other memory technologies, compact disc read-only memory (CD-ROM), digital video disc (DVD) and other optical storage, magnetic tape, magnetic disc, other magnetic storage, and any other non-transitory media.
  • the disclosure may be implemented as methods, systems, and/or instructions for a computer. It is intended that the disclosure may be implemented as hardware, software, or a combination of hardware and software. The disclosure may be implemented as a computer program product utilizing one or more storage media including computer program instructions.
  • the storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash RAM and other memory technologies, compact disc read-only memory (CD-ROM), digital video disc (DVD) and other optical storage, magnetic tape, magnetic disc, other magnetic storage, and any other storage type.

Abstract

A method and apparatus of memory management are disclosed. Pooling of at least one memory to generate a memory pool, dividing the memory pool to generate at least one memory space, and allocating a respective memory space to a respective CPU in a one-to-one correspondence manner are performed. Further, the respective memory space allocated to the respective CPU is set as a pinned memory of the respective CPU. Additionally, setting unallocated memory space as a shared memory pool, obtaining a memory value that represents usage of the respective memory space by the respective CPU during operation, and determining if the memory value exceeds a preset threshold range are performed. Selecting, if the memory value exceeds the preset threshold range, additional memory space from the memory pool to allocate to the respective CPU or reallocating at least a portion of the respective memory space allocated to the CPU are performed.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of prior application Ser. No. 14/952,847, filed on Nov. 25, 2015, which claims priority to Chinese Patent Application No. 201410686872.8, filed on Nov. 25, 2014, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • The disclosure relates generally to the field of memory management and, more particularly, to a method and apparatus for managing computer memory.
  • BACKGROUND
  • Generally, there is a fixed ratio between memory size and CPU processing capability in a single server. When there is a need to increase the power of a CPU or the memory size, the common approach is to add one or more servers, thereby increasing the overall capability of the CPUs or the size of the memory. However, simply adding servers in order to increase memory space or CPU processing capability can produce over-capacity, decreasing the utilization of the overall available CPU processing capability or memory and wasting resources.
  • Currently, software-based solutions are available to address the above-described problems of the decreased usage rate of CPU or memory as well as the associated waste of resources. Such software-based solutions manage and dispatch memory in a server system through encapsulation protocols.
  • However, the current solutions have drawbacks. For example, a memory's size cannot be dynamically expanded when memory usage needs to scale. Moreover, due to the use of encapsulation protocols at the software level, a CPU in a server system tends to utilize a relatively large amount of memory to process these protocols, causing delay and decreasing the processing efficiency of the server system.
  • SUMMARY OF THE DISCLOSURE
  • Embodiments of the disclosure provide methods and apparatus of memory management. Also, embodiments of the disclosure provide techniques to address the shortcomings of conventional solutions by providing better memory management and CPU processing allocation.
  • Accordingly, one embodiment of the disclosure provides a server system. The server system includes a plurality of memory chips that define a memory space that includes a plurality of pinned memory spaces and an unallocated memory space. The server system also includes a plurality of memory controllers. Each memory controller has a plurality of channels, and each channel is coupled to a number of memory chips of the plurality of memory chips.
  • In addition, the server system includes a plurality of CPUs coupled to the plurality of memory controllers in a one-to-one correspondence. The plurality of CPUs is pinned to the plurality of pinned memory spaces in a one-to-one correspondence. A CPU of the plurality of CPUs has a corresponding memory controller and a corresponding pinned memory space. The CPU is configured to determine a utilization value of the corresponding pinned memory space and, when the utilization value exceeds an upper threshold, determine if a channel of the corresponding memory controller is coupled to a number of memory chips that include both the corresponding pinned memory space and a portion of the unallocated memory space.
  • According to another embodiment of the disclosure, a method of managing a memory is provided. The method includes pooling a plurality of memory chips to form a memory pool. The plurality of memory chips is coupled to a plurality of memory controllers. The plurality of memory controllers are coupled to a plurality of CPUs in a one-to-one correspondence.
  • The method also includes dividing the memory pool to form a plurality of pinned memory spaces and an unallocated memory space. In addition, the method includes pinning each pinned memory space to a corresponding CPU of the plurality of CPUs. The method further includes adding memory space from the unallocated memory space to a pinned memory space to form an increased memory space when a utilization of the pinned memory space exceeds an upper threshold.
  • Further, another embodiment of the disclosure provides a method of operating a memory space that has a plurality of pinned memory spaces and an unallocated memory space. The method includes obtaining a utilization value that represents usage of a pinned memory space of the plurality of pinned memory spaces by a CPU during operation. The CPU is coupled to a memory controller. The memory controller has a plurality of channels, and one or more channels are coupled to one or more memory chips that include the pinned memory space.
  • In addition, the method includes determining if the utilization value exceeds an upper threshold and, when the utilization value exceeds the upper threshold, determining if a channel of the memory controller is coupled to a number of memory chips that include both the pinned memory space and a portion of the unallocated memory space.
  • This summary includes, by necessity, simplifications, generalizations, and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the disclosure, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the disclosure will be better understood from a reading of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters designate like elements.
  • FIG. 1A is a flow diagram of a method of memory management according to an embodiment.
  • FIG. 1B is a flow diagram of a method of memory management according to an embodiment.
  • FIG. 2 is a block diagram of an apparatus of memory management according to an embodiment.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. While the disclosure will be described in conjunction with the embodiments, it will be understood that they are not intended to limit the disclosure to these embodiments. On the contrary, the disclosure is intended to cover alternatives, modifications, and equivalents which may be included within the spirit and scope of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of embodiments, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. However, it will be recognized by one of ordinary skill in the art that embodiments may be practiced without these specific details. In other examples, well-known methods, procedures, and components have not been described in detail so as not to unnecessarily obscure aspects of embodiments. Further, embodiments of the disclosure provide a method and an apparatus of memory management in a server system. Embodiments based on other types of systems are also possible and are not limited herein.
  • FIG. 1A shows a method of memory management. In step S101, at least one memory is pooled to generate a memory pool.
  • The memory mentioned in step S101 may be a memory managed by a memory controller configured by a server in a server system.
  • Pooling includes, but is not limited to, setting each memory managed by a memory controller configured by the server in the server system as a node, and connecting the nodes in the server system through a logic chip or a logic device to create a memory pool in units of the nodes.
  • In an embodiment of the disclosure, at least one memory is pooled. There are additional steps after the pooling operation to create the memory pool.
  • Each CPU configured for each server in the server system is set as a CPU node. The CPU nodes are interconnected through a logic chip or a logic device to create a CPU pool that includes the CPU nodes.
  • The pooling of the memories in the server system and the pooling of the CPUs realize a physical decoupling of the memory and the CPU in the server system. As a result of this physical decoupling, unconstrained configurations between the memories and CPUs in the server system can be accomplished. That is, the number of the memories in the server system can be increased or decreased individually, and the number of the CPUs may likewise be increased or decreased individually, avoiding needless underutilization of memory and CPU resources. A software sketch of this pooling appears below.
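  • As an illustration only, the following Python sketch models the pooling step in software: each memory managed by a controller becomes a node, CPUs become nodes of a CPU pool, and nodes can be added or removed individually. The class and variable names (MemoryNode, CpuNode, Pool) are hypothetical and are not part of the disclosed hardware.

    # A minimal software model of the pooling in step S101 (illustrative only).
    class MemoryNode:
        def __init__(self, controller_id, size_gb):
            self.controller_id = controller_id  # controller managing this memory
            self.size_gb = size_gb

    class CpuNode:
        def __init__(self, cpu_id):
            self.cpu_id = cpu_id

    class Pool:
        """Nodes connected through a logic chip or device, modeled as a list."""
        def __init__(self):
            self.nodes = []
        def add(self, node):
            self.nodes.append(node)    # resources can be added individually...
        def remove(self, node):
            self.nodes.remove(node)    # ...or removed, thanks to the decoupling

    # Pool every memory and every CPU in the server system.
    memory_pool = Pool()
    for ctrl in range(4):              # e.g., m = 4 memory controllers
        memory_pool.add(MemoryNode(ctrl, size_gb=16))

    cpu_pool = Pool()
    for cpu in range(2):               # e.g., n = 2 CPUs
        cpu_pool.add(CpuNode(cpu))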
  • Additionally, there may be other ways or examples to implement or configure the pooling of memories and the CPUs that are not being addressed in detail herein.
  • In step S102, the memory pool is divided to generate at least one memory space, each of the memory spaces being allocated respectively to a plurality of CPUs in a one-to-one correspondence manner. The memory space allocated to a CPU is set as a pinned memory of the CPU, and the unallocated memory space is set as a shared memory pool of the memory pool. That is, the unallocated memory space includes memory space that remains unallocated in the memory pool.
  • Assuming the number of the CPUs in the server system is “n” and the number of the memory controllers is “m,” where (n<m), the steps to divide the memory pool to generate at least one memory space to allocate to a respective CPU and set the allocated memory space as the pinned memory of the respective CPU are described next.
  • Initially, a number of memory controllers equal to the number of the CPUs in the plurality of CPUs is selected. Here, from the above-described “m” memory controllers, “n” memory controllers are randomly selected as the “n” memory controllers corresponding to the “n” CPUs to establish a one-to-one mapping relationship.
  • According to an embodiment of the disclosure, the relationships between the memory controllers and the CPUs are one-to-one correspondence relationships. There are other types of correspondence relationships between the memory controllers and the CPUs such as in a one-to-many or many-to-one manner, which can be easily configured but will not be addressed in detail herein.
  • Next, a set of relationships is generated between the “n” Quick Path Interconnect “QPI” port addresses of the “n” CPUs and the “n” port addresses of the “n” memory controllers in a one-to-one correspondence manner. Following this, a corresponding memory space of at least one dual in-line memory module “DIMM” slot in at least one channel under each memory controller of the “n” memory controllers is allocated to the CPU that has a one-to-one corresponding relationship with the memory controller. The corresponding memory space is set as the pinned memory of the CPU. The allocating is repeated until all the CPUs in “n” CPUs have obtained pinned memories.
  • The Quick Path Interconnect “QPI” is a packet-transmission-based, high-speed, serial point-to-point connection protocol that is used by a CPU to access data. The DIMM, or dual in-line memory module, is a module that includes one or more random-access memory chips on a small circuit board with pins that connect to the computer motherboard.
  • A memory controller manages the exchange of data between a selected memory and the CPU. Each memory controller can support two to four DDR (double data rate synchronous dynamic random-access memory) channels, and each channel can support one to three DIMM slots, i.e., one to three memory chips. The memory space of each memory chip corresponds to the memory space of its DIMM slot. This controller/channel/DIMM hierarchy is sketched in code below.
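  • The following Python sketch, an assumption-laden illustration rather than the disclosed implementation, models the controller/channel/DIMM hierarchy and the pinning of Table 1 (channel 01 DIMM 1-3 plus channel 02 DIMM 1 pinned to each CPU, the rest left in the shared memory pool). All names are hypothetical.

    # Software model of the controller -> channel -> DIMM hierarchy (step S102).
    class Dimm:
        def __init__(self, slot):
            self.slot = slot
            self.pinned_to = None      # CPU id, or None = shared memory pool

    class Channel:
        def __init__(self):
            self.dimms = [Dimm(s) for s in (1, 2, 3)]   # 1-3 DIMM slots per channel

    class MemoryController:
        def __init__(self, port):
            self.port = port
            self.channels = [Channel() for _ in range(4)]  # 2-4 DDR channels

    def pin_cpu(controller, cpu_id):
        # As in Table 1: channel 01 DIMM 1-3 and channel 02 DIMM 1 become the
        # CPU's pinned memory; every remaining DIMM stays in the shared pool.
        for dimm in controller.channels[0].dimms:
            dimm.pinned_to = cpu_id
        controller.channels[1].dimms[0].pinned_to = cpu_id

    controllers = [MemoryController(port) for port in range(1, 5)]  # m = 4
    for cpu_id, ctrl in zip((1, 2), controllers):                   # n = 2, n < m
        pin_cpu(ctrl, cpu_id)   # QPI port address i maps to controller port i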
  • In another embodiment according to the disclosure, the memory space which corresponds to at least one DIMM slot in at least one channel that is controlled by each memory controller is allocated to the CPU. The allocation of the memory space is performed in a one-to-one corresponding relationship as the pinned memory of the CPU.
  • For example, the corresponding memory spaces of 4 DIMM slots in 2 channels controlled by each memory controller are allocated to the CPU that is in a one-to-one corresponding relationship with the memory controller, and the corresponding memory spaces being set as the pinned memory of the CPU, which is shown in Table 1 including:
  • The corresponding memory space of 3 DIMM slots (DIMM 1, DIMM 2 and DIMM 3) in the first channel (channel 01) under each memory controller and the corresponding memory space of 1 DIMM slot (DIMM 1) in the second channel (channel 02) are allocated respectively to each CPU of “n” CPUs as the pinned memory of each CPU in “n” CPUs.
  • The unallocated memory space, which remains in the memory pool, is set as the shared memory pool. The parts in the Table 1 including the corresponding memory spaces of DIMM 2 and DIMM 3 in channel 02, and the corresponding memory spaces of DIMM 1, DIMM 2, and DIMM 3 in channel 03 and channel 04 controlled by each memory controller are set as the shared memory pool.
    TABLE 1
    CPU-QPI        Memory controller
    port address   port address        Channel   DIMM slots
    1              1                   01        DIMM 1, DIMM 2, DIMM 3
                                       02        DIMM 1, DIMM 2, DIMM 3
                                       03        DIMM 1, DIMM 2, DIMM 3
                                       04        DIMM 1, DIMM 2, DIMM 3
    2              2                   01        DIMM 1, DIMM 2, DIMM 3
                                       02        DIMM 1, DIMM 2, DIMM 3
                                       03        DIMM 1, DIMM 2, DIMM 3
                                       04        DIMM 1, DIMM 2, DIMM 3
    3              3                   ...       ...
    ...            ...                 ...       ...
    n              n                   ...       ...
                   n + 1               ...       ...
                   n + 2               ...       ...
                   n + 3               ...       ...
                   ...                 ...       ...
                   m                   ...       ...
  • In the above embodiment according to the disclosure, the address of the memory space of the memory chip is a fixed address. That is to say, the address of the corresponding memory space of the DIMM slot is a fixed address.
  • There are other possible embodiments to implement the addresses of the corresponding memory space of a DIMM slot. An example is allocating a real-time address by the server system. Other address implementation techniques will not be explained in detail here.
  • In step S103, a memory value that represents usage of the respective memory space by the respective CPU during operation is obtained.
  • In this embodiment, the memory values that represent usage of the respective memory space by each CPU during operation are obtained for each of the above-described “n” CPUs.
  • It should be noted that the memory value used by each CPU of the “n” CPUs during operation, as obtained by the operating system in the server system, includes the memory value used by the operating system and the memory value used by application programs.
  • Other than the above-mentioned ways to obtain the used memory values, there are additional ways that are not going to be described in detail here.
  • In this embodiment, the memory values that represent usage of the respective memory space by the respective CPU during operation in “n” CPUs are obtained. These memory values are obtained by setting each CPU of “n” CPUs as a determining unit to determine if the memory value used by the CPU exceeds a preset threshold range. According to the determination, further operation is taken such as whether to allocate additional memory space from the shared memory pool to the CPU, or to free part of original memory space that has been allocated to the CPU.
  • Besides the method mentioned above, there are other ways for determining the memory space needed. For example, to obtain memory values used by each CPU during operation of “n” CPUs, all the memory values used by at least one CPU during operation of “n” CPUs can be summed up to obtain an overall memory value used by at least one CPU during operation. Then, it is determined if the overall memory value used by at least one CPU during operation exceeds a preset threshold range. According to the determination, either additional memory space is selected from the shared memory pool and allocated to each of the CPUs, or the memory space is freed. Both variants are sketched below.
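  • A minimal sketch of step S103, assuming hypothetical OS query functions (get_os_usage and get_app_usage are stand-ins for whatever interface the operating system actually exposes); it shows both the per-CPU memory values and the summed overall memory value.

    # Step S103 (sketch): obtain a memory value per CPU covering both the
    # operating system's usage and the application programs' usage.
    def get_os_usage(cpu_id):
        return 2.0                     # GB used by the OS (placeholder value)

    def get_app_usage(cpu_id):
        return 6.5                     # GB used by applications (placeholder)

    def memory_value(cpu_id):
        return get_os_usage(cpu_id) + get_app_usage(cpu_id)

    # Variant 1: each CPU is its own determining unit.
    per_cpu = {cpu_id: memory_value(cpu_id) for cpu_id in (1, 2)}

    # Variant 2: sum all values into one overall memory value.
    overall = sum(per_cpu.values())
    print(per_cpu, overall)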
  • In step S104, a determination is made as to whether the memory value exceeds a preset threshold range.
  • In this step, the memory values of each CPU during operation of “n” CPUs are used to determine if each memory value exceeds a preset threshold range of each CPU.
  • As an example, one CPU of “n” CPUs is used in the process to determine if the memory value exceeds a preset threshold range of the CPU, including:
  • Determining if the memory value used by the CPU during operation exceeds a preset first threshold.
  • If the memory value used by the CPU during operation exceeds a preset first threshold, then step S105 is initiated to apply for additional memory space from the shared memory pool according to a certain proportion of the memory value, and to allocate the additional memory space to the corresponding CPU.
  • If the memory value used by the CPU during processing does not exceed a preset first threshold, a next step is initiated to determine if the memory value used by the CPU during processing is less than a preset second threshold.
  • Determining if the memory value used by the CPU during operation is less than the preset second threshold.
  • If the memory value used by the CPU during operation is less than the preset second threshold, in step S105, the memory space allocated to the CPU is partially released according to a certain proportion of the memory value. The process of releasing memory continues until the memory space allocated to the CPU reaches the size of the pinned memory.
  • If the memory value used by the CPU during operation is not less than a preset second threshold, the process returns to step S103 to obtain the memory values used by each of the CPUs of “n” CPUs.
  • In addition to these solutions, there are other ways to implement the steps. For example, it can be determined in turn, for each of the “n” CPUs, if the memory value used by that CPU during operation exceeds a preset first threshold, and/or if it is less than a preset second threshold. The determining steps are repeated “n” times until all “n” CPUs have been processed. This repetition will not be discussed in further detail here; a sketch of the per-CPU determination follows this paragraph.
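  • A minimal sketch of the per-CPU determination in step S104, assuming example threshold values and a fixed 20% proportion; the allocate and release callbacks are hypothetical stand-ins for the step S105 mechanisms described below.

    # Step S104 (sketch): compare each CPU's memory value against a preset
    # threshold range and trigger step S105 accordingly.
    FIRST_THRESHOLD = 12.0    # GB, upper bound (example value)
    SECOND_THRESHOLD = 4.0    # GB, lower bound (example value)

    def determine(cpu_id, value, allocate, release):
        if value > FIRST_THRESHOLD:
            allocate(cpu_id, value * 0.20)   # apply for ~20% of the memory value
        elif value < SECOND_THRESHOLD:
            release(cpu_id, value * 0.20)    # release, never below the pinned size
        # otherwise: in range, return to step S103 and keep monitoring

    per_cpu = {1: 13.2, 2: 3.1}              # example memory values (GB)
    for cpu_id, value in per_cpu.items():
        determine(cpu_id, value,
                  allocate=lambda c, a: print(f"CPU {c}: +{a:.1f} GB from shared pool"),
                  release=lambda c, a: print(f"CPU {c}: -{a:.1f} GB released"))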
  • For the embodiment according to the disclosure, corresponding to step S103, to obtain the memory values used by each CPU during operation of “n” CPUs, all the memory values used by at least one CPU during operation of “n” CPUs are summed up to obtain an overall memory value used by at least one CPU during processing. Next, it is determined if the overall memory value used by at least one CPU during operation exceeds a preset threshold range. According to the determination, either additional memory space is selected from the shared memory pool, which is allocated to each of the CPUs, or the memory space is freed or reallocated.
  • In step S104, the determining of whether the overall memory value used by at least one CPU during operation exceeds a preset threshold range can be implemented as described next.
  • Determining if the overall memory value used by at least one CPU of the “n” CPUs during operation exceeds a preset first threshold.
  • If the overall memory value used by at least one CPU during operation exceeds a preset first threshold, at least one additional memory space is applied from the memory pool according to a certain proportion of the memory value, to be allocated to each CPU in the at least one CPU in a one-to-one corresponding manner.
  • If the overall memory value used by at least one CPU during operation does not exceed a preset first threshold, a next step is initiated to determine if the overall memory value used by at least one CPU during operation is less than a preset second threshold.
  • Determining if the overall memory value used by at least one CPU during operation is less than the preset second threshold.
  • If the overall memory value used by at least one CPU during operation is less than the preset second threshold, the memory space allocated to each CPU in the at least one CPU is partially released according to a certain proportion of the memory value. The process of releasing memory continues until the memory spaces allocated to each CPU in the at least one CPU reaches the size of the pinned memory.
  • If the overall memory value used by at least one CPU during operation is not less than the preset second threshold, the process returns to step S103 to obtain the memory values used by each CPU of “n” CPUs.
  • In step S105, additional memory space is applied from the memory pool to allocate to the CPU, or the memory space allocated to the CPU is reallocated or freed.
  • A precondition for implementing step S105 is that, in step S104, it is determined that the memory value used by a CPU during processing exceeds a preset first threshold, or that the memory value used by a CPU during processing is less than a preset second threshold.
  • If the memory value used by a CPU during processing exceeds the preset first threshold, one additional memory space is applied from the memory pool according to a certain proportion of the memory value to be allocated to the CPU.
  • If the memory value used by a CPU during operation is less than the preset second threshold, the memory space allocated to the CPU is partially released until the memory space allocated to the CPU reaches the size of the pinned memory.
  • It should be noted that, in this embodiment, when applying additional memory spaces from the memory pool according to a certain proportion of the memory value to be allocated to the CPU, the corresponding memory space of the DIMM slot in the same channel, or the corresponding memory space of the DIMM slot under the same memory controller, will be allocated to the CPU, as described next.
  • Initially, determining if there exists a DIMM slot in the memory pool located in the same channel controlled by the same memory controller as the DIMM slot that corresponds to the pinned memory of the CPU.
  • If there is such a DIMM slot in the memory pool located in the same channel under the same memory controller as the DIMM slot that corresponds to the pinned memory, then the corresponding memory space of the DIMM slot is allocated to the CPU according to a certain proportion of the memory value.
  • For example, in a memory pool for the corresponding memory space of the DIMM slot in the same channel controlled by the same memory controller with the DIMM slot corresponding to the pinned memory of the CPU, the size of the memory space applied from the memory space of the corresponding DIMM slot will be equivalent to 20% of the memory value used by the CPU during operation. This is the amount to be allocated to the CPU.
  • If there does not exist a DIMM slot in the memory pool located in the same channel under the same memory controller as the DIMM slot that corresponds to the pinned memory, a next step is initiated to determine if there exists a DIMM slot in the memory pool located in the same memory controller as the DIMM slot that corresponds to the pinned memory, which is described in the following.
  • It is determined if there is a DIMM slot in the memory pool located under the same memory controller as the DIMM slot that corresponds to the pinned memory. If there is such a DIMM slot, the corresponding memory space of the DIMM slot is allocated to the CPU according to a certain proportion of the memory value.
  • For example, in the memory pool, for the corresponding memory space of the DIMM slot in the same memory controller with the DIMM slot corresponding to the pinned memory of the CPU, the size of the memory space applied from the memory space of the corresponding DIMM slot will be equivalent to 20% of the memory value used by the CPU during operation, which is allocated to the CPU.
  • If there does not exist a DIMM slot in the memory pool located in the same memory controller as the DIMM slot that corresponds to the pinned memory, the memory space in the shared memory pool is allocated to the CPU according to a certain proportion of the memory value.
  • For example, the memory space applied from the shared memory pool will be equivalent in size to 20% of the memory value used by the CPU during operation, which is allocated to the CPU. The full three-tier preference is sketched below.
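  • A minimal sketch of the locality preference in step S105, under the assumption that DIMMs can be modeled as in the earlier hierarchy sketch: first a free DIMM in the same channel as the CPU's pinned DIMMs, then any free DIMM under the same memory controller, then the shared memory pool at large. All names are illustrative.

    # Step S105 (sketch): three-tier search for memory near the pinned DIMMs.
    class Dimm:
        def __init__(self, slot):
            self.slot = slot
            self.pinned_to = None      # CPU id, or None = shared memory pool

    def first_free(dimms):
        return next((d for d in dimms if d.pinned_to is None), None)

    def allocate_for(cpu_id, channels, shared_pool):
        # Tier 1: a free DIMM in the same channel as one of this CPU's pinned DIMMs.
        for channel in channels:
            if any(d.pinned_to == cpu_id for d in channel):
                d = first_free(channel)
                if d:
                    d.pinned_to = cpu_id
                    return d
        # Tier 2: any free DIMM under the same memory controller.
        for channel in channels:
            d = first_free(channel)
            if d:
                d.pinned_to = cpu_id
                return d
        # Tier 3: fall back to the shared memory pool at large.
        d = first_free(shared_pool)
        if d:
            d.pinned_to = cpu_id
        return d

    # Example: DIMM 1 of channel 01 is pinned to CPU 7; the next allocation
    # (sized at ~20% of the CPU's memory value) lands in the same channel.
    channels = [[Dimm(1), Dimm(2), Dimm(3)] for _ in range(4)]
    channels[0][0].pinned_to = 7
    print(allocate_for(7, channels, shared_pool=[]).slot)   # -> 2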
  • There are other possible embodiments to implement the above-described application of memory space from the shared memory pool for allocation to a CPU, which will not be explained in detail herein.
  • Similarly, the step of releasing or reallocating memory spaces from the memory pool according to a certain proportion of the memory value may be implemented in other ways corresponding to the above-mentioned method.
  • For example, for the corresponding memory space of the DIMM slot in the same memory controller with the DIMM slot corresponding to the pinned memory of the CPU, the size of the memory space released will be equivalent to 20% of the memory value used by the CPU during operation.
  • There are other possible methods to implement the releasing or reallocating of the memory space, which will not be explained in detail herein.
  • It should be noted that steps S103, S104, and S105 form a cyclic process of determination. The memory values used by each CPU during operation of “n” CPUs are obtained in step S103. Whether the memory value used by each CPU during operation of “n” CPUs exceeds a preset threshold range is determined in step S104. Based on the result of step S104, in step S105, additional memory space is selected from the shared memory pool to be allocated to the CPU, or the memory space allocated to the CPU is reallocated or freed.
  • The cyclic determination process of steps S103, S104, and S105 implements real-time monitoring of the memory values used by the CPU during processing. According to the result of the monitoring, certain memory space is allocated to the CPU or certain memory space of the CPU is released, such that the memory is utilized properly and the functionality of the server system is enhanced. A sketch of this monitoring loop follows.
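  • The cyclic determination can be pictured as the following loop, a sketch only: sample_memory_value, expand, and shrink are hypothetical callbacks standing in for the step S103 and S105 mechanisms above, and the sampling interval is arbitrary.

    # Steps S103 -> S104 -> S105 -> S103 ... as a real-time monitoring loop.
    import time

    def monitor(cpu_ids, first_threshold, second_threshold,
                sample_memory_value, expand, shrink, interval_s=1.0):
        while True:
            for cpu_id in cpu_ids:
                value = sample_memory_value(cpu_id)   # S103: obtain memory value
                if value > first_threshold:           # S104: threshold range check
                    expand(cpu_id, value * 0.20)      # S105: apply for more space
                elif value < second_threshold:
                    shrink(cpu_id, value * 0.20)      # S105: release toward pinned size
            time.sleep(interval_s)                    # then return to S103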
  • Further, corresponding to the steps S103 and S104, according to the obtained memory values used by each CPU during operation of “n” CPUs, all the memory values used by at least one CPU during operation of “n” CPUs are summed up to obtain an overall memory value used by at least one CPU during operation. Next, it is determined if the overall memory value used by at least one CPU during operation exceeds a preset threshold range. According to the result of the determination, either additional memory space is selected from the shared memory pool, which is allocated to each CPU in the at least one CPU, or the memory space is reallocated or freed.
  • The steps of applying for additional memory space from the shared memory pool to allocate to each CPU in the at least one CPU are further described next.
  • First, it is determined, in turn, if there exists a DIMM slot in the shared memory pool located in the same channel as each DIMM slot that corresponds to a pinned memory of each CPU in the at least one CPU.
  • If there is such a DIMM slot in the shared memory pool located in the same channel as each DIMM slot that corresponds to a pinned memory of each CPU in the at least one CPU, the corresponding memory space of the DIMM slot is then allocated to the CPU according to a certain proportion of the memory value.
  • If there does not exist a DIMM slot in the shared memory pool locating in the same channel as a DIMM slot that corresponds to a pinned memory of each of the CPUs, a next step is initiated to determine if there exists a DIMM slot in the shared memory pool located in the same memory controller as the DIMM slot that corresponds to a pinned memory of each CPU in the at least one CPU.
  • It is determined if there is a DIMM slot in the shared memory pool located in the same memory controller as the DIMM slot that corresponds to a pinned memory of each CPU of the set of CPUs.
  • If there is a DIMM slot in the shared memory pool located in the same memory controller as the DIMM slot that corresponds to a pinned memory of each CPU of the set of CPUs, the corresponding memory space of the DIMM slot is allocated to the CPU according to a certain proportion of the memory value.
  • If such DIMM slot does not exist in the shared memory pool locating in the same memory controller as the DIMM slot that corresponds to a pinned memory of each CPU in the at least one CPU, the memory space in the shared memory pool is allocated to the CPU according a certain proportion of the memory value.
  • The above-described determination process is repeated until memory has been allocated to each of the at least one CPU. Other embodiments are also possible without limitation to the above-described process; a locality-aware fallback of this kind is sketched below.
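The channel-then-controller-then-pool fallback can be sketched as a simple search over shared-pool DIMM slots. The DimmSlot type and the controller/channel fields assumed on the CPU object are inventions for this example, not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class DimmSlot:
    controller: int  # memory controller the slot hangs off (assumed field)
    channel: int     # channel within that controller (assumed field)
    free_bytes: int  # unallocated space on this DIMM


def find_pool_slot(pool_slots, cpu):
    """Three-level locality fallback: prefer a shared-pool DIMM slot on
    the same channel as the CPU's pinned memory, then one under the
    same memory controller, then any slot in the shared pool."""
    same_channel = [s for s in pool_slots
                    if s.controller == cpu.controller
                    and s.channel == cpu.channel]
    if same_channel:
        return same_channel[0]
    same_controller = [s for s in pool_slots
                       if s.controller == cpu.controller]
    if same_controller:
        return same_controller[0]
    return pool_slots[0] if pool_slots else None
```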
  • FIG. 1B shows a flow diagram of a method of memory management according to another embodiment of the disclosure. In step S110, at least one memory is pooled to generate a memory pool. In step S130, the memory pool is divided to generate at least one memory space. In step S140, the memory spaces are allocated to a plurality of CPUs in a one-to-one correspondence. In step S141, each allocated memory space is set as the pinned memory of its respective CPU. In step S142, the unallocated memory space is set as a shared memory pool of the memory pool. That is, the unallocated memory space includes memory space that remains unallocated in the memory pool.
  • In step S150, a memory value that represents usage of the respective memory space by the respective CPU during operation is obtained. In step S160, it is determined whether the memory value obtained in step S150 exceeds a preset threshold range. If the memory value used by the CPU during operation exceeds a preset first threshold, step S171 is initiated to request additional memory space from the shared memory pool according to a certain proportion of the memory value and to allocate it to the corresponding CPU. If the memory value used by the CPU during operation is less than a preset second threshold, in step S172, the memory space allocated to the CPU is partially released according to a certain proportion of the memory value. The setup portion of this flow is sketched below.
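The setup portion of this flow (steps S110-S142) might look like the following sketch, reusing the CPUState and SharedPool types from the first sketch; the monitor loop shown earlier would then drive steps S150-S172. The sizes are illustrative assumptions.

```python
def setup(total_bytes: int, num_cpus: int, pinned_each: int):
    """S110-S142 in miniature: pool the memory (S110), divide it
    (S130), pin one space per CPU (S140/S141), and keep the
    remainder as the shared pool (S142)."""
    assert pinned_each * num_cpus <= total_bytes, "pool too small"
    cpus = [CPUState(memory_value=0, pinned_size=pinned_each)
            for _ in range(num_cpus)]
    shared = SharedPool(free_bytes=total_bytes - pinned_each * num_cpus)
    return cpus, shared


# Example usage under assumed sizes: an 8-CPU pool of 1 TiB with
# 64 GiB pinned per CPU, then monitored as in the earlier sketch.
cpus, shared = setup(1 << 40, 8, 64 << 30)
monitor(cpus, shared, upper=48 << 30, lower=16 << 30, cycles=1)
```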
  • FIG. 2 shows another embodiment of the disclosure, a device 200 for memory management that includes the following units.
  • A memory pooling unit 210 is configured to pool at least one memory to generate a memory pool.
  • A memory divider 220 divides the memory pool to generate at least one memory space and allocates a respective memory space to a respective CPU in a one-to-one correspondence manner. The respective memory space allocated to a respective CPU is set as a pinned memory of the respective CPU. An unallocated memory space is set as a shared memory pool of the memory pool. That is, the unallocated memory space includes memory space that remains unallocated, or is not pinned memory, in the memory pool.
  • A memory value obtaining unit 230 obtains a memory value that represents usage of the respective memory space by the respective CPU during operation. A memory value determination unit 240 determines whether the memory value exceeds a preset threshold range and whether to invoke the memory manager 250 if it does.
  • The memory manager 250 is configured to select additional memory space from the shared memory pool to allocate to the CPU, and/or to release or reallocate the memory space allocated to the CPU, when the threshold is exceeded.
  • Alternatively, the memory divider 220 includes a memory controller selecting sub-module configured to select a set of memory controllers, where the number of memory controllers is equal to the number of CPUs in the set of CPUs. It also includes an allocator sub-module configured to match a QPI port address of the CPU with a port address of the memory controller in a one-to-one correspondence and to allocate the corresponding memory space of at least one DIMM slot in at least one channel under the memory controller to the CPU. The corresponding memory is set as a pinned memory of the CPU, and the corresponding memory address of the DIMM slot is a fixed address.
  • Alternatively, the device 200 also includes a CPU pooling unit configured to set each CPU in a set of CPUs as a node and to connect all the nodes to generate a CPU pool. A hypothetical object-level sketch of these units follows.
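Purely as an illustration of how these units might map onto code, the following hypothetical class mirrors device 200, building on the setup and grow/shrink sketches above; none of the names are taken from the patent.

```python
class MemoryManagementDevice:
    """Hypothetical structure mirroring device 200: pooling (210) and
    division (220) happen at construction; each step() performs the
    work of the value obtaining unit (230), the determination unit
    (240), and the memory manager (250)."""

    def __init__(self, total_bytes, num_cpus, pinned_each, upper, lower):
        # units 210/220: pool the memory, divide it, pin spaces to CPUs
        self.cpus, self.pool = setup(total_bytes, num_cpus, pinned_each)
        self.upper, self.lower = upper, lower

    def step(self):
        for cpu in self.cpus:
            value = cpu.memory_value           # unit 230: obtain value
            if value > self.upper:             # unit 240: determine
                grow_pinned(cpu, self.pool)    # unit 250: allocate
            elif value < self.lower:
                shrink_pinned(cpu, self.pool)  # unit 250: release
```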
  • In a typical configuration, a computing device includes one or more CPUs, an input/output port, an Internet port, and a memory.
  • According to the disclosure, computer-readable media do not include transitory media, such as modulated data signals and carrier waves. The computer-readable media disclosed include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash RAM and other memory technologies, compact disc read-only memory (CD-ROM), digital video disc (DVD) and other optical storage, magnetic tape, magnetic disc, other magnetic storage, and any other non-transitory media.
  • It is appreciated that those skilled in the art will understand that the disclosure may be implemented as methods, systems, and/or instructions for a computer. It is intended that the disclosure may be implemented as hardware, software, and/or a combination of hardware and software. The disclosure may be implemented as a computer program product utilizing one or more storage media that include computer program instructions. The storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash RAM and other memory technologies, compact disc read-only memory (CD-ROM), digital video disc (DVD) and other optical storage, magnetic tape, magnetic disc, other magnetic storage, and any other storage type.
  • Although certain embodiments and methods have been disclosed herein, it will be apparent from the foregoing disclosure to those skilled in the art that variations and modifications of such embodiments and methods may be made without departing from the spirit and scope of the disclosure. It is intended that the disclosure be defined by the appended claims and the rules and principles of applicable law.

Claims (20)

What is claimed is:
1. A server system comprising:
a plurality of memory chips that define a memory space that includes a plurality of pinned memory spaces and an unallocated memory space;
a plurality of memory controllers having a plurality of channels, each channel being coupled to a number of memory chips of the plurality of memory chips; and
a plurality of CPUs coupled to the plurality of memory controllers in a one-to-one correspondence, the plurality of CPUs being pinned to the plurality of pinned memory spaces in a one-to-one correspondence, a CPU of the plurality of CPUs having a corresponding memory controller and a corresponding pinned memory space, the CPU to determine a utilization value of the corresponding pinned memory space and, when the utilization value exceeds an upper threshold, determine if a channel of the corresponding memory controller is coupled to a number of memory chips that include both the corresponding pinned memory space and a portion of the unallocated memory space.
2. The server system of claim 1 wherein when a channel of the corresponding memory controller is coupled to a number of memory chips that include both the corresponding pinned memory space and the portion of the unallocated memory space, the portion of the unallocated memory space is added to the corresponding pinned memory space to form an increased memory space.
3. The server system of claim 2 wherein the portion of the unallocated memory space is equal to 20% of the utilization value.
4. The server system of claim 2, further comprising:
when no channel of the corresponding memory controller is coupled to a number of memory chips that include both the corresponding pinned memory space and the portion of the unallocated memory space, the CPU to determine if a channel of the corresponding memory controller is coupled to a memory chip that includes a part of the unallocated memory space; and
when a channel of the corresponding memory controller is coupled to a memory chip that includes the part of the unallocated memory space, the part of the unallocated memory space is added to the corresponding pinned memory space to form the increased memory space.
5. The server system of claim 4, further comprising:
when no channel of the corresponding memory controller is coupled to a memory chip that includes the part of the unallocated memory space, the CPU to determine if a channel of a non-corresponding memory controller of the plurality of memory controllers is coupled to a memory chip that includes a piece of the unallocated memory space; and
when a channel of the non-corresponding memory controller is coupled to a memory chip that includes the piece of the unallocated memory space, the piece of the unallocated memory space is added to the pinned memory space to form the increased memory space.
6. The server system of claim 5, wherein the CPU to further:
determine a utilization value of the increased memory space;
when the utilization value of the increased memory space falls below the upper threshold, determine if the utilization value of the increased memory space falls below a lower threshold; and
when the utilization value of the increased memory space falls below the lower threshold, a section of the increased memory space is released to form a reduced memory space.
7. The server system of claim 6 wherein the section of the increased memory space is equal to 20% of the utilization value.
8. A method of managing a memory comprising:
pooling a plurality of memory chips to form a memory pool, the plurality of memory chips being coupled to a plurality of memory controllers, the plurality of memory controllers being coupled to a plurality of CPUs in a one-to-one correspondence;
dividing the memory pool to form a plurality of pinned memory spaces and an unallocated memory space;
pinning each pinned memory space to a corresponding CPU of the plurality of CPUs; and
adding memory space from the unallocated memory space to a pinned memory space to form an increased memory space when a utilization of the pinned memory space exceeds an upper threshold.
9. The method of claim 8 wherein the memory space added from the unallocated memory space is equal to 20% of the utilization.
10. The method of claim 9 and further comprising releasing memory space from the increased memory space when a utilization of the increased memory space falls below both the upper threshold and a lower threshold.
11. The method of claim 10 and further comprising adding memory to the memory pool to increase a size of the unallocated memory space.
12. The method of claim 11 wherein each memory controller has a plurality of channels, each channel being coupled to one or more memory chips.
13. The method of claim 12 wherein the plurality of memory controllers are electrically coupled together.
14. A method of operating a memory space that has a plurality of pinned memory spaces and an unallocated memory space, the method comprising:
obtaining a utilization value that represents usage of a pinned memory space of the plurality of pinned memory spaces by a CPU during operation, the CPU being coupled to a memory controller, the memory controller having a plurality of channels, one or more channels being coupled to one or more memory chips that include the pinned memory space;
determining if the utilization value exceeds an upper threshold;
when the utilization value exceeds the upper threshold, determining if a channel of the memory controller is coupled to a number of memory chips that include both the pinned memory space and a portion of the unallocated memory space.
15. The method of claim 14 wherein when a channel of the memory controller is coupled to a number of memory chips that include both the pinned memory space and the portion of the unallocated memory space, adding the portion of the unallocated memory space to the pinned memory space to form an increased memory space.
16. The method of claim 15 wherein the portion of the unallocated memory space is equal to 20% of the utilization value.
17. The method of claim 15, further comprising:
when no channel of the memory controller is coupled to a number of memory chips that include both the pinned memory space and the portion of the unallocated memory space, determining if a channel of the memory controller is coupled to a memory chip that includes a part of the unallocated memory space; and
when a channel of the memory controller is coupled to a memory chip that includes the part of the unallocated memory space, adding the part of the unallocated memory space to the pinned memory space to form the increased memory space.
18. The method of claim 17, further comprising:
when no channel of the memory controller is coupled to a memory chip that includes the part of the unallocated memory space, determining if a channel of another memory controller is coupled to a memory chip that includes a piece of the unallocated memory space, said another memory controller being coupled to another CPU; and
when a channel of said another memory controller is coupled to a memory chip that includes the piece of the unallocated memory space, adding the piece of the unallocated memory space to the pinned memory space to form the increased memory space.
19. The method of claim 18, further comprising:
determining a utilization value of the increased memory space;
when the utilization value of the increased memory space falls below the upper threshold, determining if the utilization value of the increased memory space falls below a lower threshold; and
when the utilization value of the increased memory space falls below the lower threshold, releasing a section of the increased memory space to form a reduced memory space.
20. The method of claim 19 wherein the section of the increased memory space is equal to 20% of the utilization value.
US15/627,001 2014-11-25 2017-06-19 Method and apparatus for memory management Abandoned US20170308461A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/627,001 US20170308461A1 (en) 2014-11-25 2017-06-19 Method and apparatus for memory management

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201410686872.8A CN105701019A (en) 2014-11-25 2014-11-25 Memory management method and memory management device
CN201410686872.8 2014-11-25
US14/952,847 US9715443B2 (en) 2014-11-25 2015-11-25 Method and apparatus for memory management
US15/627,001 US20170308461A1 (en) 2014-11-25 2017-06-19 Method and apparatus for memory management

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/952,847 Continuation US9715443B2 (en) 2014-11-25 2015-11-25 Method and apparatus for memory management

Publications (1)

Publication Number Publication Date
US20170308461A1 true US20170308461A1 (en) 2017-10-26

Family

ID=56010340

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/952,847 Active US9715443B2 (en) 2014-11-25 2015-11-25 Method and apparatus for memory management
US15/627,001 Abandoned US20170308461A1 (en) 2014-11-25 2017-06-19 Method and apparatus for memory management

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/952,847 Active US9715443B2 (en) 2014-11-25 2015-11-25 Method and apparatus for memory management

Country Status (7)

Country Link
US (2) US9715443B2 (en)
EP (1) EP3224726B1 (en)
JP (1) JP2017535888A (en)
KR (1) KR102589155B1 (en)
CN (1) CN105701019A (en)
TW (1) TWI728949B (en)
WO (1) WO2016086203A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109495401A (en) * 2018-12-13 2019-03-19 迈普通信技术股份有限公司 The management method and device of caching
US11487445B2 (en) * 2016-11-22 2022-11-01 Intel Corporation Programmable integrated circuit with stacked memory die for storing configuration data

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2876379A1 (en) * 2014-12-29 2016-06-29 Adam J. Storm Memory management in presence of asymmetrical memory transfer costs
CN106681835B (en) * 2016-12-28 2019-04-05 华为技术有限公司 The method and resource manager of resource allocation
CN107179997A (en) * 2017-06-12 2017-09-19 合肥东芯通信股份有限公司 A kind of method and device of configuration memory cell
CN107766153A (en) * 2017-10-17 2018-03-06 华为技术有限公司 A kind of EMS memory management process and device
CN110032440A (en) * 2018-01-11 2019-07-19 武汉斗鱼网络科技有限公司 A kind of EMS memory management process and relevant apparatus
TWI722269B (en) * 2018-01-26 2021-03-21 和碩聯合科技股份有限公司 Firmware updating method and electronic device using the same
CN110162395B (en) * 2018-02-12 2021-07-20 杭州宏杉科技股份有限公司 Memory allocation method and device
CN109194721A (en) * 2018-08-15 2019-01-11 无锡江南计算技术研究所 A kind of asynchronous RDMA communication dynamic memory management method and system
CN109522113B (en) * 2018-09-28 2020-12-18 迈普通信技术股份有限公司 Memory management method and device
KR20210046348A (en) * 2019-10-18 2021-04-28 삼성전자주식회사 Memory system for flexibly allocating memory for multiple processors and operating method thereof
CN112988370A (en) * 2019-12-13 2021-06-18 南京品尼科自动化有限公司 Intelligent communication management machine
US20220066928A1 (en) * 2020-09-02 2022-03-03 Microsoft Technology Licensing, Llc Pooled memory controller for thin-provisioning disaggregated memory
US20230075329A1 (en) * 2021-08-25 2023-03-09 Western Digital Technologies, Inc. Super Block Allocation Across Super Device In ZNS SSD
US11640254B2 (en) 2021-08-25 2023-05-02 Western Digital Technologies, Inc. Controlled imbalance in super block allocation in ZNS SSD

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6571262B2 (en) 2000-02-14 2003-05-27 Apple Computer, Inc. Transparent local and distributed memory management system
US5687370A (en) 1995-01-31 1997-11-11 Next Software, Inc. Transparent local and distributed memory management system
ATE254778T1 (en) * 1997-09-05 2003-12-15 Sun Microsystems Inc LOOKUP TABLE AND METHOD FOR DATA STORAGE THEREIN
US6249802B1 (en) * 1997-09-19 2001-06-19 Silicon Graphics, Inc. Method, system, and computer program product for allocating physical memory in a distributed shared memory network
US6381682B2 (en) * 1998-06-10 2002-04-30 Compaq Information Technologies Group, L.P. Method and apparatus for dynamically sharing memory in a multiprocessor system
US20020016891A1 (en) * 1998-06-10 2002-02-07 Karen L. Noel Method and apparatus for reconfiguring memory in a multiprcessor system with shared memory
US6804766B1 (en) 1997-11-12 2004-10-12 Hewlett-Packard Development Company, L.P. Method for managing pages of a designated memory object according to selected memory management policies
US6327606B1 (en) 1998-06-24 2001-12-04 Oracle Corp. Memory management of complex objects returned from procedure calls
US6701420B1 (en) 1999-02-01 2004-03-02 Hewlett-Packard Company Memory management system and method for allocating and reusing memory
AU2001239492A1 (en) 2000-02-07 2001-08-14 Insignia Solutions Plc Global constant pool to allow deletion of constant pool entries
DE60115154T2 (en) 2000-06-19 2006-08-10 Broadcom Corp., Irvine Method and device for data frame forwarding in an exchange
US6981244B1 (en) 2000-09-08 2005-12-27 Cisco Technology, Inc. System and method for inheriting memory management policies in a data processing systems
CA2355473A1 (en) 2000-09-29 2002-03-29 Linghsiao Wang Buffer management for support of quality-of-service guarantees and data flow control in data switching
US7380085B2 (en) * 2001-11-14 2008-05-27 Intel Corporation Memory adapted to provide dedicated and or shared memory to multiple processors and method therefor
US6718451B2 (en) 2002-01-31 2004-04-06 Intel Corporation Utilizing overhead in fixed length memory block pools
US6738886B1 (en) * 2002-04-12 2004-05-18 Barsa Consulting Group, Llc Method and system for automatically distributing memory in a partitioned system to improve overall performance
CA2426619A1 (en) 2003-04-25 2004-10-25 Ibm Canada Limited - Ibm Canada Limitee Defensive heap memory management
US7827375B2 (en) 2003-04-30 2010-11-02 International Business Machines Corporation Defensive heap memory management
US7447943B2 (en) * 2003-05-28 2008-11-04 Hewlett-Packard Development Company, L.P. Handling memory errors in response to adding new memory to a system
US7707320B2 (en) 2003-09-05 2010-04-27 Qualcomm Incorporated Communication buffer manager and method therefor
US7783852B2 (en) * 2003-11-26 2010-08-24 Oracle International Corporation Techniques for automated allocation of memory among a plurality of pools
US7302546B2 (en) * 2004-01-09 2007-11-27 International Business Machines Corporation Method, system, and article of manufacture for reserving memory
US7231504B2 (en) * 2004-05-13 2007-06-12 International Business Machines Corporation Dynamic memory management of unallocated memory in a logical partitioned data processing system
GB2418751A (en) 2004-10-02 2006-04-05 Hewlett Packard Development Co Managing memory across a plurality of partitions
US8234378B2 (en) * 2005-10-20 2012-07-31 Microsoft Corporation Load balancing in a managed execution environment
US7953008B2 (en) 2005-11-10 2011-05-31 Broadcom Corporation Cell copy count hazard detection
US8150946B2 (en) * 2006-04-21 2012-04-03 Oracle America, Inc. Proximity-based memory allocation in a distributed memory system
US7840752B2 (en) 2006-10-30 2010-11-23 Microsoft Corporation Dynamic database memory management policies
US7698528B2 (en) 2007-06-28 2010-04-13 Microsoft Corporation Shared memory pool allocation during media rendering
US20090150640A1 (en) * 2007-12-11 2009-06-11 Royer Steven E Balancing Computer Memory Among a Plurality of Logical Partitions On a Computing System
US8402242B2 (en) 2009-07-29 2013-03-19 International Business Machines Corporation Write-erase endurance lifetime of memory storage devices
US8209510B1 (en) 2010-01-13 2012-06-26 Juniper Networks, Inc. Secure pool memory management
US8522244B2 (en) * 2010-05-07 2013-08-27 Advanced Micro Devices, Inc. Method and apparatus for scheduling for multiple memory controllers
US8578194B2 (en) 2010-06-21 2013-11-05 Broadcom Corporation Green mode data buffer control
US8312258B2 (en) * 2010-07-22 2012-11-13 Intel Corporation Providing platform independent memory logic
WO2012147116A1 (en) 2011-04-25 2012-11-01 Hitachi, Ltd. Computer system and virtual machine control method
JP5807458B2 (en) * 2011-08-31 2015-11-10 富士通株式会社 Storage system, storage control device, and storage control method
US9063844B2 (en) 2011-09-02 2015-06-23 SMART Storage Systems, Inc. Non-volatile memory management system with time measure mechanism and method of operation thereof
US8719464B2 (en) * 2011-11-30 2014-05-06 Advanced Micro Device, Inc. Efficient memory and resource management
US8954698B2 (en) * 2012-04-13 2015-02-10 International Business Machines Corporation Switching optically connected memory
US9274839B2 (en) * 2012-09-27 2016-03-01 Intel Corporation Techniques for dynamic physical memory partitioning
JP6136460B2 (en) * 2013-03-28 2017-05-31 富士通株式会社 Information processing apparatus, information processing apparatus control program, and information processing apparatus control method
KR102117511B1 (en) * 2013-07-30 2020-06-02 삼성전자주식회사 Processor and method for controling memory
CN103544063B (en) * 2013-09-30 2017-02-08 三星电子(中国)研发中心 Method and device for removing processes applied to Android platform

Also Published As

Publication number Publication date
EP3224726B1 (en) 2021-05-05
US20160147648A1 (en) 2016-05-26
KR102589155B1 (en) 2023-10-16
WO2016086203A1 (en) 2016-06-02
EP3224726A4 (en) 2018-07-25
KR20170087900A (en) 2017-07-31
EP3224726A1 (en) 2017-10-04
CN105701019A (en) 2016-06-22
US9715443B2 (en) 2017-07-25
TW201619829A (en) 2016-06-01
TWI728949B (en) 2021-06-01
JP2017535888A (en) 2017-11-30

Similar Documents

Publication Publication Date Title
US9715443B2 (en) Method and apparatus for memory management
US20230176919A1 (en) Cloud-based scale-up system composition
US10567166B2 (en) Technologies for dividing memory across socket partitions
US11630702B2 (en) Cloud-based scale-up system composition
US8695079B1 (en) Allocating shared resources
US11805070B2 (en) Technologies for flexible and automatic mapping of disaggregated network communication resources
JP2013168140A (en) Method for deploying virtual machines
US20140025852A1 (en) Configurable Response Generator for Varied Regions of System Address Space
US20140006644A1 (en) Address Remapping Using Interconnect Routing Identification Bits
CN111324461B (en) Memory allocation method, memory allocation device, computer equipment and storage medium
WO2018032519A1 (en) Resource allocation method and device, and numa system
US20210181959A1 (en) Computing system and operating method thereof
WO2022063273A1 (en) Resource allocation method and apparatus based on numa attribute
US11003616B1 (en) Data transfer using point-to-point interconnect
WO2020024113A1 (en) Memory interleaving method and device
CN110731109B (en) Resource indication method, equipment and computer storage medium
US20180341614A1 (en) System and Method for I/O Aware Processor Configuration
US11451435B2 (en) Technologies for providing multi-tenant support using one or more edge channels
US20200226044A1 (en) Memory system and data processing system
US11431648B2 (en) Technologies for providing adaptive utilization of different interconnects for workloads
US20240028344A1 (en) Core mapping based on latency in a multiple core processor
CN117149447B (en) Bandwidth adjustment method, device, equipment and storage medium
CN116582437A (en) Cluster instance adjustment method and device and related equipment
US10684968B2 (en) Conditional memory spreading for heterogeneous memory sizes
CN115576622A (en) Setting method and device of BIOS configuration mode and storage medium

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION