US20080222435A1 - Power management in a power-constrained processing system - Google Patents
Power management in a power-constrained processing system
- Publication number: US20080222435A1 (application Ser. No. 11/681,818)
- Authority: US (United States)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
Definitions
- the present invention relates to power management in a computer system having multiple devices, such as in a rack-based server system or data center.
- Servers in a data center may be mounted in a rack to conserve space and place the servers and infrastructure within easy reach of an administrator.
- IBM eServer BLADECENTER is one example of a compact server arrangement (IBM and BLADECENTER are registered trademarks of International Business Machines Corporation, Armonk, N.Y.).
- Existing processing systems may be powered by a common power supply or power distribution unit (PDU).
- Some of the systems include a circuit, such as a Baseboard Management Controller (BMC), that a service processor uses to monitor real-time power consumption by a server. Using this feedback, the service processor can “throttle” the processors and/or memory on the server to maintain the power consumption below a set point or “power ceiling” set by an administrator and monitored by the chassis management module.
- U.S. Pat. No. 7,155,623 to IBM discloses a “Method and System for Power Management Including Local Bounding of Device Group Power Consumption.”
- U.S. Patent Application Publication No. US 2006/0156042 to IBM discloses a “Method, System, and Calibration Technique for Power Measurement and Management Over Multiple Time Frames.”
- a method of managing power in a processing system is provided.
- a net power limit is provided to a plurality of devices within the processing system. Power consumption of each device is detected. The net power limit is dynamically apportioned among the plurality of devices according to each device's detected power consumption.
- Each apportioned power limit is communicated to an associated one of a plurality of local controllers.
- Each local controller is coupled to an associated one of the plurality of devices.
- Each local controller is used to limit the amount of power to the associated device within the apportioned power limit of that local controller.
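- The apportionment steps above can be sketched in Python (a minimal illustration; the function name and the proportional-to-consumption rule are assumptions, since the disclosure does not fix a specific apportionment formula):

```python
def apportion(net_limit, consumption):
    """Dynamically apportion a net power limit among devices in
    proportion to each device's detected power consumption.
    Falls back to an even split when all devices are idle."""
    total = sum(consumption)
    if total == 0:
        return [net_limit / len(consumption)] * len(consumption)
    # Each device's share grows with its share of detected consumption,
    # while the shares always sum exactly to the net power limit.
    return [net_limit * p / total for p in consumption]
```

Each returned value would then be communicated to the local controller (e.g. a function running on the BMC) that enforces it for its associated device.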
- a computer program product comprising a computer usable medium including computer usable program code for managing power in a computer system.
- the computer program product includes computer usable program code for providing a net power limit to a plurality of devices within the processing system, for detecting power consumption for each of the plurality of devices, for dynamically apportioning the net power limit among the plurality of devices according to their detected power consumption, for communicating each apportioned power limit to an associated one of a plurality of local controllers each coupled to an associated one of the plurality of devices, and for powering the associated device within the apportioned power limit of that local controller.
- a power-controlled processing system including a plurality of electronic devices.
- a shared power supply is coupled to the devices for supplying power to the devices.
- Each of a plurality of local controllers is coupled to an associated one of the electronic devices for detecting power consumption of the associated electronic device, outputting power consumption signals representative of the detected power consumption, and selectively controlling power to the associated device within an apportioned power limit.
- a power management module is in electronic communication with the plurality of local controllers for receiving the power consumption signals, apportioning a net power limit according to the detected power consumption, and communicating each apportioned power limit to the local controller of the associated electronic device.
- FIG. 1 is a perspective view of a rack-based server system to which power may be managed according to the invention.
- FIG. 2 is a schematic diagram of a representative embodiment of a power-managed target system according to the invention, in the context of a multi-server computer system.
- FIG. 3 is a bar graph illustrating a non-ideal distribution of power to the target system of FIG. 2 at an instant in time.
- FIG. 4 is a bar graph illustrating a more suitable apportionment of available power in the six-server target system for the instantaneous loading in FIG. 3 .
- FIG. 5 is a flowchart outlining a method of managing power in a computer system according to the invention.
- FIG. 6 is a schematic diagram of a computer system that may be configured for managing power in a target system of devices according to the invention.
- the present invention provides improved systems and methods for managing power in a processing system having multiple components or devices, such as in a multi-server computer system.
- Embodiments of the invention are particularly suitable for managing power in rack-based computer systems, such as blade server systems, and in data centers.
- the invention includes methods for budgeting the use of power from a limited power supply by detecting the power consumption of multiple devices (e.g. servers) in a processing system, and dynamically apportioning a net power limit among the devices according to their detected power consumption. This provides each device with power according to the needs of that device at any given moment, while maintaining net power consumption within a net power limit. Benefits of managing power according to the invention include increased efficiency, along with an associated reduction in operation costs, heat production, and noise.
- a method of managing power in a processing system is provided.
- a “target system” is selected for which power is to be managed.
- the target system may be, for example, an entire datacenter, one or more rack-based server systems in a datacenter, or a subsystem thereof.
- the target system includes a plurality of “devices” powered by a shared power supply.
- in a rack-based server system, the selected target system may be the plurality of servers.
- a global (“net”) power limit is selected for the target system.
- the net power limit may be selected by a system designer, a system operator (user), or by hardware and/or software.
- the net power limit may be imposed, for example, to limit operating costs, heat, or sound levels generated by the target system.
- the net power limit is apportioned among the devices of the target system according to their respective power consumption.
- a power-regulated processing system apportions a net power limit among the devices of a target system.
- Each device may include an associated “local controller” for monitoring and controlling power to the device.
- the power management module and the local controllers may work in tandem to control the distribution of power to the servers according to the needs of the servers, as may be determined according to the real-time power consumption of the servers.
- the local controller typically includes a precision measurement and feedback control system that may be implemented, for example, using a hard, real-time function running on the BMC.
- Each local controller communicates information regarding the power consumption of its associated device to the management module.
- the management module apportions the net power limit among the devices according to their present power consumption and communicates the apportioned power limits to the local controllers.
- the local controller enforces the apportioned power limits on behalf of the MM.
- net power to the target system is maintained within the net power limit, while power to each device is individually maintained within its dynamically apportioned power limit.
- the management module determines which device(s) have excess allocated power, and the associated local controllers (at the direction of the MM) reclaim an excess portion of the allocated power before that reclaimed power is redistributed among the devices.
- power limits may be reclaimed from the device(s) having excess power margins and substantially simultaneously redistributed among the devices without substantially exceeding the net power limit at any instant.
- the net power limit may be sufficient to dynamically apportion each device a power limit in excess of its power consumption. This results in a positive “power margin” or “overhead,” which is the difference between a device's apportioned power limit and its power consumption. Because the amount of power consumed by each device is typically dynamic, the apportioned power limit for each device is also dynamic.
- One approach that may be implemented is to provide each device with at least a selected minimum power margin. Typically, the net power limit is evenly apportioned among the devices of the target system in such a way that every device has about the same power margin at any given moment.
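- The equal-margin approach described above amounts to giving every device its detected consumption plus an equal slice of the remaining headroom (a sketch; the function name is illustrative):

```python
def apportion_equal_margin(net_limit, consumption):
    """Apportion the net power limit so every device receives the same
    power margin (overhead) above its detected consumption."""
    headroom = net_limit - sum(consumption)
    margin = headroom / len(consumption)  # equal slice of the overhead
    return [p + margin for p in consumption]
```

Note that a negative headroom (consumption exceeding the net limit) yields negative margins, matching the "negative overhead" contingency discussed below.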
- the MM may respond by lowering the net power limit, to effectively impose a “negative” power margin or overhead on some or all of the devices of the target system, wherein the apportioned power limit for the devices is less than the power consumption detected prior to the imposition of the negative overhead.
- the BMC may respond to the imposition of negative overhead in such a contingency by throttling the servers and/or memory to reduce the power consumption of each device to within its apportioned power limit.
- FIG. 1 is a perspective view of a rack-based server system (“computer system”) 10 to which power may be managed according to the invention.
- the computer system 10 includes an enclosure 11 with an optional grillwork 19 .
- the enclosure 11 houses a plurality of system devices, including a plurality of servers 12 .
- Each server 12 is typically one node of the computer system 10 .
- a node is a device connected as part of a computer network.
- a node may include not only servers, but other devices of a computer system, such as a router or a memory module.
- Each server 12 may include one or more processors.
- a processor typically includes one or more microchips, such as a central processing unit (CPU), the device in a digital computer that interprets instructions and processes data contained in computer programs.
- the servers 12 may also include hard drives and memory to service one or more common or independent networks.
- the servers 12 are shown as “blade” type servers, although the invention is also useful with other types of rack-mounted server systems, as well as other types of computer systems and electronic equipment. Numerous other electronic devices are typically housed within the enclosure 11 , such as a power management module 15 , a power supply module 16 , at least one blower 17 , and a switch module 18 .
- the multiple servers 12 may share the power management module 15 , power supply module 16 , blower 17 , switch module 18 , and other support modules. Connectors couple the servers 12 with the support modules to reduce wiring requirements and facilitate installation and removal of the servers 12 . For instance, each server 12 may couple with a gigabit Ethernet network via the switch module 18 . The enclosure 11 may couple the servers 12 to the Ethernet network without connecting individual cables directly to each server. Multiple rack server systems like the computer system 10 are often grouped together in a data center.
- each server 12 consumes power and produces heat, which may be a function of numerous factors, such as the amount of load placed on its processor(s) (“processor load”).
- Processor load generally relates to computational throughput, and is typically tied to factors such as processor speed, clock speed, bus speed, the number of individual processors recruited for performing a task, and so forth.
- processor performance metrics such as MIPS (“million instructions per second”) or teraflops may be used to describe processor load.
- the amount of processor load may also be characterized in terms of a processor's maximum processing capacity, such as “percentage of full processor utilization.” The percent utilization of a group of processors may be expressed in terms of the combined processing capacity of the multiple processors.
- a hypothetical three-processor server may have a first processor operating at 33%, a second processor operating at 50%, and a third processor operating at 67%, with an overall/average processor utilization for the server of 50%.
- the load on processors is typically dynamic, so the percent utilization, itself, may be expressed instantaneously or as an average utilization over time.
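- The three-processor example above can be checked with a simple average (a sketch; the patent does not prescribe how per-processor utilizations are aggregated):

```python
def average_utilization(per_processor_pct):
    """Overall percent utilization of a server, expressed as the
    average of its processors' individual utilizations."""
    return sum(per_processor_pct) / len(per_processor_pct)
```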
- Techniques for reducing power consumption include selectively “throttling” the processor(s), placing subsystems into power-saving modes of operation, or powering off unused circuitry.
- Other examples of reducing processor load are reducing a clock frequency or operating voltage of one or more of the CPUs, or introducing wait or hold states into the activity of the CPUs.
- both net processor load and individual processor load may be controlled.
- power consumption is not a well-defined function of processor load. There are many cases where power consumption may be completely different when processor load appears to be 100%, for example. This is because of the behaviors of the underlying microarchitectures, transistor variability on a per-chip basis, and many other complex factors that affect power consumption.
- FIG. 2 is a schematic diagram of a representative embodiment of a power-managed target system 30 according to the invention, in the context of a multi-server computer system.
- the target system 30 includes a number “N” of servers 32 .
- Each server 32 includes one or more processors or CPUs 31 and memory 33 .
- the memory 33 may be, for example, four-slot-per-channel 533 MHz DDR2.
- a power supply 36 supplies power to the target system 30 and is shared among the servers 32 .
- a suitable power supply is not limited to a single, unitary power module.
- the power supply 36 may comprise multiple power modules, which, collectively, supply all the power needed by the target system.
- Each server 32 also includes an associated local controller 34 for monitoring and controlling power to the server 32 .
- the local controller typically includes a precision measurement and feedback control system that may be implemented, for example, using a hard, real-time function running on a BMC.
- Each local controller 34 may control power to its associated server 32 .
- the local controller 34 may dynamically throttle or adjust its processor(s) 31 and/or its memory 33 .
- the local controllers 34 , by virtue of the BMC, are capable of adjusting power on a millisecond time scale, as a hard, real-time proportional control system.
- a power management module 38 is provided for apportioning a net power limit (P NET ) 37 among the servers 32 .
- the apportionment of power is illustrated in the figure by a representative, dynamic power distribution 39 , wherein each server 32 is allocated an individual power limit labeled in the figure as P 1 through P N .
- the power management module 38 works in tandem with the local controllers 34 to control the distribution of power from the shared power supply 36 to the servers 32 according to their needs, as may be determined from the real-time power consumption of the servers 32 .
- Each local controller 34 communicates information regarding the power consumption of its associated device 32 to the management module 38 .
- the management module 38 apportions the net power limit among the servers 32 considering their power consumption and communicates the apportioned power limits to the local controllers 34 .
- the local controllers 34 enforce the apportioned power limits for each of their associated servers 32 on behalf of the power management module 38 .
- the management module 38 determines which server(s) 32 have excess allocated power, and directs the associated local controllers 34 to reclaim an excess portion of the allocated power before the management module redistributes it among the devices.
- net power to the target system 30 is maintained within the net power limit 37
- power to each server 32 is individually maintained within its apportioned power limit P N .
- the power management module 38 working in tandem with the local controllers 34 , efficiently budgets power within the net power limit 37 . Rather than inefficiently and arbitrarily providing equal power limits to each server 32 , power is dynamically apportioned to the servers 32 according to their real-time power consumption. Thus, for example, available power may be re-allocated from lesser-demanding servers to higher-demanding servers, while maintaining net power consumption of the target system 30 within the net power limit 37 .
- the power management module 38 dynamically apportions power to the servers 32 so that power caps imposed by the local controllers 34 on their associated servers 32 are assured on a millisecond timescale, to prevent overcurrent trips on power supplies that would otherwise bring down the entire group of servers 32 .
- FIG. 3 is a bar graph 40 graphically illustrating a simplified, hypothetical distribution of power to the target system 30 of FIG. 2 at an instant in time.
- Each server 32 has a maximum power capacity (P MAX ), which may vary from server to server.
- a net power limit is assumed to be evenly distributed among the servers 32 at the instant in time, providing each server 32 with substantially the same power limit P L (P L &lt;P MAX ). This equal allocation of available power is illustrated graphically by vertical bars of equal height.
- Each local controller 34 maintains power consumption P i (the shaded portion of the six vertical bars) of its associated server 32 within its individual power limit P L , such that P i ≤P L .
- the power consumption may be monitored in terms of instantaneous value of P i , a time-averaged value of P i over a prescribed time interval, and/or a peak value of P i .
- Time-averaged values of P i may be computed for time intervals of between about 1 millisecond and 2 seconds.
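- The three monitored quantities (instantaneous, time-averaged, and peak P i ) can be tracked with a sliding window of samples (a sketch assuming a fixed sample period; the class and attribute names are illustrative):

```python
from collections import deque

class PowerMonitor:
    """Tracks instantaneous, time-averaged, and peak power consumption
    over a sliding window of fixed-period samples."""
    def __init__(self, window_samples):
        self.samples = deque(maxlen=window_samples)  # oldest samples drop off
        self.peak = 0.0

    def record(self, watts):
        self.samples.append(watts)
        self.peak = max(self.peak, watts)

    @property
    def instantaneous(self):
        return self.samples[-1]

    @property
    def average(self):
        return sum(self.samples) / len(self.samples)
```

With a 1 ms sample period, a window of 1 to 2000 samples would cover the roughly 1 millisecond to 2 second averaging intervals mentioned above.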
- the instantaneous distribution of power described in FIG. 3 is not ideal, and in most cases can be avoided by implementing the invention.
- All of the servers in the loading of FIG. 3 have the same power limit P L , despite the fact that each server is consuming a different amount of power at the instant in time.
- the server represented by vertical bar 46 is consuming comparatively little power (P i )
- the server represented by vertical bar 42 is consuming a large amount of power in comparison, but they each have the same P L .
- the invention may rectify this inefficient allocation of the net power limit by dynamically apportioning the net power limit P NET among the servers.
- the method may be used, for example, to redistribute power limits among the servers according to their detected power consumption.
- the servers may be continuously provided with substantially equal overheads, for example, without exceeding the net power limit.
- power would first be reclaimed from the device(s) that have excess allocated power, before the net power limit is redistributed among the devices. This may more reliably avoid briefly or instantaneously exceeding the net power limit during the step of redistributing the net power limit. In some systems, however, power may be reclaimed from the device(s) having excess power and substantially simultaneously redistributed among the devices with sufficient reliability to not exceed the net power limit at any instant.
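- The reclaim-before-redistribute ordering can be sketched as a two-phase pass in which limits are only lowered in phase one and only raised (from the reclaimed pool) in phase two, so the sum of limits never exceeds its starting value at any instant (the function name and fixed keep-margin are illustrative assumptions):

```python
def redistribute(limits, consumption, keep_margin):
    """Two-phase reapportionment: phase one reclaims any allocation above
    consumption + keep_margin; phase two grants the reclaimed power to
    devices whose margin is below keep_margin. Limits never rise before
    an equal amount has been reclaimed elsewhere."""
    # Phase 1: shrink over-provisioned limits toward consumption + keep_margin.
    reclaimed = 0.0
    new_limits = []
    for lim, p in zip(limits, consumption):
        target = p + keep_margin
        if lim > target:
            reclaimed += lim - target
            lim = target
        new_limits.append(lim)
    # Phase 2: hand the reclaimed power to margin-starved devices.
    for i, (lim, p) in enumerate(zip(new_limits, consumption)):
        shortfall = (p + keep_margin) - lim
        if shortfall > 0:
            grant = min(shortfall, reclaimed)
            new_limits[i] = lim + grant
            reclaimed -= grant
    return new_limits
```

Any power still unallocated after phase two remains in reserve; a real controller might fold it back into later apportionments.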
- FIG. 4 is a bar graph illustrating a more suitable apportionment of available power in the six-server target system for the instantaneous loading in FIG. 3 .
- the net power limit P NET and the individual power consumption P i of the servers 32 is the same as in FIG. 3 .
- the net power limit P NET has been apportioned among the servers according to the invention, to provide each server with substantially the same overhead (“power margin”) 50 .
- the power limit P L for the server 46 has been reduced, while the power limit P L for the server 42 has been increased, giving the servers 42 , 46 substantially equal overheads 50 .
- the power consumption P i for each server is typically dynamic, changing over time. Therefore, the net power limit may be dynamically apportioned among the servers to account for the dynamic power consumption P i .
- the increased power limit P L for server 42 allows the server to handle greater processing loads before the local controller would need to take action toward limiting the load.
- a number of “trigger conditions” may optionally be selected to trigger an apportionment of power limits in a target system. Still referring to FIG. 4 , one optional trigger condition may be when the power margin for one or more server is less than a selected minimum power margin 51 . A system may retain the apportionment of power shown in FIG. 4 until the power consumption P i of one or more of the servers increases to a level at which the power margin 50 is less than the selected minimum power margin 51 . This example of a trigger condition is dependent on power consumption P i .
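- The consumption-dependent trigger condition above reduces to a simple predicate over the current limits and power readings (a sketch; the names are illustrative):

```python
def needs_reapportion(limits, consumption, min_margin):
    """Trigger condition: reapportion when any device's power margin
    (apportioned limit minus detected consumption) falls below the
    selected minimum margin."""
    return any(lim - p < min_margin for lim, p in zip(limits, consumption))
```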
- Power limits may alternatively be regularly apportioned at selected time intervals.
- the time interval may be a precise function of the power distribution hierarchy in a target system and the power conversion devices at each level of that hierarchy.
- the response time to stay within a power limit is measured in intervals ranging from hundreds of milliseconds up to about 2 seconds, depending on the rating of the fuse.
- a line cord feeds bulk power supplies for servers.
- the bulk power supply has an overcurrent level that is typically specified on the order of a few 10s of milliseconds.
- a BC-1 power supply may shut down after, e.g., 20 ms of overcurrent.
- the voltage regulator modules (VRM) which are powered by the bulk power supply, can enter overcurrent scenarios on the order of single-millisecond time scales.
- FIG. 5 is a flowchart outlining a method of managing power in a computer system according to the invention.
- a target system is identified.
- One example of a target system is a rack-based server system having multiple server blades.
- Another example of a target system is an entire data center, wherein each “device” to be managed may include an entire rack of server blades.
- Other examples of power-managed target systems will be apparent to one skilled in the art in view of this disclosure.
- various system parameters may be determined in step 102 .
- relevant system parameters include the power rating of a shared power supply used to power the devices, the maximum power capacity of each device, the maximum safe operating temperature of the target system or of the devices individually, limitations on the cost of operating the target system, and sound level restrictions imposed by a standards body.
- a net power limit provided for the target system in step 104 may be selected by the power management module or by a user.
- the net power limit may be determined, in part, according to the system parameters identified in step 102 .
- the net power limit may be selected to limit the operating temperature, sound level, and cost of operating the target system or its devices.
- the net power may be limited by the maximum available power of the power supplies used to power the target system.
- the power consumption of the devices in the target system is detected and monitored in step 106 .
- conditional step 108 determines whether the net power limit is ample to provide a desired overhead to all of the devices based on their power consumption detected in step 106 . If sufficient power is not available to provide the desired overhead, then evasive action may be taken in step 110 .
- Evasive action broadly encompasses any of a number of actions that may be used to avoid problems such as system or component failure, loss of data, inadvertent halting or improper shutting down of devices, and so forth. The evasive action will typically encompass temporarily reducing the net power limit and apportioning the reduced net power limit among the devices accordingly.
- evasive action may optionally include properly shutting down the target system or a device or subsystem thereof.
- the system administrator may also be alerted of a potential fault so that corrective action may be taken.
- the target system may be monitored for a “trigger condition” in step 112 for triggering apportionment of the net power limit in the target system in step 114 .
- the trigger condition is the passage of a selected time interval.
- the net power limit may be dynamically apportioned at regular intervals, to ensure continued operation of the devices within each of their apportioned power limits.
- a time interval may be between as short as a single millisecond and as long as about two seconds.
- Alternative trigger conditions may be selected for a system according to the desired power margins on one or more of the devices.
- the management module determines which device(s) have excess allocated power, and the associated local controllers (at the direction of the MM) reclaim an excess portion of the allocated power before that power is redistributed among the devices. In other embodiments, however, power may be reclaimed from the device(s) having excess power and substantially simultaneously redistributed among the devices with a sufficient degree of reliability not to exceed the net power limit at any instant.
- Conditional step 116 takes into account the power consumption of the devices and the apportionment of power, to determine when the desired power limit apportioned to any of the devices would exceed the maximum operating capacity. If the desired apportionment does exceed the physical parameters of any of the devices, then evasive action may be taken as broadly indicated in step 110 . As in the case of insufficient overhead (conditional step 108 ), the evasive action taken is typically to lower the net power limit generally and/or individually reduce the overhead on each device. This may be a short-term response to the situation, followed by shutting down one or more of the devices in a manner that does not cause a running application to fail.
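- One pass through the FIG. 5 flow might look like the following sketch (step numbers refer to the flowchart; returning None to signal that evasive action is needed is an assumption for illustration):

```python
def manage_power_step(net_limit, consumption, max_caps, desired_margin):
    """One control pass: verify the net limit covers consumption plus the
    desired per-device margin (step 108); if not, signal evasive action
    (step 110); otherwise apportion equal margins (step 114), clamping
    each limit to the device's maximum operating capacity (step 116)."""
    n = len(consumption)
    headroom = net_limit - sum(consumption)
    if headroom < desired_margin * n:
        return None  # insufficient overhead -> evasive action (step 110)
    margin = headroom / n
    # Clamp so no device is apportioned more than it can physically use.
    return [min(p + margin, cap) for p, cap in zip(consumption, max_caps)]
```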
- the invention may take the form of an embodiment containing hardware and/or software elements.
- Non-limiting examples of software include firmware, resident software, and microcode.
- the invention can take the form of a computer program product accessible from a computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device.
- the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
- Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
- Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and DVD.
- a data processing system suitable for storing and/or executing program code typically includes at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
- I/O devices such as keyboards, displays, or pointing devices can be coupled to the system, either directly or through intervening I/O controllers.
- Network adapters may also be used to allow the data processing system to couple to other data processing systems or remote printers or storage devices, such as through intervening private or public networks. Modems, cable modems, Ethernet cards, and wireless network adapters are examples of network adapters.
- FIG. 6 is a schematic diagram of a computer system generally indicated at 220 that may be configured for managing power in a target system of devices according to the invention.
- the computer system 220 may be a general-purpose computing device in the form of a conventional computer system 220 .
- the computer system 220 may, itself, include the target system for which power is to be managed. Alternatively, the computer system 220 may be external to the target system.
- computer system 220 includes a processing unit 221 , a system memory 222 , and a system bus 223 that couples various system devices, including the system memory 222 to processing unit 221 .
- System bus 223 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- the system memory includes a read only memory (ROM) 224 and random access memory (RAM) 225 .
- a basic input/output system (BIOS) 226 is stored in ROM 224 , containing the basic routines that help to transfer information between elements within computer system 220 , such as during start-up.
- Computer system 220 further includes a hard disk drive 235 for reading from and writing to a hard disk 227 , a magnetic disk drive 228 for reading from or writing to a removable magnetic disk 229 , and an optical disk drive 230 for reading from or writing to a removable optical disk 231 such as a CD-R, CD-RW, DVD-R, or DVD-RW.
- Hard disk drive 235 , magnetic disk drive 228 , and optical disk drive 230 are connected to system bus 223 by a hard disk drive interface 232 , a magnetic disk drive interface 233 , and an optical disk drive interface 234 , respectively.
- Although the exemplary environment described herein employs hard disk 227 , removable magnetic disk 229 , and removable optical disk 231 , it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAMs, ROMs, USB drives, and the like, may also be used in the exemplary operating environment.
- the drives and their associated computer readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for computer system 220 .
- the operating system 240 and application programs 236 may be stored in the RAM 225 and/or hard disk 227 of the computer system 220 .
- a user may enter commands and information into computer system 220 through input devices, such as a keyboard 255 and a mouse 242 .
- Other input devices may include a microphone, joystick, game pad, touch pad, satellite dish, scanner, or the like.
- these and other input devices may be connected to processing unit 221 through interfaces such as a serial port interface, a parallel port, a game port, or the like.
- a display device 247 may also be connected to system bus 223 via an interface, such as a video adapter 248 .
- personal computers typically include other peripheral output devices (not shown), such as speakers and printers.
- the computer system 220 may operate in a networked environment using logical connections to one or more remote computers 249 .
- each of the one or more remote computers 249 may be another personal computer, a server, a client, a router, a network PC, a peer device, a mainframe, a personal digital assistant, an internet-connected mobile telephone or other common network node.
- a remote computer 249 typically includes many or all of the elements described above relative to the computer system 220 , although only a memory storage device 250 is illustrated in FIG. 6 .
- the logical connections depicted in the figure include a local area network (LAN) 251 and a wide area network (WAN) 252 .
- When used in a LAN networking environment, the computer system 220 is often connected to the local area network 251 through a network interface or adapter 253 .
- When used in a WAN networking environment, the computer system 220 typically includes a modem 254 or other means for establishing high-speed communications over WAN 252 , such as the internet. Modem 254 , which may be internal or external, is connected to system bus 223 via USB interface 246 .
- program modules depicted relative to computer system 220 may be stored in the remote memory storage device 250 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- Program modules may be stored on hard disk 227 , optical disk 231 , ROM 224 , RAM 225 , or even magnetic disk 229 .
- the program modules may include portions of an operating system 240 , application programs 236 , or the like.
- a system parameter database 238 may be included, which may contain parameters of the computer system 220 and its many nodes and other devices, such as the devices of the target system, along with their maximum operating capacities, maximum operating temperatures, and so forth that may be relevant to the management of power in the target system.
- a user preferences database 239 may also be included, which may contain parameters and procedures for how to apportion power among various devices of the target system, including any trigger conditions that may be used to initiate re-apportionment of power.
- the user preferences database 239 may also include, for example, a user preference designating whether power is to be apportioned evenly among the devices.
- Application program 236 may be informed by or otherwise associated with system parameter database 238 and/or user preference database 239 .
- the application program 236 generally comprises computer-executable instructions for managing power in the target system according to the invention.
Description
- 1. Field of the Invention
- The present invention relates to power management in a computer system having multiple devices, such as in a rack-based server system or data center.
- 2. Description of the Related Art
- Multiple servers and other computer hardware are often consolidated into a centralized data center. Servers in a data center may be mounted in a rack to conserve space and place the servers and infrastructure within easy reach of an administrator. The IBM eServer BLADECENTER is one example of a compact server arrangement (IBM and BLADECENTER are registered trademarks of International Business Machines Corporation, Armonk, N.Y.).
- When multiple servers and other computing hardware are consolidated, power to the servers must be carefully monitored and controlled. Power consumption affects many aspects of operating a data center, such as the costs of operating the servers, the heat generated by the servers, and the performance and efficiency of the system. The individual servers and the system as a whole are limited by design parameters such as maximum power consumption, maximum operating temperature, processing efficiency, and so forth. Thus, it is important to control power to the system in consideration of these parameters.
- Existing processing systems may be powered by a common power supply or power distribution unit (PDU). Some of the systems include a circuit, such as a Baseboard Management Controller (BMC), that a service processor uses to monitor real-time power consumption by a server. Using this feedback, the service processor can “throttle” the processors and/or memory on the server to maintain the power consumption below a set point or “power ceiling” set by an administrator and monitored by the chassis management module. U.S. Pat. No. 7,155,623 to IBM discloses a “Method and System for Power Management Including Local Bounding of Device Group Power Consumption.” U.S. Patent Application Publication No. US 2006/0156042 to IBM discloses a “Method, System, and Calibration Technique for Power Measurement and Management Over Multiple Time Frames.”
- Improved ways of managing power are needed to accommodate the increasing demands placed on server systems. It would be desirable to improve the power handling capabilities of server systems, so that increasingly powerful and dense systems could continue to be reliably operated within the constraints of available power. Furthermore, it would be desirable to operate server systems in a manner that does not unduly restrict operations within the capacity of the system.
- In a first embodiment, a method of managing power in a processing system is provided. A net power limit is provided to a plurality of devices within the processing system. Power consumption of each device is detected. The net power limit is dynamically apportioned among the plurality of devices according to each device's detected power consumption. Each apportioned power limit is communicated to an associated one of a plurality of local controllers. Each local controller is coupled to an associated one of the plurality of devices. Each local controller is used to limit the amount of power to the associated device within the apportioned power limit of that local controller.
- In a second embodiment, a computer program product is provided, comprising a computer usable medium including computer usable program code for managing power in a computer system. The computer program product includes computer usable program code for providing a net power limit to a plurality of devices within the processing system, for detecting power consumption for each of the plurality of devices, for dynamically apportioning the net power limit among the plurality of devices according to their detected power consumption, for communicating each apportioned power limit to an associated one of a plurality of local controllers each coupled to an associated one of the plurality of devices, and for powering the associated device within the apportioned power limit of that local controller.
- In a third embodiment, a power-controlled processing system is provided, including a plurality of electronic devices. A shared power supply is coupled to the devices for supplying power to the devices. Each of a plurality of local controllers is coupled to an associated one of the electronic devices for detecting power consumption of the associated electronic device, outputting power consumption signals representative of the detected power consumption, and selectively controlling power to the associated device within an apportioned power limit. A power management module is in electronic communication with the plurality of local controllers for receiving the power consumption signals, apportioning a net power limit according to the detected power consumption, and communicating each apportioned power limit to the local controller of the associated electronic device.
- Other embodiments, aspects, and advantages of the invention will be apparent from the following description and the appended claims.
- FIG. 1 is a perspective view of a rack-based server system in which power may be managed according to the invention.
- FIG. 2 is a schematic diagram of a representative embodiment of a power-managed target system according to the invention, in the context of a multi-server computer system.
- FIG. 3 is a bar graph illustrating a non-ideal distribution of power to the target system of FIG. 2 at an instant in time.
- FIG. 4 is a bar graph illustrating a more suitable apportionment of available power in the six-server target system for the instantaneous loading in FIG. 3.
- FIG. 5 is a flowchart outlining a method of managing power in a computer system according to the invention.
- FIG. 6 is a schematic diagram of a computer system that may be configured for managing power in a target system of devices according to the invention.
- The present invention provides improved systems and methods for managing power in a processing system having multiple components or devices, such as in a multi-server computer system. Embodiments of the invention are particularly suitable for management of power in rack-based computer systems, such as blade server systems, and in data centers. The invention includes methods for budgeting the use of power from a limited power supply by detecting the power consumption of multiple devices (e.g., servers) in a processing system, and dynamically apportioning a net power limit among the devices according to their detected power consumption. This provides each device with power according to the needs of that device at any given moment, while maintaining net power consumption within a net power limit. Benefits of managing power according to the invention include increased efficiency, along with an associated reduction in operating costs, heat production, and noise.
- According to one embodiment, a method of managing power in a processing system is provided. A “target system” is selected for which power is to be managed. The target system may be, for example, an entire datacenter, one or more rack-based server systems in a datacenter, or a subsystem thereof. The target system includes a plurality of “devices” powered by a shared power supply. For example, in a rack-based server system having a plurality of servers, blowers, switches, power supplies, and other support modules, the selected target system may be the plurality of servers. A global (“net”) power limit is selected for the target system. The net power limit may be selected by a system designer, a system operator (user), or by hardware and/or software. The net power limit may be imposed, for example, to limit operating costs, heat, or sound levels generated by the target system. The net power limit is apportioned among the devices of the target system according to their respective power consumption.
- According to another embodiment, a power-regulated processing system is provided. A power management module (MM) apportions a net power limit among the devices of a target system. Each device may include an associated “local controller” for monitoring and controlling power to the device. The power management module and the local controllers may work in tandem to control the distribution of power to the servers according to the needs of the servers, as may be determined according to the real-time power consumption of the servers. The local controller typically includes a precision measurement and feedback control system that may be implemented, for example, using a hard, real-time function running on the BMC. Each local controller communicates information regarding the power consumption of its associated device to the management module. The management module apportions the net power limit among the devices according to their present power consumption and communicates the apportioned power limits to the local controllers. The local controller enforces the apportioned power limits on behalf of the MM. Thus, net power to the target system is maintained within the net power limit, while power to each device is individually maintained within its dynamically apportioned power limit. Typically, the management module will determine which device(s) have excess allocated power, and the associated local controllers (at the direction of the MM) would reclaim an excess portion of the allocated power before redistributing that reclaimed power among the devices. In other embodiments, however, power limits may be reclaimed from the device(s) having excess power margins and substantially simultaneously redistributed among the devices without substantially exceeding the net power limit at any instant.
- Under usual operating conditions, the net power limit may be sufficient to dynamically apportion each device a power limit in excess of its power consumption. This results in a positive “power margin” or “overhead,” which is the difference between a device's apportioned power limit and its power consumption. Because the amount of power consumed by each device is typically dynamic, the apportioned power limit for each device is also dynamic. One approach that may be implemented is to provide each device with at least a selected minimum power margin. Typically, the net power limit is evenly apportioned among the devices of the target system in such a way that every device has about the same power margin at any given moment. If the circumstance arises that the net power consumption of the target system exceeds the net power limit, the MM may respond by lowering the net power limit, to effectively impose a “negative” power margin or overhead on some or all of the devices of the target system, wherein the apportioned power limit for the devices is less than the power consumption detected prior to the imposition of the negative overhead. The BMC may respond to the imposition of negative overhead in such a contingency by throttling the processors and/or memory to reduce the power consumption of each device to within its apportioned power limit.
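The even-margin apportionment described above can be sketched in a few lines. This is an illustrative reconstruction of the stated policy (equal margins for all devices), not the patented implementation; the function name and wattage values are assumptions.

```python
def apportion_equal_margin(net_limit, consumption):
    """Apportion a net power limit so every device receives the same
    power margin (overhead) above its detected consumption.

    Each device's limit is its consumption plus an equal share of the
    remaining headroom. A negative share models the "negative overhead"
    contingency, where devices must throttle below their current draw.
    """
    margin = (net_limit - sum(consumption)) / len(consumption)
    return [p + margin for p in consumption]

# Six servers drawing 1000 W in total under a 1200 W net limit: each
# receives its own consumption plus the same ~33.3 W margin, and the
# apportioned limits sum exactly to the net limit.
limits = apportion_equal_margin(1200, [150, 220, 90, 180, 200, 160])
```

Because the margin is recomputed from the detected consumption on every apportionment cycle, the individual limits track each device's load while the sum never drifts from the net power limit.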
- The invention may be applied to a rack-based server system environment.
- FIG. 1 is a perspective view of a rack-based server system (“computer system”) 10 in which power may be managed according to the invention. The computer system 10 includes an enclosure 11 with an optional grillwork 19. The enclosure 11 houses a plurality of system devices, including a plurality of servers 12. Each server 12 is typically one node of the computer system 10. Generally, a node is a device connected as part of a computer network. A node may include not only servers, but other devices of a computer system, such as a router or a memory module.
- Each server 12 may include one or more processors. A processor typically includes one or more microchips, such as a “CPU,” which is a device in a digital computer that interprets instructions and processes data contained in computer programs. The servers 12 may also include hard drives and memory to service one or more common or independent networks. The servers 12 are shown as “blade” type servers, although the invention is also useful with other types of rack-mounted server systems, as well as with other types of computer systems and electronic equipment. Numerous other electronic devices are typically housed within the enclosure 11, such as a power management module 15, a power supply module 16, at least one blower 17, and a switch module 18. The multiple servers 12 may share the power management module 15, power supply module 16, blower 17, switch module 18, and other support modules. Connectors couple the servers 12 with the support modules to reduce wiring requirements and facilitate installation and removal of the servers 12. For instance, each server 12 may couple with a gigabit Ethernet network via the switch module 18. The enclosure 11 may couple the servers 12 to the Ethernet network without connecting individual cables directly to each server. Multiple rack server systems like the computer system 10 are often grouped together in a data center.
- The servers 12 and other devices generate heat within the computer system 10. In particular, each server 12 consumes power and produces heat, which may be a function of numerous factors, such as the amount of load placed on its processor(s) (“processor load”). Processor load generally relates to computational throughput, and is typically tied to factors such as processor speed, clock speed, bus speed, the number of individual processors recruited for performing a task, and so forth. Thus, processor performance metrics such as MIPS (“million instructions per second”) or teraflops may be used to describe processor load. The amount of processor load may also be characterized in terms of a processor's maximum processing capacity, such as “percentage of full processor utilization.” The percent utilization of a group of processors may be expressed in terms of the combined processing capacity of the multiple processors. For example, at an instant in time, a hypothetical three-processor server may have a first processor operating at 33%, a second processor operating at 50%, and a third processor operating at 67%, for an overall, or average, processor utilization for the server of 50%. The load on processors is typically dynamic, so the percent utilization, itself, may be expressed instantaneously or as an average utilization over time.
- Techniques for reducing power consumption include selectively “throttling” the processor(s), placing subsystems into power-saving modes of operation, or powering off unused circuitry. Other examples of reducing processor load are reducing a clock frequency or operating voltage of one or more of the CPUs, or introducing wait or hold states into the activity of the CPUs. Thus, both net processor load and individual processor load may be controlled. Although there may be some correlation between processor load and power consumption in a given system, power consumption is not a well-defined function of processor load. There are many cases where power consumption may be completely different even when processor load appears to be 100%, because of the behaviors of the underlying microarchitectures, transistor variability on a per-chip basis, and many other complex factors that affect power consumption.
- FIG. 2 is a schematic diagram of a representative embodiment of a power-managed target system 30 according to the invention, in the context of a multi-server computer system. The target system 30 includes a number “N” of servers 32. Each server 32 includes one or more processors or CPUs 31 and memory 33. The memory 33 may be, for example, a four slot-per-channel 533 MHz DDR2. A power supply 36 supplies power to the target system 30 and is shared among the servers 32. A suitable power supply is not limited to a single, unitary power module. For example, the power supply 36 may comprise multiple power modules, which, collectively, supply all the power needed by the target system. Each server 32 also includes an associated local controller 34 for monitoring and controlling power to the server 32. The local controller typically includes a precision measurement and feedback control system that may be implemented, for example, using a hard, real-time function running on a BMC. Each local controller 34 may control power to its associated server 32. For example, the local controller 34 may dynamically throttle or adjust its processor(s) 31 and/or its memory 33. The local controllers 34, by virtue of the BMC, are capable of adjusting power on a millisecond time scale, as a hard, real-time proportional control system.
- A power management module 38 is provided for apportioning a net power limit (PNET) 37 among the servers 32. The apportionment of power is illustrated in the figure by a representative, dynamic power distribution 39, wherein each server 32 is allocated an individual power limit labeled in the figure as P1 through PN. The power management module 38 works in tandem with the local controllers 34 to control the distribution of power from the shared power supply 36 to the servers 32 according to their needs, as may be determined from the real-time power consumption of the servers 32. Each local controller 34 communicates information regarding the power consumption of its associated server 32 to the management module 38. The management module 38, in turn, apportions the net power limit among the servers 32 considering their power consumption and communicates the apportioned power limits to the local controllers 34. The local controllers 34 enforce the apportioned power limits for each of their associated servers 32 on behalf of the power management module 38. Typically, the management module 38 will determine which server(s) 32 have excess allocated power, and the associated local controllers 34, at the direction of the power management module 38, reclaim an excess portion of the allocated power before the management module redistributes it among the devices. Thus, net power to the target system 30 is maintained within the net power limit 37, while power to each server 32 is individually maintained within its apportioned power limit PN.
- The power management module 38, working in tandem with the local controllers 34, efficiently budgets power within the net power limit 37. Rather than inefficiently and arbitrarily providing equal power limits to each server 32, power is dynamically apportioned to the servers 32 according to their real-time power consumption. Thus, for example, available power may be re-allocated from lesser-demanding servers to higher-demanding servers, while maintaining net power consumption of the target system 30 within the net power limit 37. The power management module 38 dynamically apportions power to the servers 32 so that power caps imposed by the local controllers 34 on their associated servers 32 are assured on a millisecond timescale, to prevent overcurrent trips on power supplies that would otherwise bring down the entire group of servers 32.
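The local controller's millisecond-scale enforcement can be pictured as one step of a proportional control loop. The gain value and throttle model below are illustrative assumptions; an actual BMC implements this as hard, real-time firmware driving hardware throttling controls.

```python
def throttle_step(throttle, measured_power, power_limit, gain=0.05):
    """One iteration of a proportional controller for a local controller.

    `throttle` is the fraction of full performance currently allowed
    (0.0 to 1.0). Overshoot above the apportioned limit pulls the
    throttle down in proportion to the relative error; headroom lets it
    relax back toward full performance. The result is clamped to the
    valid range. The gain of 0.05 is an assumed tuning constant.
    """
    error = (measured_power - power_limit) / power_limit  # relative error
    throttle -= gain * error
    return max(0.0, min(1.0, throttle))

# Over the limit (260 W against a 200 W cap): throttle is cut back.
t_over = throttle_step(1.0, measured_power=260, power_limit=200)
# Under the limit (150 W against 200 W): throttle relaxes upward.
t_under = throttle_step(0.9, measured_power=150, power_limit=200)
```

Run at a millisecond cadence, small corrections of this kind keep each server's draw pinned near its apportioned limit without the oscillation a bang-bang on/off control would produce.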
- FIG. 3 is a bar graph 40 graphically illustrating a simplified, hypothetical distribution of power to the target system 30 of FIG. 2 at an instant in time. In this example, the target system 30 includes six of the servers 32 (N=6), whose loading parameters at the instant in time are represented by six vertical bars. Each server 32 has a maximum power capacity (PMAX), which may vary from server to server. For the purpose of this example, a net power limit is assumed to be evenly distributed among the servers 32 at the instant in time, providing each server 32 with substantially the same power limit PL (PL<PMAX). This equal allocation of available power is illustrated graphically by vertical bars of equal height. Each local controller 34 maintains power consumption Pi (the shaded portion of the six vertical bars) of its associated server 32 within its individual power limit PL, such that Pi<PL. The power consumption may be monitored in terms of an instantaneous value of Pi, a time-averaged value of Pi over a prescribed time interval, and/or a peak value of Pi. Time-averaged values of Pi may be computed for time intervals of between about 1 millisecond and 2 seconds.
- The instantaneous distribution of power described in FIG. 3 is not ideal, and in most cases can be avoided by implementing the invention. All of the servers in the loading of FIG. 3 have the same power limit PL, despite the fact that each server is consuming a different amount of power at the instant in time. For example, the server represented by vertical bar 46 is consuming comparatively little power (Pi), while the server represented by vertical bar 42 is consuming a large amount of power in comparison, yet each has the same PL. The invention may rectify this inefficient allocation of the net power limit by dynamically apportioning the net power limit PNET among the servers. The method may be used, for example, to redistribute power limits among the servers according to their detected power consumption. By dynamically apportioning the net power limit on a sufficiently small time scale, such as on the order of milliseconds, the servers may be continuously provided with substantially equal overheads, for example, without exceeding the net power limit. Typically, power would first be reclaimed from the device(s) that have excess allocated power, before the net power limit is redistributed among the devices. This more reliably avoids briefly or instantaneously exceeding the net power limit during the step of redistributing the net power limit. In some systems, however, power may be reclaimed from the device(s) having excess power and substantially simultaneously redistributed among the devices with sufficient reliability to not exceed the net power limit at any instant.
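The reclaim-first sequencing can be sketched as a two-phase update. The function below is an illustrative sketch of the ordering argument, not the patent's implementation: lowering limits before raising any guarantees the instantaneous sum of enforced limits never exceeds the larger of the old and new net allocations.

```python
def reapportion_reclaim_first(old_limits, new_limits):
    """Order per-device limit updates so that reductions (reclaiming
    excess allocated power) are applied before any increases, keeping
    the instantaneous sum of enforced limits from overshooting."""
    lower = [(i, new) for i, (old, new) in
             enumerate(zip(old_limits, new_limits)) if new < old]
    higher = [(i, new) for i, (old, new) in
              enumerate(zip(old_limits, new_limits)) if new >= old]
    return lower + higher  # phase 1: reclaim, phase 2: redistribute

# Moving from equal limits to consumption-aware limits: the reduction
# (device 0 down to 150) is issued first, so the total allocation dips
# to 550 before the increases bring it back up to 600.
updates = reapportion_reclaim_first([200, 200, 200], [150, 230, 220])
```

Applying the same updates in the opposite order would briefly allocate 630 against a 600 net limit, which is exactly the transient overshoot the reclaim-first step avoids.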
- FIG. 4 is a bar graph illustrating a more suitable apportionment of available power in the six-server target system for the instantaneous loading in FIG. 3. The net power limit PNET and the individual power consumption Pi of the servers 32 are the same as in FIG. 3. However, the net power limit PNET has been apportioned among the servers according to the invention, to provide each server with substantially the same overhead (“power margin”) 50. Thus, the power limit PL for the server 46 has been reduced, while the power limit PL for the server 42 has been increased, giving the servers equal overheads 50. The power consumption Pi for each server is typically dynamic, changing over time. Therefore, the net power limit may be dynamically apportioned among the servers to account for the dynamic power consumption Pi. The increased power limit PL for server 42 allows the server to handle greater processing loads before the local controller would need to take action toward limiting the load.
- A number of “trigger conditions” may optionally be selected to trigger an apportionment of power limits in a target system. Still referring to FIG. 4, one optional trigger condition may be when the power margin for one or more servers is less than a selected minimum power margin 51. A system may retain the apportionment of power shown in FIG. 4 until the power consumption Pi of one or more of the servers increases to a level at which the power margin 50 is less than the selected minimum power margin 51. This example of a trigger condition is dependent on power consumption Pi.
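A margin-based trigger condition of this kind reduces to a simple predicate over the current limits and detected consumption. The function name and the 10 W threshold below are illustrative assumptions.

```python
def needs_reapportionment(limits, consumption, min_margin):
    """Return True when any device's power margin (apportioned limit
    minus detected consumption) has fallen below the selected minimum,
    signaling that the net power limit should be re-apportioned."""
    return any(lim - p < min_margin
               for lim, p in zip(limits, consumption))

# Margins of 40 W and 5 W against a 10 W minimum: the second device
# trips the trigger; with margins of 40 W and 30 W it does not.
assert needs_reapportionment([200, 200], [160, 195], min_margin=10)
assert not needs_reapportionment([200, 200], [160, 170], min_margin=10)
```

A management module might evaluate this predicate on each monitoring cycle and fall back to a periodic timer as the alternative trigger described below.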
-
- FIG. 5 is a flowchart outlining a method of managing power in a computer system according to the invention. In step 100, a target system is identified. One example of a target system is a rack-based server system having multiple server blades. Another example of a target system is an entire data center, wherein each “device” to be managed may include an entire rack of server blades. Other examples of power-managed target systems will be apparent to one skilled in the art in view of this disclosure.
- Once the target system has been identified, various system parameters may be determined in step 102. Examples of relevant system parameters include the power rating of a shared power supply used to power the devices, the maximum power capacity of each device, the maximum safe operating temperature of the target system or of the devices individually, limitations on the cost of operating the target system, and sound level restrictions imposed by a standards body.
- A net power limit provided for the target system in step 104 may be selected by the power management module or by a user. The net power limit may be determined, in part, according to the system parameters identified in step 102. For example, the net power limit may be selected to limit the operating temperature, sound level, and cost of operating the target system or its devices. Alternatively, the net power may be limited by the maximum available power of the power supplies used to power the target system. The power consumption of the devices in the target system is detected and monitored in step 106.
- An overriding consideration when managing power in the target system is whether the net power limit is sufficient to power the target system. Therefore, conditional step 108 determines whether the net power limit is ample to provide a desired overhead to all of the devices based on their power consumption detected in step 106. If sufficient power is not available to provide the desired overhead, then evasive action may be taken in step 110. Evasive action broadly encompasses any of a number of actions that may be used to avoid problems such as system or component failure, loss of data, and inadvertent halting or improper shutting down of devices. The evasive action will typically encompass temporarily reducing the net power limit and apportioning the reduced net power limit among the devices accordingly. This may impose a negative overhead as compared with the amount of power the servers would normally consume based on their loading. However, the local controllers provided to each server will enforce the reduced overhead on the servers, ensuring that the systems all continue to operate normally, albeit at some reduced performance due to clock throttling, DVFS, or some other power saving technique used to satisfy the reduced power budget. In rare instances, evasive action may optionally include properly shutting down the target system or a device or subsystem thereof. The system administrator may also be alerted of a potential fault so that corrective action may be taken.
- Assuming the net power limit is sufficient according to conditional step 108, the target system may be monitored for a “trigger condition” in step 112 for triggering apportionment of the net power limit in the target system in step 114. Typically, the trigger condition is the passage of a selected time interval. The net power limit may be dynamically apportioned at regular intervals, to ensure continued operation of the devices within each of their apportioned power limits. Depending on the system, a time interval may be as short as a single millisecond or as long as about two seconds. Alternative trigger conditions may be selected for a system according to the desired power margins on one or more of the devices.
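The evasive action described above for step 110 — temporarily lowering the net power limit and re-apportioning it — might be sketched as follows. Apportioning the reduced limit in proportion to detected consumption is an assumed policy, and the 0.9 safety factor is a placeholder.

```python
def impose_reduced_limit(consumption, net_limit, safety_factor=0.9):
    """Evasive-action sketch: shrink the net power limit and apportion
    it in proportion to each device's detected consumption. When total
    consumption exceeds the reduced limit, every apportioned limit
    lands below the device's current draw (a negative overhead), which
    the local controllers enforce by throttling."""
    reduced = safety_factor * net_limit
    total = sum(consumption)
    return [reduced * p / total for p in consumption]

# Devices drawing 1000 W in total against a limit reduced to 900 W:
# each device's new limit is 90% of its current consumption.
limits = impose_reduced_limit([300, 300, 400], net_limit=1000)
```

Proportional scaling spreads the negative overhead evenly in relative terms, so lightly loaded and heavily loaded devices are throttled by the same fraction rather than by the same absolute wattage.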
- The limitations of the target system and its devices may affect how power is apportioned in the target system.
Conditional step 116 takes into account the power consumption of the devices and the apportionment of power to determine when the desired power limit apportioned to any of the devices would exceed the maximum operating capacity. If the desired apportionment does exceed the physical parameters of any of the devices, then evasive action may be taken as broadly indicated in step 110. As in the case of insufficient overhead (conditional step 108), the evasive action taken is typically to lower the net power limit generally and/or individually reduce the overhead on each device. This may be a short-term response to the situation, followed by shutting down one or more of the devices in a manner that does not cause a running application to fail. Fortunately, no catastrophic problems are likely to occur unless power consumption of the system has reached a “line feed limit,” which is unlikely on responsibly managed systems. For example, a serious problem could occur if a line feed had a 24 kW limit and two blade centers had their power supplies connected to the common line feed. If the power consumption of all the servers in the two blade centers exceeded the 24 kW line feed limit, the circuit breaker on that line feed would trip, and all the servers would immediately crash. - It should be recognized that the invention may take the form of an embodiment containing hardware and/or software elements. Non-limiting examples of software include firmware, resident software, and microcode. More generally, the invention can take the form of a computer program product accessible from a computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device.
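The capacity check of conditional step 116 and the line feed constraint can be sketched together. This is a minimal illustration assuming a pro-rata apportionment policy; the names (`check_and_apportion`, `max_capacity`) are hypothetical, and the patent does not prescribe this exact policy.

```python
def check_and_apportion(net_limit, consumption, max_capacity, line_feed_limit):
    """Apportion net_limit (watts) and apply the safety checks above.

    Returns the per-device limits and a flag indicating whether the
    total stays under the line feed limit (e.g. the 24 kW feed in the
    example above). If the flag is False, evasive action such as
    lowering the net power limit would be warranted.
    """
    total = sum(consumption.values())
    # Desired apportionment: proportional to each device's measured draw.
    shares = {d: net_limit * w / total for d, w in consumption.items()}
    # Conditional step 116: never apportion beyond a device's capacity.
    clamped = {d: min(s, max_capacity[d]) for d, s in shares.items()}
    # Line feed check: tripping the feed's breaker would crash every server.
    within_line_feed = sum(clamped.values()) <= line_feed_limit
    return clamped, within_line_feed
```

For instance, two blade centers drawing equally from a 24 kW feed would each be offered 12 kW, clamped to whatever each chassis can physically sustain.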
- The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and DVD.
- A data processing system suitable for storing and/or executing program code typically includes at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
- Input/output (I/O) devices such as keyboards, displays, or pointing devices can be coupled to the system, either directly or through intervening I/O controllers. Network adapters may also be used to allow the data processing system to couple to other data processing systems or remote printers or storage devices, such as through intervening private or public networks. Modems, cable modems, Ethernet cards, and wireless network adapters are examples of network adapters.
- FIG. 6 is a schematic diagram of a computer system, generally indicated at 220, that may be configured for managing power in a target system of devices according to the invention. The computer system 220 may be a general-purpose computing device in the form of a conventional computer system 220. The computer system 220 may, itself, include the target system for which power is to be managed. Alternatively, the computer system 220 may be external to the target system. Generally, computer system 220 includes a processing unit 221, a system memory 222, and a system bus 223 that couples various system devices, including the system memory 222, to processing unit 221. System bus 223 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory includes a read-only memory (ROM) 224 and random access memory (RAM) 225. A basic input/output system (BIOS) 226 is stored in ROM 224, containing the basic routines that help to transfer information between elements within computer system 220, such as during start-up. -
Computer system 220 further includes a hard disk drive 235 for reading from and writing to a hard disk 227, a magnetic disk drive 228 for reading from or writing to a removable magnetic disk 229, and an optical disk drive 230 for reading from or writing to a removable optical disk 231 such as a CD-R, CD-RW, DV-R, or DV-RW. Hard disk drive 235, magnetic disk drive 228, and optical disk drive 230 are connected to system bus 223 by a hard disk drive interface 232, a magnetic disk drive interface 233, and an optical disk drive interface 234, respectively. Although the exemplary environment described herein employs hard disk 227, removable magnetic disk 229, and removable optical disk 231, it should be appreciated by those skilled in the art that other types of computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAMs, ROMs, USB drives, and the like, may also be used in the exemplary operating environment. The drives and their associated computer-readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for computer system 220. For example, the operating system 240 and application programs 236 may be stored in the RAM 225 and/or hard disk 227 of the computer system 220. - A user may enter commands and information into
computer system 220 through input devices, such as a keyboard 255 and a mouse 242. Other input devices (not shown) may include a microphone, joystick, game pad, touch pad, satellite dish, scanner, or the like. These and other input devices are often connected to processing unit 221 through a USB (universal serial bus) 246 that is coupled to the system bus 223, but may be connected by other interfaces, such as a serial port interface, a parallel port, a game port, or the like. A display device 247 may also be connected to system bus 223 via an interface, such as a video adapter 248. In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers. - The
computer system 220 may operate in a networked environment using logical connections to one or more remote computers 249. Each of the one or more remote computers 249 may be another personal computer, a server, a client, a router, a network PC, a peer device, a mainframe, a personal digital assistant, an internet-connected mobile telephone, or other common network node. While a remote computer 249 typically includes many or all of the elements described above relative to the computer system 220, only a memory storage device 250 has been illustrated in FIG. 6. The logical connections depicted in the figure include a local area network (LAN) 251 and a wide area network (WAN) 252. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the internet. - When used in a LAN networking environment, the
computer system 220 is often connected to the local area network 251 through a network interface or adapter 253. When used in a WAN networking environment, the computer system 220 typically includes a modem 254 or other means for establishing high-speed communications over WAN 252, such as the internet. Modem 254, which may be internal or external, is connected to system bus 223 via USB interface 246. In a networked environment, program modules depicted relative to computer system 220, or portions thereof, may be stored in the remote memory storage device 250. It will be appreciated that the network connections shown are exemplary and that other means of establishing a communications link between the computers may be used. - Program modules may be stored on
hard disk 227, optical disk 231, ROM 224, RAM 225, or even magnetic disk 229. The program modules may include portions of an operating system 240, application programs 236, or the like. A system parameter database 238 may be included, which may contain parameters of the computer system 220 and its many nodes and other devices, such as the devices of the target system, along with their maximum operating capacities, maximum operating temperatures, and so forth, that may be relevant to the management of power in the target system. A user preferences database 239 may also be included, which may contain parameters and procedures for how to apportion power among various devices of the target system, including any trigger conditions that may be used to initiate re-apportionment of power. The user preferences database 239 may also include, for example, a user preference designating whether power is to be apportioned evenly among the devices. - Aspects of the present invention may be implemented in the form of an
application program 236. Application program 236 may be informed by or otherwise associated with system parameter database 238 and/or user preferences database 239. The application program 236 generally comprises computer-executable instructions for managing power in the target system according to the invention. - The terms “comprising,” “including,” and “having,” as used in the claims and specification herein, shall be considered as indicating an open group that may include other elements not specified. The terms “a,” “an,” and the singular forms of words shall be taken to include the plural form of the same words, such that the terms mean that one or more of something is provided. The term “one” or “single” may be used to indicate that one and only one of something is intended. Similarly, other specific integer values, such as “two,” may be used when a specific number of things is intended. The terms “preferably,” “preferred,” “prefer,” “optionally,” “may,” and similar terms are used to indicate that an item, condition or step being referred to is an optional (not required) feature of the invention.
- While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/681,818 US7779276B2 (en) | 2007-03-05 | 2007-03-05 | Power management in a power-constrained processing system |
PCT/EP2008/052319 WO2008107344A2 (en) | 2007-03-05 | 2008-02-26 | Power management in a power-constrained processing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/681,818 US7779276B2 (en) | 2007-03-05 | 2007-03-05 | Power management in a power-constrained processing system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20080222435A1 true US20080222435A1 (en) | 2008-09-11 |
US7779276B2 US7779276B2 (en) | 2010-08-17 |
Family
ID=39726316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/681,818 Expired - Fee Related US7779276B2 (en) | 2007-03-05 | 2007-03-05 | Power management in a power-constrained processing system |
Country Status (2)
Country | Link |
---|---|
US (1) | US7779276B2 (en) |
WO (1) | WO2008107344A2 (en) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100037070A1 (en) * | 2008-08-08 | 2010-02-11 | Dell Products L.P. | Demand based power allocation |
US20100037077A1 (en) * | 2008-08-06 | 2010-02-11 | Vivek Kashyap | Multiple-node system power utilization management |
US20100182055A1 (en) * | 2009-01-16 | 2010-07-22 | Anton Rozen | Device and method for detecting and correcting timing errors |
US20100332872A1 (en) * | 2009-06-30 | 2010-12-30 | International Business Machines Corporation | Priority-Based Power Capping in Data Processing Systems |
US8028183B2 (en) | 2008-09-18 | 2011-09-27 | International Business Machines Corporation | Power cap lower bound exploration in a server environment |
US8151122B1 (en) * | 2007-07-05 | 2012-04-03 | Hewlett-Packard Development Company, L.P. | Power budget managing method and system |
JP2012185693A (en) * | 2011-03-07 | 2012-09-27 | Nec Corp | Power consumption controller, power consumption control method and program |
US20120283892A1 (en) * | 2010-01-29 | 2012-11-08 | Daniel Humphrey | Managing Electric Energy Distribution To Multiple Loads Using Selective Capping |
US20130073096A1 (en) * | 2011-09-16 | 2013-03-21 | International Business Machines Corporation | Proactive cooling control using power consumption trend analysis |
US8429435B1 (en) * | 2008-07-25 | 2013-04-23 | Autani Corporation | Automation devices, systems, architectures, and methods for energy management and other applications |
US20130138980A1 (en) * | 2011-11-28 | 2013-05-30 | Inventec Corporation | Server rack system for managing power supply |
CN103138941A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Communication method for server rack system |
CN103138967A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system and delayed start-up method thereof |
CN103139248A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Rack system |
CN103138972A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server cabinet system |
CN103138940A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system |
CN103138970A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system |
CN103138999A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Monitor method of multiple rack systems |
US20130160003A1 (en) * | 2011-12-19 | 2013-06-20 | Vmware, Inc. | Managing resource utilization within a cluster of computing devices |
US8589556B2 (en) | 2010-11-05 | 2013-11-19 | International Business Machines Corporation | Allocation of energy budgets to individual partitions |
US20130318371A1 (en) * | 2012-05-22 | 2013-11-28 | Robert W. Hormuth | Systems and methods for dynamic power allocation in an information handling system environment |
CN103677097A (en) * | 2012-09-18 | 2014-03-26 | 英业达科技有限公司 | Server rack system and server |
US20140149761A1 (en) * | 2012-11-27 | 2014-05-29 | International Business Machines Corporation | Distributed power budgeting |
US8918657B2 (en) | 2008-09-08 | 2014-12-23 | Virginia Tech Intellectual Properties | Systems, devices, and/or methods for managing energy usage |
US8935010B1 (en) * | 2011-12-13 | 2015-01-13 | Juniper Networks, Inc. | Power distribution within high-power networking equipment |
US20150113300A1 (en) * | 2013-10-22 | 2015-04-23 | Nvidia Corporation | Battery operated computer system |
US20150241943A1 (en) * | 2014-02-25 | 2015-08-27 | International Business Machines Corporation | Distributed power management with performance and power boundaries |
US20150241945A1 (en) * | 2014-02-25 | 2015-08-27 | Dell Products L.P. | Methods and systems for multiple module power regulation in a modular chassis |
CN104969182A (en) * | 2012-12-28 | 2015-10-07 | 英特尔公司 | High dynamic range software-transparent heterogeneous computing element processors, methods, and systems |
US9355055B1 (en) * | 2012-09-07 | 2016-05-31 | Amazon Technologies, Inc. | Network and power connection management |
US9477286B2 (en) | 2010-11-05 | 2016-10-25 | International Business Machines Corporation | Energy allocation to groups of virtual machines |
US9684366B2 (en) | 2014-02-25 | 2017-06-20 | International Business Machines Corporation | Distributed power management system with plurality of power management controllers controlling zone and component power caps of respective zones by determining priority of other zones |
US20170322613A1 (en) * | 2016-05-06 | 2017-11-09 | Quanta Computer Inc. | Server rack power management |
US20170336855A1 (en) * | 2016-05-20 | 2017-11-23 | Dell Products L.P. | Systems and methods for chassis-level view of information handling system power capping |
US20170336856A1 (en) * | 2016-05-20 | 2017-11-23 | Dell Products L.P. | Systems and methods for autonomously adapting powering budgeting in a multi-information handling system passive chassis environment |
US9891700B2 (en) * | 2015-10-02 | 2018-02-13 | Infineon Technologies Austria Ag | Power management for datacenter power architectures |
US20200029284A1 (en) * | 2018-07-18 | 2020-01-23 | Dell Products L.P. | Method for control and distribution of the amount of power to be lowered or raised in a multi-load system |
US20200089298A1 (en) * | 2018-09-14 | 2020-03-19 | Quanta Computer Inc. | Method and system for dynamically allocating and optimizing power resources |
US20200142465A1 (en) * | 2018-11-02 | 2020-05-07 | Dell Products L.P. | Power management system |
WO2021074160A1 (en) * | 2019-10-17 | 2021-04-22 | Fujitsu Technology Solutions Intellectual Property Gmbh | Method for specifying a power limit of a processor |
US11157056B2 (en) * | 2019-11-01 | 2021-10-26 | Dell Products L.P. | System and method for monitoring a maximum load based on an aggregate load profile of a system |
US11181961B2 (en) * | 2020-04-07 | 2021-11-23 | Dell Products L.P. | System and method for increasing power delivery to information handling systems |
US11249533B2 (en) * | 2020-06-22 | 2022-02-15 | Dell Products L.P. | Systems and methods for enabling power budgeting in an information handling system comprising a plurality of modular information handling systems |
US11281275B2 (en) * | 2019-10-10 | 2022-03-22 | Dell Products L.P. | System and method for using input power line telemetry in an information handling system |
US11635798B2 (en) * | 2019-02-08 | 2023-04-25 | Hewlett Packard Enterprise Development Lp | Dynamic OCP adjustment |
US20230189469A1 (en) * | 2021-12-13 | 2023-06-15 | Dell Products, L.P. | Distribution of available power to devices in a group |
US20230205248A1 (en) * | 2021-12-23 | 2023-06-29 | Advanced Micro Devices, Inc. | Controlling Electrical Power Consumption for Elements in an Electronic Device based on a Platform Electrical Power Limit |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4395800B2 (en) * | 2007-09-18 | 2010-01-13 | 日本電気株式会社 | Power management system and power management method |
US8266456B2 (en) | 2007-10-15 | 2012-09-11 | Apple Inc. | Supplying remaining available current to port in excess of bus standard limit |
US8069359B2 (en) | 2007-12-28 | 2011-11-29 | Intel Corporation | System and method to establish and dynamically control energy consumption in large-scale datacenters or IT infrastructures |
US8898484B2 (en) * | 2008-10-27 | 2014-11-25 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Optimizing delivery of regulated power from a voltage regulator to an electrical component |
US8024606B2 (en) * | 2009-08-21 | 2011-09-20 | International Business Machines Corporation | Power restoration to blade servers |
GB2481422B (en) * | 2010-06-23 | 2016-09-21 | 1E Ltd | Controlling the power consumption of computers |
US8612801B2 (en) * | 2011-01-25 | 2013-12-17 | Dell Products, Lp | System and method for extending system uptime while running on backup power |
US8862924B2 (en) * | 2011-11-15 | 2014-10-14 | Advanced Micro Devices, Inc. | Processor with power control via instruction issuance |
US8984307B2 (en) * | 2012-05-21 | 2015-03-17 | Qualcomm Incorporated | System and method for dynamic battery current load management in a portable computing device |
CN103793361A (en) * | 2012-10-31 | 2014-05-14 | 郑州孜晗软件科技有限公司 | Mobile consumption calculation machine |
CN104679703A (en) * | 2013-11-29 | 2015-06-03 | 英业达科技有限公司 | High-density server system |
CA3090944A1 (en) | 2017-02-08 | 2018-08-16 | Upstream Data Inc. | Blockchain mine at oil or gas facility |
WO2019139632A1 (en) | 2018-01-11 | 2019-07-18 | Lancium Llc | Method and system for dynamic power delivery to a flexible datacenter using unutilized energy sources |
US10873211B2 (en) | 2018-09-14 | 2020-12-22 | Lancium Llc | Systems and methods for dynamic power routing with behind-the-meter energy storage |
US11025060B2 (en) | 2018-09-14 | 2021-06-01 | Lancium Llc | Providing computational resource availability based on power-generation signals |
US11016553B2 (en) | 2018-09-14 | 2021-05-25 | Lancium Llc | Methods and systems for distributed power control of flexible datacenters |
US11031787B2 (en) | 2018-09-14 | 2021-06-08 | Lancium Llc | System of critical datacenters and behind-the-meter flexible datacenters |
US10367353B1 (en) | 2018-10-30 | 2019-07-30 | Lancium Llc | Managing queue distribution between critical datacenter and flexible datacenter |
US11031813B2 (en) | 2018-10-30 | 2021-06-08 | Lancium Llc | Systems and methods for auxiliary power management of behind-the-meter power loads |
US10452127B1 (en) | 2019-01-11 | 2019-10-22 | Lancium Llc | Redundant flexible datacenter workload scheduling |
US11128165B2 (en) | 2019-02-25 | 2021-09-21 | Lancium Llc | Behind-the-meter charging station with availability notification |
US11907029B2 (en) | 2019-05-15 | 2024-02-20 | Upstream Data Inc. | Portable blockchain mining system and methods of use |
US11397999B2 (en) | 2019-08-01 | 2022-07-26 | Lancium Llc | Modifying computing system operations based on cost and power conditions |
US11868106B2 (en) | 2019-08-01 | 2024-01-09 | Lancium Llc | Granular power ramping |
US10618427B1 (en) | 2019-10-08 | 2020-04-14 | Lancium Llc | Behind-the-meter branch loads for electrical vehicle charging |
US10608433B1 (en) | 2019-10-28 | 2020-03-31 | Lancium Llc | Methods and systems for adjusting power consumption based on a fixed-duration power option agreement |
US11042948B1 (en) | 2020-02-27 | 2021-06-22 | Lancium Llc | Computing component arrangement based on ramping capabilities |
US11815967B2 (en) * | 2021-10-15 | 2023-11-14 | Dell Products L.P. | Power throttling of high performance computing (HPC) platform components |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5719800A (en) * | 1995-06-30 | 1998-02-17 | Intel Corporation | Performance throttling to reduce IC power consumption |
US6564328B1 (en) * | 1999-12-23 | 2003-05-13 | Intel Corporation | Microprocessor with digital power throttle |
US20040003303A1 (en) * | 2002-07-01 | 2004-01-01 | Newisys, Inc. | Methods and apparatus for power management |
US20040163001A1 (en) * | 2003-02-14 | 2004-08-19 | Bodas Devadatta V. | Enterprise power and thermal management |
US20050015632A1 (en) * | 2003-07-18 | 2005-01-20 | Chheda Sachin Navin | Rack-level power management of computer systems |
US20050102544A1 (en) * | 2003-11-10 | 2005-05-12 | Dell Products L.P. | System and method for throttling power in one or more information handling systems |
US6931559B2 (en) * | 2001-12-28 | 2005-08-16 | Intel Corporation | Multiple mode power throttle mechanism |
US20050283624A1 (en) * | 2004-06-17 | 2005-12-22 | Arvind Kumar | Method and an apparatus for managing power consumption of a server |
US20050289362A1 (en) * | 2004-06-24 | 2005-12-29 | Merkin Aaron E | Maintaining server performance in a power constrained environment |
US7032119B2 (en) * | 2000-09-27 | 2006-04-18 | Amphus, Inc. | Dynamic power and workload management for multi-server system |
US20060161794A1 (en) * | 2005-01-18 | 2006-07-20 | Dell Products L.P. | Prioritizing power throttling in an information handling system |
US7281146B2 (en) * | 2004-06-30 | 2007-10-09 | Intel Corporation | Dynamic power requirement budget manager |
US7400062B2 (en) * | 2002-10-15 | 2008-07-15 | Microsemi Corp. - Analog Mixed Signal Group Ltd. | Rack level power management |
US7562234B2 (en) * | 2005-08-25 | 2009-07-14 | Apple Inc. | Methods and apparatuses for dynamic power control |
US7607030B2 (en) * | 2006-06-27 | 2009-10-20 | Hewlett-Packard Development Company, L.P. | Method and apparatus for adjusting power consumption during server initial system power performance state |
- 2007
- 2007-03-05: US application US11/681,818 filed (issued as US7779276B2; status: Expired - Fee Related)
- 2008
- 2008-02-26: WO application PCT/EP2008/052319 filed (published as WO2008107344A2; status: Application Filing)
Cited By (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8151122B1 (en) * | 2007-07-05 | 2012-04-03 | Hewlett-Packard Development Company, L.P. | Power budget managing method and system |
US9575472B1 (en) * | 2008-07-25 | 2017-02-21 | Autani, Llc | Automation devices, systems, architectures, and methods for energy management and other applications |
US8429435B1 (en) * | 2008-07-25 | 2013-04-23 | Autani Corporation | Automation devices, systems, architectures, and methods for energy management and other applications |
US8375228B2 (en) * | 2008-08-06 | 2013-02-12 | International Business Machines Corporation | Multiple-node system power utilization management |
US20100037077A1 (en) * | 2008-08-06 | 2010-02-11 | Vivek Kashyap | Multiple-node system power utilization management |
US8713334B2 (en) | 2008-08-08 | 2014-04-29 | Dell Products L.P. | Demand based power allocation |
US20100037070A1 (en) * | 2008-08-08 | 2010-02-11 | Dell Products L.P. | Demand based power allocation |
US7984311B2 (en) * | 2008-08-08 | 2011-07-19 | Dell Products L.P. | Demand based power allocation |
US8381000B2 (en) | 2008-08-08 | 2013-02-19 | Dell Products L.P. | Demand based power allocation |
US8918657B2 (en) | 2008-09-08 | 2014-12-23 | Virginia Tech Intellectual Properties | Systems, devices, and/or methods for managing energy usage |
US8028183B2 (en) | 2008-09-18 | 2011-09-27 | International Business Machines Corporation | Power cap lower bound exploration in a server environment |
US7971105B2 (en) | 2009-01-16 | 2011-06-28 | Freescale Semiconductor, Inc. | Device and method for detecting and correcting timing errors |
US20100182055A1 (en) * | 2009-01-16 | 2010-07-22 | Anton Rozen | Device and method for detecting and correcting timing errors |
US8276012B2 (en) | 2009-06-30 | 2012-09-25 | International Business Machines Corporation | Priority-based power capping in data processing systems |
US9026818B2 (en) | 2009-06-30 | 2015-05-05 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Priority-based power capping in data processing systems |
US20100332872A1 (en) * | 2009-06-30 | 2010-12-30 | International Business Machines Corporation | Priority-Based Power Capping in Data Processing Systems |
US8707074B2 (en) | 2009-06-30 | 2014-04-22 | International Business Machines Corporation | Priority-based power capping in data processing systems |
US20120283892A1 (en) * | 2010-01-29 | 2012-11-08 | Daniel Humphrey | Managing Electric Energy Distribution To Multiple Loads Using Selective Capping |
US9229514B2 (en) * | 2010-01-29 | 2016-01-05 | Hewlett Parkard Enterprise Development LP | Managing electric energy distribution to multiple loads using selective capping |
US8589556B2 (en) | 2010-11-05 | 2013-11-19 | International Business Machines Corporation | Allocation of energy budgets to individual partitions |
US9494991B2 (en) | 2010-11-05 | 2016-11-15 | International Business Machines Corporation | Energy allocation to groups of virtual machines |
US9477286B2 (en) | 2010-11-05 | 2016-10-25 | International Business Machines Corporation | Energy allocation to groups of virtual machines |
JP2012185693A (en) * | 2011-03-07 | 2012-09-27 | Nec Corp | Power consumption controller, power consumption control method and program |
US10180665B2 (en) * | 2011-09-16 | 2019-01-15 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Fluid-cooled computer system with proactive cooling control using power consumption trend analysis |
US20130073096A1 (en) * | 2011-09-16 | 2013-03-21 | International Business Machines Corporation | Proactive cooling control using power consumption trend analysis |
CN103138999A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Monitor method of multiple rack systems |
CN103138972A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server cabinet system |
CN103138945A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system for managing power supply |
CN103138940A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system |
US20130138980A1 (en) * | 2011-11-28 | 2013-05-30 | Inventec Corporation | Server rack system for managing power supply |
CN103138941A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Communication method for server rack system |
CN103138967A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system and delayed start-up method thereof |
CN103139248A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Rack system |
CN103138970A (en) * | 2011-11-28 | 2013-06-05 | 英业达科技有限公司 | Server rack system |
US8930725B2 (en) * | 2011-11-28 | 2015-01-06 | Inventec Corporation | Server rack system for managing power supply |
US8935010B1 (en) * | 2011-12-13 | 2015-01-13 | Juniper Networks, Inc. | Power distribution within high-power networking equipment |
US20130160003A1 (en) * | 2011-12-19 | 2013-06-20 | Vmware, Inc. | Managing resource utilization within a cluster of computing devices |
US8843772B2 (en) * | 2012-05-22 | 2014-09-23 | Dell Products Lp | Systems and methods for dynamic power allocation in an information handling system environment |
US20130318371A1 (en) * | 2012-05-22 | 2013-11-28 | Robert W. Hormuth | Systems and methods for dynamic power allocation in an information handling system environment |
US9355055B1 (en) * | 2012-09-07 | 2016-05-31 | Amazon Technologies, Inc. | Network and power connection management |
CN103677097A (en) * | 2012-09-18 | 2014-03-26 | 英业达科技有限公司 | Server rack system and server |
US11073891B2 (en) | 2012-11-27 | 2021-07-27 | International Business Machines Corporation | Distributed power budgeting |
US9292074B2 (en) * | 2012-11-27 | 2016-03-22 | International Business Machines Corporation | Distributed power budgeting |
US9298247B2 (en) * | 2012-11-27 | 2016-03-29 | International Business Machines Corporation | Distributed power budgeting |
US10331192B2 (en) | 2012-11-27 | 2019-06-25 | International Business Machines Corporation | Distributed power budgeting |
US20140149760A1 (en) * | 2012-11-27 | 2014-05-29 | International Business Machines Corporation | Distributed power budgeting |
US20140149761A1 (en) * | 2012-11-27 | 2014-05-29 | International Business Machines Corporation | Distributed power budgeting |
CN104969182A (en) * | 2012-12-28 | 2015-10-07 | 英特尔公司 | High dynamic range software-transparent heterogeneous computing element processors, methods, and systems |
US10162687B2 (en) | 2012-12-28 | 2018-12-25 | Intel Corporation | Selective migration of workloads between heterogeneous compute elements based on evaluation of migration performance benefit and available energy and thermal budgets |
US20150113300A1 (en) * | 2013-10-22 | 2015-04-23 | Nvidia Corporation | Battery operated computer system |
US20150241946A1 (en) * | 2014-02-25 | 2015-08-27 | International Business Machines Corporation | Distributed power management with performance and power boundaries |
US9740275B2 (en) * | 2014-02-25 | 2017-08-22 | International Business Machines Corporation | Method performed by an associated power management controller of a zone based on node power consumption and priority data for each of the plurality of zones |
US20150241943A1 (en) * | 2014-02-25 | 2015-08-27 | International Business Machines Corporation | Distributed power management with performance and power boundaries |
US9746909B2 (en) * | 2014-02-25 | 2017-08-29 | International Business Machines Corporation | Computer program product and a node implementing power management by associated power management controllers based on distributed node power consumption and priority data |
US10353453B2 (en) * | 2014-02-25 | 2019-07-16 | Dell Products L.P. | Methods and systems for multiple module power regulation in a modular chassis |
US20150241945A1 (en) * | 2014-02-25 | 2015-08-27 | Dell Products L.P. | Methods and systems for multiple module power regulation in a modular chassis |
US9684366B2 (en) | 2014-02-25 | 2017-06-20 | International Business Machines Corporation | Distributed power management system with plurality of power management controllers controlling zone and component power caps of respective zones by determining priority of other zones |
US9891700B2 (en) * | 2015-10-02 | 2018-02-13 | Infineon Technologies Austria Ag | Power management for datacenter power architectures |
US10509456B2 (en) * | 2016-05-06 | 2019-12-17 | Quanta Computer Inc. | Server rack power management |
US20170322613A1 (en) * | 2016-05-06 | 2017-11-09 | Quanta Computer Inc. | Server rack power management |
US10126798B2 (en) * | 2016-05-20 | 2018-11-13 | Dell Products L.P. | Systems and methods for autonomously adapting powering budgeting in a multi-information handling system passive chassis environment |
US20170336856A1 (en) * | 2016-05-20 | 2017-11-23 | Dell Products L.P. | Systems and methods for autonomously adapting powering budgeting in a multi-information handling system passive chassis environment |
US10437303B2 (en) * | 2016-05-20 | 2019-10-08 | Dell Products L.P. | Systems and methods for chassis-level view of information handling system power capping |
US20170336855A1 (en) * | 2016-05-20 | 2017-11-23 | Dell Products L.P. | Systems and methods for chassis-level view of information handling system power capping |
US11039404B2 (en) * | 2018-07-18 | 2021-06-15 | Dell Products L.P. | Method for control and distribution of the amount of power to be lowered or raised in a multi-load system |
US20200029284A1 (en) * | 2018-07-18 | 2020-01-23 | Dell Products L.P. | Method for control and distribution of the amount of power to be lowered or raised in a multi-load system |
US20200089298A1 (en) * | 2018-09-14 | 2020-03-19 | Quanta Computer Inc. | Method and system for dynamically allocating and optimizing power resources |
US10884469B2 (en) * | 2018-09-14 | 2021-01-05 | Quanta Computer Inc. | Method and system for dynamically allocating and optimizing power resources |
US20200142465A1 (en) * | 2018-11-02 | 2020-05-07 | Dell Products L.P. | Power management system |
US10852804B2 (en) * | 2018-11-02 | 2020-12-01 | Dell Products L.P. | Power management system |
US11520396B2 (en) | 2018-11-02 | 2022-12-06 | Dell Products L.P. | Power management system |
US11635798B2 (en) * | 2019-02-08 | 2023-04-25 | Hewlett Packard Enterprise Development Lp | Dynamic OCP adjustment |
US11281275B2 (en) * | 2019-10-10 | 2022-03-22 | Dell Products L.P. | System and method for using input power line telemetry in an information handling system |
WO2021074160A1 (en) * | 2019-10-17 | 2021-04-22 | Fujitsu Technology Solutions Intellectual Property Gmbh | Method for specifying a power limit of a processor |
US11157056B2 (en) * | 2019-11-01 | 2021-10-26 | Dell Products L.P. | System and method for monitoring a maximum load based on an aggregate load profile of a system |
US11181961B2 (en) * | 2020-04-07 | 2021-11-23 | Dell Products L.P. | System and method for increasing power delivery to information handling systems |
US11249533B2 (en) * | 2020-06-22 | 2022-02-15 | Dell Products L.P. | Systems and methods for enabling power budgeting in an information handling system comprising a plurality of modular information handling systems |
US20230189469A1 (en) * | 2021-12-13 | 2023-06-15 | Dell Products, L.P. | Distribution of available power to devices in a group |
US11844188B2 (en) * | 2021-12-13 | 2023-12-12 | Dell Products, L.P. | Distribution of available power to devices in a group |
US20230205248A1 (en) * | 2021-12-23 | 2023-06-29 | Advanced Micro Devices, Inc. | Controlling Electrical Power Consumption for Elements in an Electronic Device based on a Platform Electrical Power Limit |
US11714442B2 (en) * | 2021-12-23 | 2023-08-01 | Ati Technologies Ulc | Controlling electrical power consumption for elements in an electronic device based on a platform electrical power limit |
Also Published As
Publication number | Publication date |
---|---|
WO2008107344A2 (en) | 2008-09-12 |
WO2008107344A3 (en) | 2008-11-13 |
US7779276B2 (en) | 2010-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7779276B2 (en) | Power management in a power-constrained processing system | |
US7831843B2 (en) | Apparatus and methods for managing power in an information handling system | |
US8006108B2 (en) | Dynamic selection of group and device power limits | |
US8250382B2 (en) | Power control of servers using advanced configuration and power interface (ACPI) states | |
US20210232198A1 (en) | Method and apparatus for performing power analytics of a storage system | |
US7272732B2 (en) | Controlling power consumption of at least one computer system | |
US7418608B2 (en) | Method and an apparatus for managing power consumption of a server | |
US9568966B2 (en) | Dynamic power budget allocation | |
US8390148B2 (en) | Systems and methods for power supply wear leveling in a blade server chassis | |
US8001407B2 (en) | Server configured for managing power and performance | |
US9436256B2 (en) | Dynamic CPU voltage regulator phase shedding | |
US7131019B2 (en) | Method of managing power of control box | |
US8473768B2 (en) | Power control apparatus and method for cluster system | |
US7529949B1 (en) | Heterogeneous power supply management system | |
US9329586B2 (en) | Information handling system dynamic fan power management | |
US20090119523A1 (en) | Managing Power Consumption Based on Historical Average | |
US20090307514A1 (en) | System and Method for Managing Power Supply Units | |
US8065537B2 (en) | Adjusting cap settings of electronic devices according to measured workloads | |
EP2804076A2 (en) | Adaptively Limiting a Maximum Operating Frequency in a Multicore Processor | |
US20170097673A1 (en) | Computer system and method providing both main and auxiliary power over a single power bus | |
US20240111349A1 (en) | Data center power consumption through modulation of power supply unit conversion frequency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOLAN, JOSEPH EDWARD, MR.;CAMPBELL, KEITH MANDERS, MR.;KUMAR, VIJAY, MR.;AND OTHERS;REEL/FRAME:018957/0228;SIGNING DATES FROM 20070220 TO 20070223 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
SULP | Surcharge for late payment |
AS | Assignment |
Owner name: LINKEDIN CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:035201/0479 Effective date: 20140331 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LINKEDIN CORPORATION;REEL/FRAME:044746/0001 Effective date: 20171018 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552) Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20220817 |