US20020184327A1 - System and method for partitioning address space in a proxy cache server cluster
- Publication number: US20020184327A1 (U.S. application Ser. No. 09/853,290)
- Authority: United States
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H04L61/00—Network arrangements, protocols or services for addressing or naming
- H04L61/35—Non-standard use of addresses for implementing network functionalities, e.g. coding subscription information within the address or functional addressing, i.e. assigning an address to a function
- H04L9/40—Network security protocols
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1023—Server selection for load balancing based on a hash applied to IP addresses or costs
- H04L67/563—Data redirection of data network streams (provisioning of proxy services)
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching (provisioning of proxy services)
- H04L69/329—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
Definitions
- the present invention relates to services of communications networks and, more specifically, to a system and method for increasing the availability of services offered by a service provider of a communications network.
- proxy cache server may be used to accelerate client access to the Internet (“forward proxy”), to accelerate Internet access to a web server (“reverse proxy”), or to accelerate Internet access transparently to either client access or web server access (“transparent proxy”).
- the proxy may access frequently requested services from the web servers and store (“host”) them locally to effectively speed-up access to future requests for the services.
- for example, the proxy may host frequently requested web pages of a web site.
- the proxy attempts to fulfill that request from its local storage.
- the proxy forwards the request to a web site server that can satisfy the request.
- the web server then responds by transferring a stream of information to the proxy, which stores and forwards the information over the Internet onto the client.
- proxies “front-end” the web servers (and may, in fact, be resident on the web servers) and the network addresses of the proxies (rather than the actual web site) are generally mapped to the domain name of the service provider.
- communication exchanges with the proxies generally comprise IP packets or UDP/TCP-socketed traffic, such as socket requests and responses.
- a socket is essentially an interface between an application layer and transport layer of a protocol stack that enables the transport layer to identify which application it must communicate with in the application layer.
- a socket interfaces to a TCP/IP protocol stack via a set of application programming interfaces (API) consisting of a plurality of entry points into that stack.
- for a connection-oriented protocol such as TCP, the socket may be considered a session; however, for a connectionless protocol such as an IP datagram using the User Datagram Protocol (UDP), the socket is an entity/handle that the networking software (protocol stack) uses to uniquely identify an application layer end point, typically through the use of port numbers.
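The role of port numbers in identifying an application layer end point can be illustrated with a short sketch. This example is illustrative only and is not part of the patent; it uses Python's standard socket API:

```python
import socket

# A UDP socket is bound to an (address, port) pair; the port number is
# what the protocol stack uses to hand an incoming datagram to the
# correct application layer end point.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.settimeout(5)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

# The stack delivers the datagram to this socket solely because the
# destination port matches the port the socket is bound to.
data, addr = server.recvfrom(1024)
client.close()
server.close()
```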
- the software entity within the server that manages the communication exchanges is a TCP/IP process, which is schematically illustrated as layers of a typical Internet communications protocol stack. Protocol stacks and the TCP/IP reference model are well-known and are, for example, described in Computer Networks by Andrew S. Tanenbaum, printed by Prentice Hall PTR, Upper Saddle River, N.J., 1996.
- the popularity of Internet caching, due to the increased efficiencies it offers, has led to the construction of caching architectures that link multiple proxy cache servers in a single site.
- the servers may service an even larger cluster of content servers.
- the proxy cache servers are sometimes accessed via a Layer 4 (L4) switch that is tied to the site's edge router.
- the L4 switch is generally responsible for load-balancing functions across the cache. By load-balancing, it is meant that the switch assigns requests to the various caches according to a mechanism that balances usage so that no single cache is over-utilized, while taking into account any connection context associated with the client requesting the dataflow.
- the cache structure can quickly become filled with large, frequently requested files.
- the problem is compounded by the load-balancing function of an L4 switch because all caches tend to be populated with the same large files. This loading of all caches occurs, in part, because the L4 switch is designed to provide load-balancing across all caches in the cluster. Accordingly, the behavior of the switch causes further copies of the file to be stored on each of the existing proxies. Since a large number of requests for a given file may be present at any one time, the L4 switch will naturally fill all caches with copies of the file, generally without regard to how large the cache cluster (number of servers) is made.
- L4 switches typically include functions for selectively routing requests to a particular cache, based upon a specific cache location for the file, without regard to the load-balancing imperative. This would ensure that other caches remain free of the over-requested file. However, it may involve the use of internal hashing functions on the requested file's URL that tie up processor time, thus delaying the switch's load-balancing operations. As caching arrangements become increasingly efficient, this often undesirably slows the system. In addition, L4 switches often cannot distinguish between large and small files. Where files are small (for example, 50 Kbytes or less), it is generally more efficient to allow ordinary load-balancing to occur, regardless of the number of requests for a given file.
- the proxy partition cache (PPC) architecture should thereby relieve congestion and overfilling of the caches with duplicate copies of large files.
- the present invention overcomes the disadvantages of the prior art by providing a proxy partition cache (PPC) architecture and a technique for address-partitioning a proxy cache consisting of a grouping of discrete, cooperating caches (servers). Client requests for objects (files) of a given size are redirected or reassigned to a single cache in the grouping, notwithstanding the cache to which the request is directed by the L4 switch based upon load-balancing considerations. The file is then returned to the L4 switch (or another load-balancing mechanism) via the switch-designated cache for vending to the requesting client.
- the redirection/reassignment occurs according to a function within the cache to which the request is directed, so that the L4 switch remains free of additional tasks that could compromise speed. In this manner, congestion in the caches resulting from overfilling of the entire address space with a particular large file or files is avoided, the particular large file or files being retained on the predetermined cache, to which requests made to all other caches are redirected/reassigned.
- the PPC is a member of a proxy cache cluster (PCC) consisting of a number of coordinated processor/memory mechanisms (PMMs).
- the grouping of caches can be interconnected to the L4 switch and associated router by a first network segment. This enables communication with the client via the Internet cloud or another network.
- a second network segment interconnects the caches to a content server farm in which large files are stored, handled and selectively transferred to the cache as requested.
- the separate servers can be interconnected via a third network segment that enables redirection of client file requests (where individual server addresses are publicly available), or that enables tunneling (according to known tunneling standards where the servers are non-public) of data between the servers.
- Each cache is provided with a directory and hashing algorithm (any conventional hashing function can be used).
- the URL of a client request is subjected to the hash, and the hash result can be processed using, for example, a modulo function with respect to the number of servers in the grouping. This provides a discrete number in a range equal to the number of servers.
- the file is stored in and retrieved only from the server corresponding to the value provided by the modulo of the hash. In this manner, discrete content files are located uniquely within the servers in the group and are generally not replicated on separate servers. This allows the other servers to operate with less congestion from particular large, often-requested files.
- the caches can each be adapted to determine a cutoff size for redirection/reassignment of requests. If a requested file/object is below a certain size, then the switch-designated cache server caches and vends the file/object directly. This prevents address-partition resources from being used inefficiently on small, quickly vended objects.
- a request for an object is referred to a different discrete server unless the discrete server and the original receiving server (as assigned by the load-balancing mechanism) are identical, whereby the request is optimally processed on the receiving server.
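The address-partitioning rule summarized above (hash the URL, take it modulo the number of servers, and redirect unless the designated server is the receiving server) can be sketched as follows. This is an illustrative sketch only; the function names and the choice of MD5 as the "conventional hashing function" are assumptions, not part of the patent:

```python
import hashlib

def partition_server(url: str, num_servers: int) -> int:
    """Map a requested URL to the index of the single cache server
    designated to hold the corresponding object."""
    # Any conventional hash may be used; MD5 is chosen here only for
    # illustration.
    digest = hashlib.md5(url.encode("utf-8")).digest()
    url_hash = int.from_bytes(digest, "big")
    # Modulo the server count yields an index in 0..num_servers-1, so
    # exactly one server in the grouping is always designated.
    return url_hash % num_servers

def should_redirect(url: str, receiving_server: int, num_servers: int) -> bool:
    """A request is referred to the designated server unless the
    load-balancing mechanism already delivered it there."""
    return partition_server(url, num_servers) != receiving_server
```

Because every cache applies the same hash and modulo, all of them independently agree on which single server owns a given URL, which is what keeps a large file from being replicated across the cluster.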
- FIG. 1 is a block diagram of a computer internetwork including a collection of network segments connected to a plurality of client and server computers, the latter of which may be organized as a service provider;
- FIG. 2 is a highly schematized diagram of software components of the service provider servers of FIG. 1;
- FIG. 3 is a schematic block diagram of a proxy cache cluster (PCC) comprising a group of processor/memory mechanisms (PMMs) that cooperatively interact to host PCC services associated with network addresses of the service provider including an inventive Partition Proxy Cache (PPC) cluster member according to this invention;
- FIG. 4 is a schematic block diagram of a PPC and associated interconnected components of the present invention.
- FIG. 5 is a flowchart depicting a sequence of steps associated with a proxy cache-based address partition procedure according to this invention.
- FIG. 1 is a schematic block diagram of a computer internetwork 100 comprising a collection of network segments connected to a plurality of computers 120 and servers 130 , 200 , 202 and 206 , as well as a router 140 and switch units 142 .
- Each computer generally comprises a central processing unit (CPU) 102 , a memory 104 and an input/output (I/O) unit 106 interconnected by a system bus 108 .
- the memory 104 may comprise storage locations typically composed of random access memory (RAM) devices, which are addressable by the CPU 102 and I/O unit 106 .
- An operating system 105 , portions of which are typically resident in memory and executed by the CPU, functionally organizes the computer by, inter alia, invoking network operations in support of application programs executing on the CPU.
- An example of such an application program is a web browser 110 , such as Netscape Navigator® available from Netscape Communications Corporation.
- the network segments may comprise local area networks 145 or intranets, point-to-point links 135 and an Internet cloud 150 .
- the segments are interconnected by intermediate stations, such as a network switch 142 or router 140 , and configured to form an internetwork of computers that communicate by exchanging data packets according to a predefined set of protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP).
- the internetwork 100 is organized in accordance with a client/server architecture wherein computers 120 are personal computers or workstations configured as clients for interaction with users and computers 130 , 200 and 202 are configured as servers that perform services as directed by the clients.
- the servers 200 may be configured to operate as a service provider (e.g., a service provider/web site 180 ), coordinated by a load-balancing server 202 , as described further herein, whereas servers 130 may be configured as domain name system (DNS) servers and/or Internet provider servers.
- the DNS servers provide the clients 120 , origin servers and proxies with the network (e.g., IP) address(es) of requested services in response to packets directed to the domain names for those services.
- the Internet providers provide Internet access to the clients via, e.g., dial-up telephone lines or cable links.
- the client 120 may utilize the web browser 110 to gain access to the web site 180 and to navigate, view or retrieve services stored on the servers 200 , hereinafter “web” servers.
- web servers may be associated with one or more proxy cache servers 206 . While the proxy cache and web server functions can be combined in a single server box, it is more common to divide the web server and proxy caching components and interconnect them via the local area network or other dedicated connections.
- One web server can be associated with a plurality of proxy cache servers. Alternatively, a single proxy cache can be a reverse proxy for many web servers.
- FIG. 2 is a highly schematized diagram of software components of the web server 200 and proxy cache server 206 .
- the web server includes an operating system 250 having utility programs that interact with various application program components to provide, e.g., a storage interface 254 and a network interface 252 , the latter enabling communication with a client browser 110 over the internetwork 100 .
- a local memory 204 and disk storage 208 are also provided.
- the application program components include a web server application 210 .
- the proxy cache server 206 includes an operating system 260 having utility programs that interact with various application program components to provide, e.g., a storage interface 264 and a network interface 262 .
- a local memory 274 and disk 268 are also provided to the proxy cache server.
- the application program components include a proxy server application (“proxy”) 280 .
- the reverse proxy 280 “front-ends” the web server such that the network address of the proxy (rather than the actual web site) is published in the DNS server 130 and mapped to the domain name of the service provider.
- the client sends a request packet directed to the network address of a particular proxy 280 of the web site.
- the proxy 280 receives the request from the client and, if the client is authorized to access services from the web site, the proxy attempts to fulfill that request locally from information stored, e.g., in memory 274 or on disk 268 —in either case, the memory and/or disk function as a “cache” for quickly storing and retrieving the services.
- the proxy forwards the request onto the web server application 210 .
- the web server application then responds by transferring a stream of information to the proxy, which stores and forwards the information onto the client browser 110 .
- the proxy 280 is shown as a separate platform, it should be noted that the proxy may also be configured to run on the same server platform as the web server.
- FIG. 3 is a schematic block diagram of a PCC 300 comprising a cluster of processor/memory mechanisms (PMMs 310 ) with associated network connectivity that share a common configuration.
- the PMMs cooperatively interact as a system to host PCC services associated with network addresses of the web site/service provider 180 .
- the common configuration describes the PCC in terms of the PMMs, static PCC configuration parameters and hosted PCC services, which may include any service that is provided on the World Wide Web—an example of a common web site service is an HTTP service.
- a PCC service is characterized by (i) a load rating, which is a number/value that reflects a measure of the PCC service's resource consumption, such as the amount of traffic at the web site 180 .
- the actual unit of the value is not specified; it need only be a consistent measure of the resource consumption metric.
- the value is a relative value; i.e., relative to load rating of the other services hosted by the PCC. Measurement of this metric can be performed manually or via a software agent that dynamically (event-driven or continuous) assesses the load rating.
- the agent may comprise conventional instrumentation software, such as a simple network management protocol (SNMP) agent, or any other application that instruments the service to calculate its rating.
- Calculations are performed using normalized units of measurement to provide flexibility in the ratings and to facilitate dynamic (“on-the-fly”) computer-generated measurements, particularly in the event of inclusion of additional PMMs to a cluster in accordance with the novel clustering technique.
- the agent further maintains (updates) these generated statistics for use when balancing the hosted services across the PCC.
- the PCC service is further characterized by (ii) service type, such as an HTTP proxy service, an HTTP accelerator service or a file transfer protocol (FTP) proxy service; further examples of service types include Real Audio, Real Video, NNTP and DNS; and (iii) service type parameters that are unique to each service.
- Typical examples of conventional parameters used by an HTTP proxy service include (a) a list of network addresses of, e.g., the web site that allows access to the web servers, (b) whether logging is activated, (c) the format of the activated log (common or extended) and (d) the log roll rate.
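Purely as an illustration, the parameter set enumerated above might be represented as a simple mapping; every key name and value below is a hypothetical label introduced here, not a parameter name from the patent:

```python
# Hypothetical representation of the HTTP proxy service parameters
# (a)-(d) listed above; all key names are illustrative assumptions.
http_proxy_service_params = {
    "allowed_web_server_addresses": ["10.0.0.2", "10.0.0.3"],  # (a)
    "logging_enabled": True,                                   # (b)
    "log_format": "extended",        # (c) "common" or "extended"
    "log_roll_rate_hours": 24,       # (d)
}
```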
- the PMMs are organized as a PCC in accordance with the proxy cache clustering technique that dynamically assigns each PMM to the PCC.
- each PMM is configured with a unique identifier (ID), a network address and PCC configuration software to enable participation in the clustering process.
- ID may be the media access control (MAC) address of a network interface card of the PMM or it may be the network address of the PMM.
- the PMM “listens” for a mechanism notifying the PMM that it is a member of a PCC 300 .
- a designated PMM of the PCC functions as a PCC coordinator 350 to administer the common configuration and, as such, is responsible for assigning PCC service address(es) to each PMM.
- Coordination messages 320 are passed between the PCC-coordinating PMM 350 and the other PMMs in the cluster.
- the techniques for organizing and coordinating the various PMMs are described in detail in the above-incorporated U.S. patent application Ser. No. 09/195,982 entitled PROXY CACHE CLUSTER by Brent R. Christensen, et al.
- one cluster member is defined as a Partition Proxy Cache (PPC) 370 according to this invention.
- This cluster member can be handled by the PCC as a PMM, and in communication with the coordinating PMM.
- FIG. 4 further details the arrangement of the PPC according to an embodiment of this invention.
- a router 410 with backup functions for retaining information in the event of a failure is provided in communication with the Internet cloud 150 .
- the router receives and transmits data using a TCP/IP protocol based upon the received requests by clients for vended content in the form of data files.
- the requests can be made in a variety of formats: HTTP, file transfer protocol (FTP), and the like.
- the content file is particularly identified based upon a specific URL associated with the file.
- the router 410 is connected through a layer 4 (L4) switch 430 .
- the L4 switch is arranged to provide load-balancing functions to the overall proxy cache address space ( 452 , 454 , 456 ).
- An optional backup switch 420 is also shown interconnected between the router 410 and downstream server architecture.
- the backup switch 420 operates as a backup to the L4 switch 430 , and its functions and architecture are essentially the same.
- the L4 switch 430 (and backup switch 420 ) communicates with the network segment for the network interface card NIC 1 ( 440 ), typically associated with incoming traffic to the interconnected proxy cache 450 .
- the cache is organized as a group of cooperating proxy cache servers 452 , 454 and 456 .
- the servers 452 , 454 and 456 can operate in accordance with the commercially available Excelerator from Volera, Inc. of San Jose, Calif. However, any acceptable operating system can be employed.
- the cache 450 is interconnected to the network segment NIC 2 ( 460 ), typically associated with outgoing traffic from the cache 450 .
- NIC 2 branches to a content server farm 470 sized and arranged to handle the expected traffic and volume of requested content.
- Individual servers 472 , 474 , 476 , 478 and 479 are shown in this example as part of the server farm 470 . Each server handles a certain volume of overall file storage, transferring specific files to the cache when requested.
- NIC 1 and NIC 2 need not be exclusive of each other. Accordingly, NIC 1 and NIC 2 can be the same network interface. If so, NIC 1 and NIC 2 would share the same network segment in communicating with both the Internet cloud 150 and server farm 470 .
- the cache servers 452 , 454 and 456 are themselves interconnected through a network segment for interface NIC 3 ( 480 , 482 ). This inventive segment 480 , 482 is provided to enable redirection or reassignment of a request to a particular cache server despite its normal assignment to any server in the overall cache based upon normal L 4 load-balancing procedures.
- NIC 3 can be part of the same network interface card as NIC 1 and NIC 2 , or these functions can be divided in a variety of ways between one or more interfaces. The segments are shown as separate in part for illustration purposes; however, three discrete interfaces can be used in an embodiment of this invention.
- a form of tunneling is used to transfer requests from one server to another and to receive vended content files.
- Tunneling is a preferred technique (according to any of a number of acceptable, currently practiced tunneling standards, in which data transferred between servers is encapsulated so that it is maintained within a private network), as the addresses of the servers 452 , 454 and 456 are typically unpublished with respect to the outside world, and may be defined as 10-net addresses.
- a client redirect procedure can be used according to an alternate embodiment in which the server addresses are available to outside clients. This generally involves tearing down the connection currently established by the client with the site, and reestablishing a new connection to the particular server to retrieve the file.
- the reassignment/redirection procedure 500 is described.
- the cache servers 452 , 454 and 456 each contain a resident directory 492 , 494 and 496 and an application for performing a hash of the requested file's URL.
- the request is received by the switch and directed to a particular server based upon normal load-balancing considerations (step 502 ).
- the cache attempts to determine (decision step 504 ) the file size by analyzing the file size header information in the request or based upon directory information if any.
- the L4-assigned cache is filled with the requested object until a determination can be made as to whether the size of the fill exceeds a cutoff limit.
- Files under a certain size are not generally subject to redirection and are vended directly from the switch-designated cache (step 506 ).
- the cutoff for file size can be based upon the tradeoff between the time and bandwidth used to perform the redirection versus the resources used, and the impact on overall system performance, in having to cache and vend multiple copies of a given-sized file across the entire address space of the cache. In one example, a file size of 50 Kbytes is used as a cutoff, wherein files less than 50 Kbytes are handled directly.
- If, however, a file is greater than the cutoff size, it may still be vended from the initially requested cache. If the file is being cached and its directory entry is still alive (e.g., a partition time-to-live has not expired—decision step 507 ), then performance considerations may favor completing the fill to the requested cache, and thereby vending the file directly from the requested cache (step 506 ).
- If the file is larger than the cutoff and not otherwise vended from the originally requested cache (e.g., step 507 ), then its URL is subjected to the cache's internal hashing function (step 508 ).
- Any acceptable hashing function can be employed including a conventional MD-2, MD-3, MD-4 or even checksum-based hash.
- the hashed URL value (URLHASH) is then subjected to a modulo arithmetic function (step 510 ) by the cache, defined by the expression (URLHASH)MOD(#CACHESERVERS).
- This formula advantageously returns an integer value in a range whose size equals the number of cache servers, assuring that exactly one of the caches is always designated by the result.
- the exact technique for reassigning cache servers is highly variable.
- the request is then referred or forwarded to the reassigned cache server once the switch-designated cache consults its directory for the appropriate routing information.
- the request is referred over the NIC 3 network segment as a standard HTTP GET request (step 512 ).
- the request is received by the reassigned cache and then filled by this cache. This entails the transfer of the file using the tunneling mechanism or another technique back to the switch-designated cache, and then the vending of the content out to the client over the Internet cloud 150 (step 514 ).
- the reassigned cache server may receive a notification alerting it that a forwarded request is to be received so that a forwarding of the request is expected at the reassigned cache.
- the reassigned cache can be configured to reject the referred request as an unallowed external request when the forwarded request is not recognized as part of the defined redirection system. In other words, such an external request is most likely unauthorized or improperly made.
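The decision flow of procedure 500 (steps 502 through 514) can be sketched end to end. The cutoff constant, the callback names, and the use of MD5 are illustrative assumptions; the patent specifies only that some conventional hash and a modulo over the server count be used:

```python
import hashlib

CUTOFF_BYTES = 50 * 1024  # example cutoff from the description

def handle_request(url, size, receiving_server, num_servers,
                   directory_alive, forward, vend_locally):
    """Sketch of procedure 500.

    directory_alive(url) -> bool : partition time-to-live not yet expired
    forward(url, server)         : refer the request over the NIC 3 segment
    vend_locally(url)            : fill and vend from the receiving cache
    """
    # Steps 504/506: small objects are cached and vended directly by
    # the switch-designated cache.
    if size is not None and size < CUTOFF_BYTES:
        return vend_locally(url)
    # Step 507: if the file is already cached here and its directory
    # entry is still alive, complete the fill locally.
    if directory_alive(url):
        return vend_locally(url)
    # Steps 508-510: hash the URL and take it modulo the server count.
    url_hash = int.from_bytes(hashlib.md5(url.encode()).digest(), "big")
    designated = url_hash % num_servers
    if designated == receiving_server:
        return vend_locally(url)
    # Step 512: refer the request to the designated cache; the file is
    # tunneled back and vended via the switch-designated cache (step 514).
    return forward(url, designated)
```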
- the L4 switch operates without any knowledge of the cache-based reassignment described above.
- the cache-based address partitioning technique provides a substantial increase in efficiency and concomitant reduction in cache congestion, as all requests for a given file remain on a designated cache regardless of the number of requests and their timing. Even where the connection must be torn down and reestablished, as in a redirection of the client, the increased resources used to do so are offset significantly by the reduction in underlying cache congestion.
- the teachings herein can be adapted so that a given file is cached on a plurality of servers rather than one.
- the number of servers is preset, or can be dynamically altered. Redirection or tunneling of a file request from a switch-designated server to one of the plurality of specified servers, tasked to cache the particular file, can be made according to a variety of techniques.
- TTL value is needed to determine how long an object should be considered fresh while being partitioned. This provides a mechanism to check to see if the object should be moved to another partition member before the object actually becomes stale.
- This box is the partition member that should fill the request.
- the administrator has determined that the IP addresses of the cache boxes are not to be made available for a direct connection from an outside client.
Abstract
Description
- This application is related to commonly owned U.S. patent application Ser. No. 09/195,982, entitled PROXY CACHE CLUSTER, by Brent R. Christensen, et al., and U.S. patent application Ser. No. 09/337,241, entitled CACHE OBJECT STORE, by Robert Drew Major, the teachings of which applications are expressly incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to services of communications networks and, more specifically, to a system and method for increasing the availability of services offered by a service provider of a communications network.
- 2. Background Information
- It is increasingly common for users having standalone computers, or computers interconnected by an institutional intranet or local area network, to gain access to various remote sites (such as those on the “World Wide Web”) via the well-known Internet communications network. Using resident web browser applications executing on the computers, these clients may navigate among services (“pages”) stored on various servers of a service provider (“web site”) and may further request these services as desired. In a basic network communication arrangement, clients are free to access any remote web site for which uniform resource locators (URLs) are available.
- It is also increasingly common in network applications to provide the web site servers with associated proxy cache servers that link (“front-end”) the servers with the Internet. A proxy cache server (“proxy”) may be used to accelerate client access to the Internet (“forward proxy”), to accelerate Internet access to a web server (“reverse proxy”), or to accelerate Internet access transparently for either client access or web server access (“transparent proxy”). As for the latter reverse proxy environment, the proxy may access frequently requested services from the web servers and store (“host”) them locally to effectively speed up access to future requests for the services. For instance, a proxy may host frequently requested web pages of a web site. In response to a request from a browser executing on a client, the proxy attempts to fulfill that request from its local storage. If it cannot, the proxy forwards the request to a web site server that can satisfy the request. The web server then responds by transferring a stream of information to the proxy, which stores and forwards the information over the Internet onto the client. The illustrative embodiment of the invention described herein is applicable to a proxy environment.
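The store-or-forward flow just described can be sketched as follows. This is a minimal illustration only: the `ReverseProxy` class, its dictionary-backed store, and the `origin_fetch` callback are assumptions made for the example, not the patent's implementation.

```python
# Minimal sketch of the reverse-proxy flow: try the local store first,
# and fall back to the origin web server on a miss.
class ReverseProxy:
    def __init__(self, origin_fetch):
        self.store = {}                   # local "cache" (memory/disk)
        self.origin_fetch = origin_fetch  # path to the web server

    def get(self, url):
        if url in self.store:             # fulfill from local storage
            return self.store[url]
        content = self.origin_fetch(url)  # forward to the web server
        self.store[url] = content         # store, then vend to the client
        return content

calls = []
def origin(url):
    calls.append(url)
    return "page-for-" + url

proxy = ReverseProxy(origin)
first = proxy.get("/index.html")
second = proxy.get("/index.html")
```

The second request never reaches the origin server; it is satisfied from the proxy's local store.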
- As Internet traffic to the web site increases, the network infrastructure of the service provider may become strained attempting to keep up with the increased traffic. In order to satisfy such demand, the service provider may increase the number of network addresses for a particular service by providing additional web servers and/or associated proxies. These network addresses are typically Transmission Control Protocol/Internet Protocol (TCP/IP) addresses that are represented by URLs or wordtext (domain) names and that are published in a directory service, such as the well-known Domain Name System (DNS). Computers referred to as name servers implement DNS by mapping between the domain names and TCP/IP address(es).
- In the case of a “reverse proxy,” the proxies “front-end” the web servers (and may, in fact, be resident on the web servers) and the network addresses of the proxies (rather than the actual web site) are generally mapped to the domain name of the service provider. As a result, communication exchanges with the proxies generally comprise IP packets or UDP/TCP-socketed traffic, such as socket requests and responses. A socket is essentially an interface between an application layer and transport layer of a protocol stack that enables the transport layer to identify which application it must communicate with in the application layer. For example, a socket interfaces to a TCP/IP protocol stack via a set of application programming interfaces (API) consisting of a plurality of entry points into that stack. Applications that require TCP/IP connectivity typically utilize the socket API to interface into the TCP/IP stack.
- For a connection-oriented protocol such as TCP, the socket may be considered a session; however, for a connectionless protocol such as IP datagram using the User Datagram Protocol (UDP), the socket is an entity/handle that the networking software (protocol stack) uses to uniquely identify an application layer end point, typically through the use of port numbers. The software entity within the server that manages the communication exchanges is a TCP/IP process, which is schematically illustrated as layers of a typical Internet communications protocol stack. Protocol stacks and the TCP/IP reference model are well-known and are, for example, described in Computer Networks by Andrew S. Tanenbaum, printed by Prentice Hall PTR, Upper Saddle River, N.J., 1996.
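The socket-as-endpoint idea above can be illustrated with a short UDP exchange; the loopback address and message are assumptions for the sketch, which simply shows a port number acting as the application-layer end-point identifier.

```python
# A UDP exchange on the loopback interface: the receiver's socket is
# identified by a port number rather than a connection.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # OS assigns a free port number
port = receiver.getsockname()[1]       # the end-point identifier

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)   # datagram arrives at the handle
sender.close()
receiver.close()
```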
- The popularity of Internet caching, due to the increased efficiencies it offers, has led to the construction of caching architectures that link multiple proxy cache servers in a single site. The servers may service an even larger cluster of content servers. The proxy cache servers are sometimes accessed via a Layer 4 (L4) switch that is tied to the site's edge router. The L4 switch is generally responsible for load-balancing functions across the cache. By load-balancing it is meant that the switch assigns requests to various caches based upon a mechanism that attempts to balance the usage of the caches so that no single cache is over-utilized, while taking into account any connection context associated with the client requesting the dataflow.
- Requests by users to various sites for large files have grown significantly. One particular class of often-requested files is feature-length movies stored using MPEG-2 compression or another standard. These files may easily exceed 1 gigabyte of storage. Moreover, certain movies in a site (highly popular releases) may be accessed continuously in a given period. Problems specific to the vending of large files have been observed. Conventional caches have become severely congested because of the time required to vend a file to a remote user/client. Part of this problem results from the low bandwidth of many clients (often using slower connections) that tends to tie up the server for hours on the vending of a single large file, and the fact that certain files are being vended much more often than others. Specifically, when a large proportion of the cache is being devoted to large files (over 50 Kbytes) and many requests are being made for the same file, the cache structure will become quickly filled with these large files. The problem is compounded by the load-balancing function of an L4 switch because all caches tend to be populated with the same large files. This loading of all caches occurs, in part, because the L4 switch is designed to provide load-balancing across all caches in the cluster. Accordingly, the behavior of the switch causes further copies of the file across each of the existing proxies. Since a large number of requests for a given file may be present at any one time, the L4 switch will naturally fill all caches with copies of the file generally without regard to how large the cache cluster (number of servers) is made. While it may be possible to cache a single copy to a designated cache, and vend it to clients all at once (via multicast techniques, and the like) within a given time frame, this may involve unacceptable delays for certain clients. 
The result of the filling of all caches for a long period of time is that other content is delayed due to the resulting congestion.
- One solution to the problem of congestion is to address-partition the storage of files in the cache. L4 switches typically include functions for selectively routing requests to a particular cache, without regard to the load-balancing imperative, based upon a specific cache location for the file. This would ensure that other caches remain free of the over-requested file. However, this may involve the use of internal hashing functions with respect to the requested file's URL, which ties up processor time, thus delaying the switch's load-balancing operations. As caching arrangements become increasingly efficient, this often undesirably slows the system. In addition, L4 switches often cannot distinguish between large and small files. Where files are small (for example, 50 Kbytes or less) it is generally more efficient to allow unmodified load-balancing to occur, regardless of the number of requests for a given file.
- It is an object of the present invention to provide a technique for address-partitioning a proxy cache cluster and an associated proxy partition cache (PPC) that enables address partitioning at the cache situs without an external load-balancing mechanism, thereby freeing the L4 switch from any additional address-partitioning responsibilities. The PPC architecture should thereby relieve congestion and the overfilling of the caches with duplicate copies of large files.
- The present invention overcomes the disadvantages of the prior art by providing a proxy partition cache (PPC) architecture and a technique for address-partitioning a proxy cache consisting of a grouping of discrete, cooperating caches (servers) by redirecting or reassigning client requests for objects (files) of a given size to a single cache in the grouping, notwithstanding the cache to which the request is made by the L4 switch based upon load-balancing considerations. The file is then returned to the L4 switch (or another load-balancing mechanism) via the switch-designated cache for vending to the requesting client. The redirection/reassignment occurs according to a function within the cache to which the request is directed, so that the L4 switch remains freed from additional tasks that can compromise speed. In this manner, congestion in the caches resulting from overfilling of the entire address space with a particular large file or files is avoided; the particular large file or files are retained on the predetermined cache, to which requests made to all other caches are redirected/reassigned.
- In a preferred embodiment the PPC is a member of a proxy cache cluster consisting of a number of coordinated processor/memory mechanisms (PMMs). The grouping of caches can be interconnected to the L4 switch and associated router by a first network segment. This enables communication with the client via the Internet cloud or another network. A second network segment interconnects the caches to a content server farm in which large files are stored, handled and selectively transferred to the cache as requested. The separate servers can be interconnected via a third network segment that enables redirection of client file requests (where individual server addresses are publicly available), or that enables tunneling (according to known tunneling standards, where the servers are non-public) of data between the servers. Each cache is provided with a directory and hashing algorithm (any conventional hashing function can be used). The URL of a client request is subjected to the hash, and the hash result can be processed using, for example, a modulo function with respect to the number of servers in the grouping. This yields an integer in a range whose size equals the number of servers. The file is stored in and retrieved only from the server corresponding to the value provided by the modulo of the hash. In this manner discrete content files are located uniquely within the servers in the group and are not generally replicated in separate servers. This allows other servers to operate with less congestion from particular large, often-requested files.
- In a preferred embodiment the caches can each be adapted to determine a cutoff size for redirection/reassignment of requests. If a requested file/object is below a certain size, then the switch-designated cache server caches and vends the file/object directly. This prevents address-partition resources from being used inefficiently on small, quickly vended objects. In general, a request for an object is referred to a different discrete server unless the discrete server and the original receiving server (as assigned by the load-balancing mechanism) are identical, in which case the request is optimally processed on the receiving server.
- The foregoing and other objects and advantages of the invention will become clearer with reference to the following detailed description as illustrated by the drawings in which:
- FIG. 1 is a block diagram of a computer internetwork including a collection of network segments connected to a plurality of client and server computers, the latter of which may be organized as a service provider;
- FIG. 2 is a highly schematized diagram of software components of the service provider servers of FIG. 1;
- FIG. 3 is a schematic block diagram of a proxy cache cluster (PCC) comprising a group of processor/memory mechanisms (PMMs) that cooperatively interact to host PCC services associated with network addresses of the service provider including an inventive Partition Proxy Cache (PPC) cluster member according to this invention;
- FIG. 4 is a schematic block diagram of a PPC and associated interconnected components of the present invention; and
- FIG. 5 is a flowchart depicting a sequence of steps associated with a proxy cache-based address partition procedure according to this invention.
- FIG. 1 is a schematic block diagram of a computer internetwork 100 comprising a collection of network segments connected to a plurality of computers 120 and servers 130, 200 interconnected by a router 140 and switch units 142. Each computer generally comprises a central processing unit (CPU) 102, a memory 104 and an input/output (I/O) unit 106 interconnected by a system bus 108. The memory 104 may comprise storage locations typically composed of random access memory (RAM) devices, which are addressable by the CPU 102 and I/O unit 106. An operating system 105, portions of which are typically resident in memory and executed by the CPU, functionally organizes the computer by, inter alia, invoking network operations in support of application programs executing on the CPU. An example of such an application program is a web browser 110, such as Netscape Navigator® available from Netscape Communications Corporation. - The network segments may comprise
local area networks 145 or intranets, point-to-point links 135 and an Internet cloud 150. Collectively, the segments are interconnected by intermediate stations, such as a network switch 142 or router 140, and configured to form an internetwork of computers that communicate by exchanging data packets according to a predefined set of protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). It should be noted that other techniques/protocols, such as the Internet Packet Exchange (IPX) protocol and the Hypertext Transfer Protocol (HTTP), may be advantageously used with the present invention. - In the illustrative embodiment, the
internetwork 100 is organized in accordance with a client/server architecture wherein computers 120 are personal computers or workstations configured as clients for interaction with users, and computers 130, 200 are configured as servers. The servers 200 may be configured to operate as a service provider (e.g., a service provider/web site 180) and are coordinated by a load-balancing server 202, as described further herein, whereas servers 130 may be configured as domain name system (DNS) servers and/or Internet provider servers. In general, the DNS servers provide the clients 120, origin servers and proxies with the network (e.g., IP) address(es) of requested services in response to packets directed to the domain names for those services. The Internet providers, on the other hand, provide Internet access to the clients via, e.g., dial-up telephone lines or cable links. - The
client 120 may utilize the web browser 110 to gain access to the web site 180 and to navigate, view or retrieve services stored on the servers 200, hereinafter “web” servers. In order to effectively speed up access to the service provider and reduce the retrieval time for stored services, one or more web servers may be associated with one or more proxy cache servers 206. While the proxy cache and web server functions can be combined in a single server box, it is more common to divide the web server and proxy caching component and interconnect them via the local area network, or other dedicated connections therebetween. One web server can be associated with a plurality of proxy cache servers. Alternatively, a single proxy cache can be a reverse proxy for many web servers. - FIG. 2 is a highly schematized diagram of software components of the
web server 200 and proxy cache server 206. The web server includes an operating system 250 having utility programs that interact with various application program components to provide, e.g., a storage interface 254 and a network interface 252, the latter enabling communication with a client browser 110 over the internetwork 100. A local memory 204 and disk storage 208 are also provided. The application program components include a web server application 210. - Likewise, the
proxy cache server 206 includes an operating system 260 having utility programs that interact with various application program components to provide, e.g., a storage interface 264 and a network interface 262. A local memory 274 and disk 268 are also provided to the proxy cache server. In this case, the application program components include a proxy server application (“proxy”) 280. - As noted, the
reverse proxy 280 “front-ends” the web server such that the network address of the proxy (rather than the actual web site) is published in the DNS server 130 and mapped to the domain name of the service provider. To access a service of the service provider 180, the client sends a request packet directed to the network address of a particular proxy 280 of the web site. The proxy 280 receives the request on behalf of the web server and, if the client is authorized to access services from the web site, the proxy attempts to fulfill that request locally from information stored, e.g., in memory 274 or on disk 268; in either case, the memory and/or disk function as a “cache” for quickly storing and retrieving the services. If it cannot satisfy the request, the proxy forwards the request onto the web server application 210. The web server application then responds by transferring a stream of information to the proxy, which stores and forwards the information onto the client browser 110. Although the proxy 280 is shown as a separate platform, it should be noted that the proxy may also be configured to run on the same server platform as the web server. - By way of background, a proxy cache cluster (PCC) “front-ends” the servers of a service provider to increase the availability of services offered by the provider. As noted, the clients access the services by issuing requests to network addresses associated with the services. The PCC increases the availability of the services by receiving and servicing those requests on behalf of the service provider in accordance with a novel proxy cache clustering technique described herein. FIG. 3 is a schematic block diagram of a
PCC 300 comprising a cluster of processor/memory mechanisms (PMMs 310) with associated network connectivity that share a common configuration. The PMMs cooperatively interact as a system to host PCC services associated with network addresses of the web site/service provider 180. In fact, the common configuration describes the PCC in terms of the PMMs, static PCC configuration parameters and hosted PCC services, which may include any service that is provided on the World Wide Web—an example of a common web site service is an HTTP service. - According to an aspect of the invention, a PCC service is characterized by (i) a load rating, which is a number/value that reflects a measure of the PCC service's resource consumption, such as the amount of traffic at the
web site 180. Notably, the actual value is not specified; just a consistent value to measure the resource consumption metric. Furthermore, the value is a relative value; i.e., relative to load rating of the other services hosted by the PCC. Measurement of this metric can be performed manually or via a software agent that dynamically (event-driven or continuous) assesses the load rating. The agent may comprise conventional instrumentation software, such as a simple network management protocol (SNMP) agent, or any other application that instruments the service to calculate its rating. Calculations are performed using normalized units of measurement to provide flexibility in the ratings and to facilitate dynamic (“on-the-fly”) computer-generated measurements, particularly in the event of inclusion of additional PMMs to a cluster in accordance with the novel clustering technique. The agent further maintains (updates) these generated statistics for use when balancing the hosted services across the PCC. - The PCC service is further characterized by (ii) service type, such as an HTTP proxy service, an HTTP accelerator service or a file transfer protocol (FTP) proxy service; further examples of service types include Real Audio, Real Video, NNTP and DNS; and (iii) service type parameters that are unique to each service. Typical examples of conventional parameters run by an HTTP proxy service include (a) a list of network addresses of, e.g., the web site that allows access to the web servers, (b) whether logging is activated, (c) the format of the activated log (common or extended) and (d) the log roll rate. Most web sites that provide an HTTP service run logging operations to determine the type of requests issued by users, the kinds of errors received by those users and the source network addresses of the requests. This latter log provides an indication of geography with respect to, e.g., the locations of the highest concentration of users.
- The PMMs are organized as a PCC in accordance with the proxy cache clustering technique that dynamically assigns each PMM to the PCC. To that end, each PMM is configured with a unique identifier (ID), a network address and PCC configuration software to enable participation in the clustering process. The unique ID may be the media access control (MAC) address of a network interface card of the PMM or it may be the network address of the PMM. Once configured and activated, the PMM “listens” for a mechanism notifying the PMM that it is a member of a
PCC 300. A designated PMM of the PCC functions as a PCC coordinator 350 to administer the common configuration and, as such, is responsible for assigning PCC service address(es) to each PMM. Coordination messages 320 are passed between the PCC-coordinating PMM 350 and the other PMMs in the cluster. The techniques for organizing and coordinating the various PMMs are described in detail in the above-incorporated U.S. patent application Ser. No. 09/195,982 entitled PROXY CACHE CLUSTER by Brent R. Christensen, et al. - Referring further to FIG. 3, one cluster member is defined as a Partition Proxy Cache (PPC) 370 according to this invention. This cluster member can be handled by the PCC as a PMM, and is in communication with the coordinating PMM. FIG. 4 further details the arrangement of the PPC according to an embodiment of this invention. A
router 410 with backup functions for retaining information in the event of a failure is provided in communication with the Internet cloud 150. The router receives and transmits data using a TCP/IP protocol based upon requests received from clients for vended content in the form of data files. The requests can be made in a variety of formats: HTTP, file transfer protocol (FTP), and the like. The content file is particularly identified based upon a specific URL associated with the file. - The
router 410 is connected through a layer 4 (L4) switch 430. As described generally above, the L4 switch is arranged to provide load-balancing functions to the overall proxy cache address space (452, 454, 456). An optional backup switch 420 is also shown interconnected between the router 410 and downstream server architecture. The backup switch 420 operates as a backup to the L4 switch 430, and its functions and architecture are essentially the same. The L4 switch 430 (and backup switch 420) communicate with the network segment for the network interface card NIC1 (440), typically associated with incoming traffic to the interconnected proxy cache 450. The cache is organized as a group of cooperating proxy cache servers 452, 454 and 456. Over a second network segment (NIC2), the servers communicate with a content server farm 470 sized and arranged to handle the expected traffic and volume of requested content. Individual servers 472, 474, 476, 478 and 479 are shown in this example as part of the server farm 470. Each server handles a certain volume of overall file storage, transferring specific files to the cache when requested. Requested files reside in the cache for a given period that varies based upon a variety of parameters that are generally accepted in the art. Note that interfaces NIC1 and NIC2 need not be exclusive of each other. Accordingly, NIC1 and NIC2 can be the same network interface. If so, NIC1 and NIC2 would share the same network segment in communicating with both the Internet cloud 150 and server farm 470. The cache servers are further interconnected by the inventive third network segment (NIC3), over which requests can be referred between the caches.
servers - With further reference to FIG. 5 the reassignment/
redirection procedure 500 is described. In order to accomplish the reassignment of a content request from the switch-selected cache server to the single server assigned to the particular content, thecache servers resident directory - If, however, a file is greater than the cutoff size, it may still be vended from the initially requested cache. If a file is being cached and it's directory entry is still alive (e.g. a partition time to live has not expired—decision step507), then performance considerations may favor completing the fill to the requested cache, and thereby vending the file directly from the requested cache (step 506).
- If the file is larger than the cutoff and not otherwise vended from the originally requested cache (e.g., step 507), its URL is subjected to the cache's internal hashing function (step 508). Any acceptable hashing function can be employed, including a conventional MD-2, MD-3, MD-4 or even a checksum-based hash.
- In this embodiment, the hashed URL value (URLHASH) is then subjected to a modulo arithmetic function (step 510) by the cache, defined by the expression (URLHASH)MOD(#CACHESERVERS). This formula advantageously returns an integer in the range 0 through (#CACHESERVERS - 1), assuring that exactly one of the caches is always designated by the result. The exact technique for reassigning cache servers is highly variable.
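The hash-and-modulo selection of steps 508 and 510 can be sketched as follows. The `partition_member` helper is an illustrative assumption, and MD5 stands in for the MD-2/MD-4 style digests or checksum hashes the text mentions; any stable hash would serve.

```python
import hashlib

def partition_member(url, num_cache_servers):
    """Map a requested URL to the index of the cache server that should
    own (store and fill) the corresponding object."""
    # Hash the URL (step 508); MD5 is an illustrative stand-in.
    digest = hashlib.md5(url.encode("utf-8")).digest()
    urlhash = int.from_bytes(digest, "big")
    # (URLHASH) MOD (#CACHESERVERS) (step 510) always lands in 0..N-1,
    # so exactly one cache in the grouping is designated for this URL.
    return urlhash % num_cache_servers

# Every cache in the group computes the same owner for the same URL:
owner = partition_member("http://example.com/movie.mpg", 3)
```

Because the function is deterministic, any cache receiving the request independently computes the same partition member, with no coordination through the L4 switch.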
- The request is then referred or forwarded to the reassigned cache server once the switch-designated cache consults its directory for the appropriate routing information. The request is referred over the NIC3 network segment as a standard HTTP GET request (step 512). The request is received by the reassigned cache and then filled by this cache. This entails the transfer of the file using the tunneling mechanism or another technique back to the switch-designated cache, and then the vending of the content out to the client over the Internet cloud 150 (step 514).
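The referral step above can be sketched in miniature: the switch-designated cache re-issues the client's request to the reassigned cache as a standard HTTP GET and then vends the returned bytes onward. Here an in-process HTTP server stands in for the reassigned cache; the handler, payload, and loopback addressing are assumptions for the example, not the patent's mechanism.

```python
import http.server
import threading
import urllib.request

class ReassignedCacheHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"movie-bytes"            # the partitioned object's content
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):        # keep the example quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ReassignedCacheHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The switch-designated cache refers the request over the inter-cache
# segment (NIC3 in the text) and receives the filled content back.
url = "http://127.0.0.1:%d/movie.mpg" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    content = resp.read()                # then vended on to the client
server.shutdown()
```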
- Note that the reassigned cache server may receive a notification alerting it that a forwarded request is to be received, so that the forwarding is expected at the reassigned cache. The reassigned cache can be configured to reject a referred request as an unallowed external request when it is not recognized as part of the defined redirection system; in other words, such an external request is most likely unauthorized or improperly made.
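This expect-then-accept handshake can be sketched as a small registry of announced forwards. The class name, request-ID scheme, and out-of-band announcement are illustrative assumptions; the point is only that an unannounced referral is rejected as an external request.

```python
# Sketch of the "expected forward" handshake: the switch-designated
# cache announces a forthcoming forward, and the reassigned cache
# rejects referred requests it was never told about.
class ReassignedCache:
    def __init__(self):
        self._expected = set()    # request IDs announced by peer caches

    def expect_forward(self, request_id):
        """Called (out of band) by the switch-designated cache."""
        self._expected.add(request_id)

    def accept_forward(self, request_id):
        """Accept only forwards that are part of the redirection system."""
        if request_id in self._expected:
            self._expected.discard(request_id)  # consume the announcement
            return True
        return False              # unallowed external request

cache = ReassignedCache()
cache.expect_forward("req-42")
ok = cache.accept_forward("req-42")        # announced: accepted
replay = cache.accept_forward("req-42")    # replay: rejected
stranger = cache.accept_forward("req-99")  # never announced: rejected
```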
- The L4 switch operates without any knowledge of the cache-based reassignment described above. The cache-based address partitioning technique provides a substantial increase in efficiency and concomitant reduction in cache congestion, as all requests for a given file remain on a designated cache regardless of the number of requests and their timing. Even where the connection must be torn down and reestablished, as in a redirection of the client, the increased resources used to do so are offset significantly by the reduction in underlying cache congestion.
- Note that it is contemplated that, where there are significant increases in server numbers and associated traffic, the teachings herein can be adapted so that a given file is cached on a plurality of servers rather than one. The number of servers is preset, or can be dynamically altered. Redirection or tunneling of a file request from a switch-designated server to one of the plurality of specified servers, tasked to cache the particular file, can be made according to a variety of techniques.
- The following is a generalized pseudo-code description of the address partitioning procedure according to an embodiment of this invention:
If the requested object is in the cache and the Partition Time To Live (TTL) (1) indicates that the object should still be partitioned and the TTL indicates that the object is still fresh {
    If the Cache Object Store directory entry indicates that this is a partitioned object {
        If the object is not oversized (2) {
            Delete the directory entry
            Fill and vend the object from this cache
        } Else {
            Get the IP address of the partition member from the directory
            If the IP address selected is the same as this box (3) {
                Fill and vend the object from this cache
            } Else {
                If the partition member is still up (4) {
                    If not allowing a direct connect (5) {
                        Forward the request to the appropriate cache and tunnel the response
                    } Else {
                        Send an indication to the appropriate cache to expect a client connection (6)
                        Redirect the client to the appropriate cache
                    }
                } Else {
                    Select the IP address of the partition member to handle the oversized object (7)
                    Mark the directory entry to indicate that this is a partitioned object
                    Mark the directory entry with the Partition TTL
                    Store the partition member IP address in the directory
                    If the IP address selected is the same as this box {
                        Fill and vend the object from this cache
                    } Else {
                        If not allowing a direct connect {
                            Forward the request to the appropriate cache and tunnel the response
                        } Else {
                            Send an indication to the appropriate cache to expect a connection from the client
                            Redirect the client to the appropriate cache
                        }
                    }
                }
            }
        }
    } Else {
        Vend the requested object from this cache (8)
    }
} Else {
    Start the fill from the origin server (9)
    If the content size in the header or the size of the proceeding fill (10) indicates that the object is oversized {
        Select the IP address of the partition member to handle the oversized object (11)
        Mark the directory entry to indicate that this is a partitioned object
        Mark the directory entry with the Partition TTL
        Store the partition member IP address in the directory
        If the IP address selected is the same as this box {
            Continue the fill of the oversized object and vend the object (12)
        } Else {
            Abort the fill
            If not allowing a direct connect {
                Forward the request to the appropriate cache and tunnel the response
            } Else {
                Send an indication to the appropriate cache to expect a connection from the client
                Redirect the client to the appropriate cache
            }
        }
    } Else {
        Continue the fill and vend the object
    }
}
- The Cache Object Store directory is described in the above-incorporated CACHE OBJECT STORE patent application. With reference to the above listing, the following comments apply to the steps numbered in parentheses:
- 1. A TTL value is needed to determine how long an object should be considered fresh while being partitioned. This provides a mechanism to check whether the object should be moved to another partition member before the object actually becomes stale.
- 2. In this case the object was previously oversized but no longer meets that criterion (e.g., the administrator changed the oversized-object threshold).
- 3. This box is the partition member that should fill the request.
- 4. This will be determined via the membership services provided by the current PCC mechanism.
- 5. The administrator has determined that the IP addresses of the cache boxes are not to be made available for a direct connection from an outside client.
- 6. Direct connections by outside clients are allowed. This function provides advance warning to the partition member that will receive the request that the request is coming. This allows the partition member to deny any direct connections that are not expected.
- 7. Since the previous partition member is not available, we must select a new one.
- 8. There is no reason to force cache partitioning if the object is already in the cache. We wait until the cached object becomes stale or is replaced because of cache contention. This is an issue only when the cache partition is starting or is reconfiguring after a partition member leaves or joins. In the most common case, however, the object will already be in the cache of the partition member assigned to handle the oversized object.
- 9. We need to discover the size of the object.
- 10. Sometimes the server does not send the size header; in that case we continue the fill until we see that it is a large object, at which point this logic kicks in.
- 11. This will be done by hashing the URL and taking the modulo based on the number of IP addresses handling the cache partitioning (this should effectively partition the address space of oversized objects among the partition members). We can use the “PCC membership services” to maintain a list of active and alive partition members (using the existing heart-beat, maintenance beat interval, failure-mode beat interval, etc. as described in the above-incorporated PROXY CACHE CLUSTER Patent Application).
- If the membership number changes (thus changing the modulo), a new member will be chosen to handle the redefined partition. When the lost member returns, its cache will still be valid (except for TTL expiration), and the objects filled by other members will eventually expire and leave the cache for lack of requests.
- 12. This is the default behavior of the partition member that should handle the oversized object.
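The modulo-hash selection described in comment 11 can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: the function name, the MD5 digest choice, and the example IP addresses are assumptions; the description specifies only hashing the URL and taking the modulo based on the number of live partition members.

```python
import hashlib


def select_partition_member(url: str, members: list[str]) -> str:
    """Pick the cluster member responsible for an oversized object.

    Hashes the request URL and takes the modulo of the number of
    live partition members, so the address space of oversized
    objects is spread roughly evenly across the cluster.
    """
    if not members:
        raise ValueError("no live partition members")
    # Any stable hash works; MD5 is used here purely for illustration.
    digest = hashlib.md5(url.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(members)
    return members[index]


# Example: three live members reported by the membership service.
members = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
owner = select_partition_member("http://example.com/big.iso", members)
assert owner in members

# If a member leaves, the modulo changes and some objects are
# re-assigned, matching the reconfiguration behavior noted above.
survivors = members[:2]
new_owner = select_partition_member("http://example.com/big.iso", survivors)
assert new_owner in survivors
```

Because the hash is computed over the URL alone, every cluster member that runs this selection against the same live-member list picks the same owner, which is what lets any box that receives the request forward or redirect it consistently.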
- The foregoing has been a detailed description of a preferred embodiment of the invention. Various modifications and additions can be made without departing from the spirit and scope thereof. For example, the clustering of PMMs can vary from that depicted. The organization of the internetwork components can also vary from that shown, as can the nature of the components. In addition, while an L4 switch is used to accomplish load-balancing, it is expressly contemplated that any acceptable mechanism for achieving load-balance between caches can be employed according to this invention, and the term "load-balancing mechanism" should be taken broadly to include any such mechanism, including an L4 switch, Layer 3 (L3) switch or other network switch. Finally, it is expressly contemplated that any of the operations described herein can be implemented in hardware, software (in the form of a computer-readable medium comprising program instructions) or a combination of hardware and software. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of the invention.
Claims (19)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/853,290 US20020184327A1 (en) | 2001-05-11 | 2001-05-11 | System and method for partitioning address space in a proxy cache server cluster |
US09/877,918 US6862606B1 (en) | 2001-05-11 | 2001-06-07 | System and method for partitioning address space in a proxy cache server cluster |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/853,290 US20020184327A1 (en) | 2001-05-11 | 2001-05-11 | System and method for partitioning address space in a proxy cache server cluster |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/877,918 Continuation US6862606B1 (en) | 2001-05-11 | 2001-06-07 | System and method for partitioning address space in a proxy cache server cluster |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020184327A1 true US20020184327A1 (en) | 2002-12-05 |
Family
ID=25315625
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/853,290 Abandoned US20020184327A1 (en) | 2001-05-11 | 2001-05-11 | System and method for partitioning address space in a proxy cache server cluster |
US09/877,918 Expired - Lifetime US6862606B1 (en) | 2001-05-11 | 2001-06-07 | System and method for partitioning address space in a proxy cache server cluster |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/877,918 Expired - Lifetime US6862606B1 (en) | 2001-05-11 | 2001-06-07 | System and method for partitioning address space in a proxy cache server cluster |
Country Status (1)
Country | Link |
---|---|
US (2) | US20020184327A1 (en) |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030005116A1 (en) * | 2001-06-28 | 2003-01-02 | Chase Jeffrey Scott | Method, system and computer program product for hierarchical load balancing |
US20030037120A1 (en) * | 2001-08-17 | 2003-02-20 | Doug Rollins | Network computer providing mass storage, broadband access, and other enhanced functionality |
US20030154260A1 (en) * | 2002-02-13 | 2003-08-14 | Mebane Cummins Aiken | Computer-implemented data messaging ring |
US20040199572A1 (en) * | 2003-03-06 | 2004-10-07 | Hunt Galen C. | Architecture for distributed computing system and automated design, deployment, and management of distributed applications |
US20040205179A1 (en) * | 2003-03-06 | 2004-10-14 | Hunt Galen C. | Integrating design, deployment, and management phases for systems |
US20050193203A1 (en) * | 2004-02-27 | 2005-09-01 | Microsoft Corporation | Security associations for devices |
US7613300B2 (en) * | 2003-09-26 | 2009-11-03 | Genesis Microchip Inc. | Content-protected digital link over a single signal line |
US7689676B2 (en) | 2003-03-06 | 2010-03-30 | Microsoft Corporation | Model-based policy application |
US20100106934A1 (en) * | 2008-10-24 | 2010-04-29 | Microsoft Corporation | Partition management in a partitioned, scalable, and available structured storage |
US7711121B2 (en) | 2000-10-24 | 2010-05-04 | Microsoft Corporation | System and method for distributed management of shared computers |
US7733915B2 (en) | 2003-05-01 | 2010-06-08 | Genesis Microchip Inc. | Minimizing buffer requirements in a digital video system |
US7797147B2 (en) | 2005-04-15 | 2010-09-14 | Microsoft Corporation | Model-based system monitoring |
US7800623B2 (en) | 2003-09-18 | 2010-09-21 | Genesis Microchip Inc. | Bypassing pixel clock generation and CRTC circuits in a graphics controller chip |
US7802144B2 (en) | 2005-04-15 | 2010-09-21 | Microsoft Corporation | Model-based system monitoring |
US20100289812A1 (en) * | 2009-05-13 | 2010-11-18 | Stmicroelectronics, Inc. | Device, system, and method for wide gamut color space support |
US7839860B2 (en) | 2003-05-01 | 2010-11-23 | Genesis Microchip Inc. | Packet based video display interface |
US7941309B2 (en) | 2005-11-02 | 2011-05-10 | Microsoft Corporation | Modeling IT operations/policies |
US8059673B2 (en) | 2003-05-01 | 2011-11-15 | Genesis Microchip Inc. | Dynamic resource re-allocation in a packet based video display interface |
US8068485B2 (en) | 2003-05-01 | 2011-11-29 | Genesis Microchip Inc. | Multimedia interface |
US8156238B2 (en) | 2009-05-13 | 2012-04-10 | Stmicroelectronics, Inc. | Wireless multimedia transport method and apparatus |
US8204076B2 (en) | 2003-05-01 | 2012-06-19 | Genesis Microchip Inc. | Compact packet based multimedia interface |
EP2495937A1 (en) * | 2011-03-01 | 2012-09-05 | Telefonaktiebolaget LM Ericsson (publ) | Tunnel gateway managed caching architecture |
US8291207B2 (en) | 2009-05-18 | 2012-10-16 | Stmicroelectronics, Inc. | Frequency and symbol locking using signal generated clock frequency and symbol identification |
US8370554B2 (en) | 2009-05-18 | 2013-02-05 | Stmicroelectronics, Inc. | Operation of video source and sink with hot plug detection not asserted |
US8385544B2 (en) | 2003-09-26 | 2013-02-26 | Genesis Microchip, Inc. | Packet based high definition high-bandwidth digital content protection |
US8429440B2 (en) | 2009-05-13 | 2013-04-23 | Stmicroelectronics, Inc. | Flat panel display driver method and system |
US8468285B2 (en) | 2009-05-18 | 2013-06-18 | Stmicroelectronics, Inc. | Operation of video source and sink with toggled hot plug detection |
US8489728B2 (en) | 2005-04-15 | 2013-07-16 | Microsoft Corporation | Model-based system monitoring |
US8549513B2 (en) | 2005-06-29 | 2013-10-01 | Microsoft Corporation | Model-based virtual system provisioning |
US8582452B2 (en) | 2009-05-18 | 2013-11-12 | Stmicroelectronics, Inc. | Data link configuration by a receiver in the absence of link training data |
CN103442000A (en) * | 2013-08-22 | 2013-12-11 | 北京星网锐捷网络技术有限公司 | Method and device for replacing WEB caches and HTTP proxy server |
US8671234B2 (en) | 2010-05-27 | 2014-03-11 | Stmicroelectronics, Inc. | Level shifting cable adaptor and chip system for use with dual-mode multi-media device |
US8799432B1 (en) * | 2006-10-31 | 2014-08-05 | Hewlett-Packard Development Company, L.P. | Managed computer network caching requested and related data from remote computers |
US8860888B2 (en) | 2009-05-13 | 2014-10-14 | Stmicroelectronics, Inc. | Method and apparatus for power saving during video blanking periods |
US8886796B2 (en) * | 2008-10-24 | 2014-11-11 | Microsoft Corporation | Load balancing when replicating account data |
US20150039674A1 (en) * | 2013-07-31 | 2015-02-05 | Citrix Systems, Inc. | Systems and methods for performing response based cache redirection |
US9223710B2 (en) | 2013-03-16 | 2015-12-29 | Intel Corporation | Read-write partitioning of cache memory |
US20160119420A1 (en) * | 2013-05-02 | 2016-04-28 | International Business Machines Corporation | Replication of content to one or more servers |
CN106713506A (en) * | 2017-02-22 | 2017-05-24 | 郑州云海信息技术有限公司 | Data acquisition method and data acquisition system |
US20180077121A1 (en) * | 2016-09-14 | 2018-03-15 | Wanpath, LLC | Reverse proxy for accessing local network over the internet |
US10223431B2 (en) * | 2013-01-31 | 2019-03-05 | Facebook, Inc. | Data stream splitting for low-latency data access |
CN109995855A (en) * | 2019-03-20 | 2019-07-09 | 北京奇艺世纪科技有限公司 | A kind of data capture method, device and terminal |
US20190260847A1 (en) * | 2015-12-31 | 2019-08-22 | International Business Machines Corporation | Caching for data store clients using expiration times |
CN114466004A (en) * | 2022-03-24 | 2022-05-10 | 成都新希望金融信息有限公司 | File transmission method, system, electronic equipment and storage medium |
Families Citing this family (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7996517B2 (en) * | 2002-01-23 | 2011-08-09 | Novell, Inc. | Transparent network connection takeover |
US7886298B2 (en) * | 2002-03-26 | 2011-02-08 | Hewlett-Packard Development Company, L.P. | Data transfer protocol for data replication between multiple pairs of storage controllers on a san fabric |
US8117328B2 (en) * | 2002-06-25 | 2012-02-14 | Microsoft Corporation | System and method for automatically recovering from failed network connections in streaming media scenarios |
EP1561159A4 (en) * | 2002-11-12 | 2007-08-29 | Zetera Corp | Electrical devices with improved communication |
US7649880B2 (en) * | 2002-11-12 | 2010-01-19 | Mark Adams | Systems and methods for deriving storage area commands |
US7742473B2 (en) * | 2002-11-12 | 2010-06-22 | Mark Adams | Accelerator module |
US7170890B2 (en) | 2002-12-16 | 2007-01-30 | Zetera Corporation | Electrical devices with improved communication |
US8005918B2 (en) * | 2002-11-12 | 2011-08-23 | Rateze Remote Mgmt. L.L.C. | Data storage devices having IP capable partitions |
US20040218599A1 (en) * | 2003-05-01 | 2004-11-04 | Genesis Microchip Inc. | Packet based video display interface and methods of use thereof |
US20040221315A1 (en) * | 2003-05-01 | 2004-11-04 | Genesis Microchip Inc. | Video interface arranged to provide pixel data independent of a link character clock |
US20040221312A1 (en) * | 2003-05-01 | 2004-11-04 | Genesis Microchip Inc. | Techniques for reducing multimedia data packet overhead |
US20040218624A1 (en) * | 2003-05-01 | 2004-11-04 | Genesis Microchip Inc. | Packet based closed loop video display interface with periodic status checks |
US7620062B2 (en) * | 2003-05-01 | 2009-11-17 | Genesis Microchips Inc. | Method of real time optimizing multimedia packet transmission rate |
US7405719B2 (en) * | 2003-05-01 | 2008-07-29 | Genesis Microchip Inc. | Using packet transfer for driving LCD panel driver electronics |
US20050038890A1 (en) * | 2003-08-11 | 2005-02-17 | Hitachi., Ltd. | Load distribution method and client-server system |
US8782654B2 (en) | 2004-03-13 | 2014-07-15 | Adaptive Computing Enterprises, Inc. | Co-allocating a reservation spanning different compute resources types |
US20050271047A1 (en) * | 2004-06-02 | 2005-12-08 | Huonder Russell J | Method and system for managing multiple overlapping address domains |
US20070266388A1 (en) | 2004-06-18 | 2007-11-15 | Cluster Resources, Inc. | System and method for providing advanced reservations in a compute environment |
US8176490B1 (en) | 2004-08-20 | 2012-05-08 | Adaptive Computing Enterprises, Inc. | System and method of interfacing a workload manager and scheduler with an identity manager |
CA2586763C (en) | 2004-11-08 | 2013-12-17 | Cluster Resources, Inc. | System and method of providing system jobs within a compute environment |
US8863143B2 (en) | 2006-03-16 | 2014-10-14 | Adaptive Computing Enterprises, Inc. | System and method for managing a hybrid compute environment |
US7702850B2 (en) * | 2005-03-14 | 2010-04-20 | Thomas Earl Ludwig | Topology independent storage arrays and methods |
CN100342352C (en) * | 2005-03-14 | 2007-10-10 | 北京邦诺存储科技有限公司 | Expandable high speed storage network buffer system |
US9231886B2 (en) | 2005-03-16 | 2016-01-05 | Adaptive Computing Enterprises, Inc. | Simple integration of an on-demand compute environment |
EP2348409B1 (en) * | 2005-03-16 | 2017-10-04 | III Holdings 12, LLC | Automatic workload transfer to an on-demand center |
US9015324B2 (en) | 2005-03-16 | 2015-04-21 | Adaptive Computing Enterprises, Inc. | System and method of brokering cloud computing resources |
EP3203374B1 (en) | 2005-04-07 | 2021-11-24 | III Holdings 12, LLC | On-demand access to compute resources |
US8782120B2 (en) | 2005-04-07 | 2014-07-15 | Adaptive Computing Enterprises, Inc. | Elastic management of compute resources between a web server and an on-demand compute environment |
US7620981B2 (en) | 2005-05-26 | 2009-11-17 | Charles William Frank | Virtual devices and virtual bus tunnels, modules and methods |
US7743214B2 (en) | 2005-08-16 | 2010-06-22 | Mark Adams | Generating storage system commands |
US8819092B2 (en) | 2005-08-16 | 2014-08-26 | Rateze Remote Mgmt. L.L.C. | Disaggregated resources and access methods |
US9270532B2 (en) * | 2005-10-06 | 2016-02-23 | Rateze Remote Mgmt. L.L.C. | Resource command messages and methods |
US7512707B1 (en) * | 2005-11-03 | 2009-03-31 | Adobe Systems Incorporated | Load balancing of server clusters |
US7924881B2 (en) * | 2006-04-10 | 2011-04-12 | Rateze Remote Mgmt. L.L.C. | Datagram identifier management |
US8041773B2 (en) | 2007-09-24 | 2011-10-18 | The Research Foundation Of State University Of New York | Automatic clustering for self-organizing grids |
US20090094658A1 (en) * | 2007-10-09 | 2009-04-09 | Genesis Microchip Inc. | Methods and systems for driving multiple displays |
US8150970B1 (en) * | 2007-10-12 | 2012-04-03 | Adobe Systems Incorporated | Work load distribution among server processes |
US8516293B2 (en) * | 2009-11-05 | 2013-08-20 | Novell, Inc. | System and method for implementing a cloud computer |
US9658891B2 (en) * | 2009-03-13 | 2017-05-23 | Micro Focus Software Inc. | System and method for providing key-encrypted storage in a cloud computing environment |
US9614855B2 (en) * | 2009-11-05 | 2017-04-04 | Micro Focus Software Inc. | System and method for implementing a secure web application entitlement service |
US9288264B2 (en) * | 2008-08-25 | 2016-03-15 | Novell, Inc. | System and method for implementing a cloud workflow |
US8286232B2 (en) * | 2009-03-13 | 2012-10-09 | Novell, Inc. | System and method for transparent cloud access |
US8364842B2 (en) * | 2009-03-13 | 2013-01-29 | Novell, Inc. | System and method for reduced cloud IP address utilization |
US8065395B2 (en) * | 2009-03-13 | 2011-11-22 | Novell, Inc. | System and method for queuing to a cloud via a queuing proxy |
US8429716B2 (en) * | 2009-11-05 | 2013-04-23 | Novell, Inc. | System and method for transparent access and management of user accessible cloud assets |
US9742864B2 (en) * | 2008-08-25 | 2017-08-22 | Novell, Inc. | System and method for implementing cloud mitigation and operations controllers |
US9894093B2 (en) | 2009-04-21 | 2018-02-13 | Bandura, Llc | Structuring data and pre-compiled exception list engines and internet protocol threat prevention |
US8468220B2 (en) * | 2009-04-21 | 2013-06-18 | Techguard Security Llc | Methods of structuring data, pre-compiled exception list engines, and network appliances |
US10877695B2 (en) | 2009-10-30 | 2020-12-29 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US10684989B2 (en) * | 2011-06-15 | 2020-06-16 | Microsoft Technology Licensing, Llc | Two-phase eviction process for file handle caches |
KR20130087810A (en) * | 2012-01-30 | 2013-08-07 | 삼성전자주식회사 | Method and apparatus for cooperative caching in mobile communication system |
US11093403B2 (en) | 2018-12-04 | 2021-08-17 | Vmware, Inc. | System and methods of a self-tuning cache sizing system in a cache partitioning system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6016310A (en) * | 1997-06-30 | 2000-01-18 | Sun Microsystems, Inc. | Trunking support in a high performance network device |
US6438652B1 (en) * | 1998-10-09 | 2002-08-20 | International Business Machines Corporation | Load balancing cooperating cache servers by shifting forwarded request |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5642511A (en) | 1994-12-16 | 1997-06-24 | International Business Machines Corporation | System and method for providing a visual application builder framework |
US5784566A (en) | 1996-01-11 | 1998-07-21 | Oracle Corporation | System and method for negotiating security services and algorithms for communication across a computer network |
US6018619A (en) | 1996-05-24 | 2000-01-25 | Microsoft Corporation | Method, system and apparatus for client-side usage tracking of information server systems |
US6185625B1 (en) | 1996-12-20 | 2001-02-06 | Intel Corporation | Scaling proxy server sending to the client a graphical user interface for establishing object encoding preferences after receiving the client's request for the object |
US5961593A (en) | 1997-01-22 | 1999-10-05 | Lucent Technologies, Inc. | System and method for providing anonymous personalized browsing by a proxy system in a network |
US6151688A (en) | 1997-02-21 | 2000-11-21 | Novell, Inc. | Resource management in a clustered computer system |
US5924116A (en) | 1997-04-02 | 1999-07-13 | International Business Machines Corporation | Collaborative caching of a requested object by a lower level node as a function of the caching status of the object at a higher level node |
US5964891A (en) | 1997-08-27 | 1999-10-12 | Hewlett-Packard Company | Diagnostic system for a distributed data access networked system |
US6014667A (en) | 1997-10-01 | 2000-01-11 | Novell, Inc. | System and method for caching identification and location information in a computer network |
US5999734A (en) | 1997-10-21 | 1999-12-07 | Ftl Systems, Inc. | Compiler-oriented apparatus for parallel compilation, simulation and execution of computer programs and hardware models |
US6185598B1 (en) | 1998-02-10 | 2001-02-06 | Digital Island, Inc. | Optimized network resource location |
US6112228A (en) | 1998-02-13 | 2000-08-29 | Novell, Inc. | Client inherited functionally derived from a proxy topology where each proxy is independently configured |
2001
- 2001-05-11 US US09/853,290 patent/US20020184327A1/en not_active Abandoned
- 2001-06-07 US US09/877,918 patent/US6862606B1/en not_active Expired - Lifetime
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6016310A (en) * | 1997-06-30 | 2000-01-18 | Sun Microsystems, Inc. | Trunking support in a high performance network device |
US6438652B1 (en) * | 1998-10-09 | 2002-08-20 | International Business Machines Corporation | Load balancing cooperating cache servers by shifting forwarded request |
Cited By (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7711121B2 (en) | 2000-10-24 | 2010-05-04 | Microsoft Corporation | System and method for distributed management of shared computers |
US8041814B2 (en) * | 2001-06-28 | 2011-10-18 | International Business Machines Corporation | Method, system and computer program product for hierarchical load balancing |
US20030005116A1 (en) * | 2001-06-28 | 2003-01-02 | Chase Jeffrey Scott | Method, system and computer program product for hierarchical load balancing |
US8171139B2 (en) * | 2001-06-28 | 2012-05-01 | International Business Machines Corporation | Hierarchical load balancing |
US20030037120A1 (en) * | 2001-08-17 | 2003-02-20 | Doug Rollins | Network computer providing mass storage, broadband access, and other enhanced functionality |
US7461139B2 (en) * | 2001-08-17 | 2008-12-02 | Micron Technology, Inc. | Network computer providing mass storage, broadband access, and other enhanced functionality |
US20030154260A1 (en) * | 2002-02-13 | 2003-08-14 | Mebane Cummins Aiken | Computer-implemented data messaging ring |
US7191224B2 (en) * | 2002-02-13 | 2007-03-13 | Sas Institute Inc. | Computer-implemented data messaging ring |
US7890543B2 (en) | 2003-03-06 | 2011-02-15 | Microsoft Corporation | Architecture for distributed computing system and automated design, deployment, and management of distributed applications |
US20040199572A1 (en) * | 2003-03-06 | 2004-10-07 | Hunt Galen C. | Architecture for distributed computing system and automated design, deployment, and management of distributed applications |
US20040205179A1 (en) * | 2003-03-06 | 2004-10-14 | Hunt Galen C. | Integrating design, deployment, and management phases for systems |
US7890951B2 (en) | 2003-03-06 | 2011-02-15 | Microsoft Corporation | Model-based provisioning of test environments |
US7886041B2 (en) | 2003-03-06 | 2011-02-08 | Microsoft Corporation | Design time validation of systems |
US7689676B2 (en) | 2003-03-06 | 2010-03-30 | Microsoft Corporation | Model-based policy application |
US8122106B2 (en) | 2003-03-06 | 2012-02-21 | Microsoft Corporation | Integrating design, deployment, and management phases for systems |
US7839860B2 (en) | 2003-05-01 | 2010-11-23 | Genesis Microchip Inc. | Packet based video display interface |
US8068485B2 (en) | 2003-05-01 | 2011-11-29 | Genesis Microchip Inc. | Multimedia interface |
US7733915B2 (en) | 2003-05-01 | 2010-06-08 | Genesis Microchip Inc. | Minimizing buffer requirements in a digital video system |
US8059673B2 (en) | 2003-05-01 | 2011-11-15 | Genesis Microchip Inc. | Dynamic resource re-allocation in a packet based video display interface |
US8204076B2 (en) | 2003-05-01 | 2012-06-19 | Genesis Microchip Inc. | Compact packet based multimedia interface |
US7800623B2 (en) | 2003-09-18 | 2010-09-21 | Genesis Microchip Inc. | Bypassing pixel clock generation and CRTC circuits in a graphics controller chip |
US8385544B2 (en) | 2003-09-26 | 2013-02-26 | Genesis Microchip, Inc. | Packet based high definition high-bandwidth digital content protection |
US7613300B2 (en) * | 2003-09-26 | 2009-11-03 | Genesis Microchip Inc. | Content-protected digital link over a single signal line |
US7778422B2 (en) | 2004-02-27 | 2010-08-17 | Microsoft Corporation | Security associations for devices |
US20050193203A1 (en) * | 2004-02-27 | 2005-09-01 | Microsoft Corporation | Security associations for devices |
US7802144B2 (en) | 2005-04-15 | 2010-09-21 | Microsoft Corporation | Model-based system monitoring |
US7797147B2 (en) | 2005-04-15 | 2010-09-14 | Microsoft Corporation | Model-based system monitoring |
US8489728B2 (en) | 2005-04-15 | 2013-07-16 | Microsoft Corporation | Model-based system monitoring |
US10540159B2 (en) | 2005-06-29 | 2020-01-21 | Microsoft Technology Licensing, Llc | Model-based virtual system provisioning |
US9317270B2 (en) | 2005-06-29 | 2016-04-19 | Microsoft Technology Licensing, Llc | Model-based virtual system provisioning |
US8549513B2 (en) | 2005-06-29 | 2013-10-01 | Microsoft Corporation | Model-based virtual system provisioning |
US9811368B2 (en) | 2005-06-29 | 2017-11-07 | Microsoft Technology Licensing, Llc | Model-based virtual system provisioning |
US7941309B2 (en) | 2005-11-02 | 2011-05-10 | Microsoft Corporation | Modeling IT operations/policies |
US8799432B1 (en) * | 2006-10-31 | 2014-08-05 | Hewlett-Packard Development Company, L.P. | Managed computer network caching requested and related data from remote computers |
US9996572B2 (en) | 2008-10-24 | 2018-06-12 | Microsoft Technology Licensing, Llc | Partition management in a partitioned, scalable, and available structured storage |
US20100106934A1 (en) * | 2008-10-24 | 2010-04-29 | Microsoft Corporation | Partition management in a partitioned, scalable, and available structured storage |
US8886796B2 (en) * | 2008-10-24 | 2014-11-11 | Microsoft Corporation | Load balancing when replicating account data |
US20100289812A1 (en) * | 2009-05-13 | 2010-11-18 | Stmicroelectronics, Inc. | Device, system, and method for wide gamut color space support |
US8429440B2 (en) | 2009-05-13 | 2013-04-23 | Stmicroelectronics, Inc. | Flat panel display driver method and system |
US8156238B2 (en) | 2009-05-13 | 2012-04-10 | Stmicroelectronics, Inc. | Wireless multimedia transport method and apparatus |
US8860888B2 (en) | 2009-05-13 | 2014-10-14 | Stmicroelectronics, Inc. | Method and apparatus for power saving during video blanking periods |
US8760461B2 (en) | 2009-05-13 | 2014-06-24 | Stmicroelectronics, Inc. | Device, system, and method for wide gamut color space support |
US8788716B2 (en) | 2009-05-13 | 2014-07-22 | Stmicroelectronics, Inc. | Wireless multimedia transport method and apparatus |
US8291207B2 (en) | 2009-05-18 | 2012-10-16 | Stmicroelectronics, Inc. | Frequency and symbol locking using signal generated clock frequency and symbol identification |
US8468285B2 (en) | 2009-05-18 | 2013-06-18 | Stmicroelectronics, Inc. | Operation of video source and sink with toggled hot plug detection |
US8582452B2 (en) | 2009-05-18 | 2013-11-12 | Stmicroelectronics, Inc. | Data link configuration by a receiver in the absence of link training data |
US8370554B2 (en) | 2009-05-18 | 2013-02-05 | Stmicroelectronics, Inc. | Operation of video source and sink with hot plug detection not asserted |
US8671234B2 (en) | 2010-05-27 | 2014-03-11 | Stmicroelectronics, Inc. | Level shifting cable adaptor and chip system for use with dual-mode multi-media device |
EP2495937A1 (en) * | 2011-03-01 | 2012-09-05 | Telefonaktiebolaget LM Ericsson (publ) | Tunnel gateway managed caching architecture |
US10223431B2 (en) * | 2013-01-31 | 2019-03-05 | Facebook, Inc. | Data stream splitting for low-latency data access |
US9223710B2 (en) | 2013-03-16 | 2015-12-29 | Intel Corporation | Read-write partitioning of cache memory |
US20160119420A1 (en) * | 2013-05-02 | 2016-04-28 | International Business Machines Corporation | Replication of content to one or more servers |
US10547676B2 (en) | 2013-05-02 | 2020-01-28 | International Business Machines Corporation | Replication of content to one or more servers |
US11388232B2 (en) | 2013-05-02 | 2022-07-12 | Kyndryl, Inc. | Replication of content to one or more servers |
US10554744B2 (en) * | 2013-05-02 | 2020-02-04 | International Business Machines Corporation | Replication of content to one or more servers |
US11627200B2 (en) | 2013-07-31 | 2023-04-11 | Citrix Systems, Inc. | Systems and methods for performing response based cache redirection |
US20150039674A1 (en) * | 2013-07-31 | 2015-02-05 | Citrix Systems, Inc. | Systems and methods for performing response based cache redirection |
US10951726B2 (en) * | 2013-07-31 | 2021-03-16 | Citrix Systems, Inc. | Systems and methods for performing response based cache redirection |
CN103442000A (en) * | 2013-08-22 | 2013-12-11 | 北京星网锐捷网络技术有限公司 | Method and device for replacing WEB caches and HTTP proxy server |
US20190260847A1 (en) * | 2015-12-31 | 2019-08-22 | International Business Machines Corporation | Caching for data store clients using expiration times |
US10715623B2 (en) * | 2015-12-31 | 2020-07-14 | International Business Machines Corporation | Caching for data store clients using expiration times |
US20180077121A1 (en) * | 2016-09-14 | 2018-03-15 | Wanpath, LLC | Reverse proxy for accessing local network over the internet |
US9985930B2 (en) * | 2016-09-14 | 2018-05-29 | Wanpath, LLC | Reverse proxy for accessing local network over the internet |
CN106713506A (en) * | 2017-02-22 | 2017-05-24 | 郑州云海信息技术有限公司 | Data acquisition method and data acquisition system |
CN109995855A (en) * | 2019-03-20 | 2019-07-09 | 北京奇艺世纪科技有限公司 | A kind of data capture method, device and terminal |
CN114466004A (en) * | 2022-03-24 | 2022-05-10 | 成都新希望金融信息有限公司 | File transmission method, system, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US6862606B1 (en) | 2005-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6862606B1 (en) | System and method for partitioning address space in a proxy cache server cluster | |
US10819826B2 (en) | System and method for implementing application functionality within a network infrastructure | |
US10858503B2 (en) | System and devices facilitating dynamic network link acceleration | |
US7076555B1 (en) | System and method for transparent takeover of TCP connections between servers | |
US7640298B2 (en) | Method and system for communicating an information packet through multiple router devices | |
USRE45009E1 (en) | Dynamic network link acceleration | |
US6470389B1 (en) | Hosting a network service on a cluster of servers using a single-address image | |
US7418522B2 (en) | Method and system for communicating an information packet through multiple networks | |
US8341290B2 (en) | Method and system for selecting a computing device for maintaining a client session in response to a request packet | |
US7421505B2 (en) | Method and system for executing protocol stack instructions to form a packet for causing a computing device to perform an operation | |
Yang et al. | Efficient support for content-based routing in Web server clusters | |
US7512686B2 (en) | Method and system for establishing a data structure of a connection with a client | |
US20010049741A1 (en) | Method and system for balancing load distribution on a wide area network | |
US20020059451A1 (en) | System and method for highly scalable high-speed content-based filtering and load balancing in interconnected fabrics | |
US7546369B2 (en) | Method and system for communicating a request packet in response to a state | |
Sit et al. | Socket cloning for cluster-based web servers | |
Wills et al. | N for the price of 1: bundling web objects for more efficient content delivery | |
US20020116532A1 (en) | Method and system for communicating an information packet and identifying a data structure | |
Zhang et al. | Creating Linux virtual servers | |
Dabek | A cooperative file system | |
US20020116605A1 (en) | Method and system for initiating execution of software in response to a state | |
Yang et al. | An effective mechanism for supporting content-based routing in scalable Web server clusters | |
Barrenechea | Transparent Distributed Web Caching Architecture | |
Sherman | Distributed web caching system with consistent hashing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NOVELL, INC., UTAH Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CARTER, STEPHEN R.;DAVIS, HOWARD ROLLIN;CHRISTENSEN, BRENT RAY;AND OTHERS;REEL/FRAME:011807/0750 Effective date: 20010425 Owner name: VOLERA, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL, INC.;REEL/FRAME:011805/0859 Effective date: 20010427 |
|
STCB | Information on status: application discontinuation |
Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION |
|
AS | Assignment |
Owner name: MICRO FOCUS SOFTWARE INC., DELAWARE Free format text: CHANGE OF NAME;ASSIGNOR:NOVELL, INC.;REEL/FRAME:040020/0703 Effective date: 20160718 |