US9628404B1 - Systems and methods for multi-tenancy management within a distributed database - Google Patents

Systems and methods for multi-tenancy management within a distributed database Download PDF

Info

Publication number
US9628404B1
US9628404B1 (U.S. application Ser. No. 14/612,912)
Authority
US
United States
Prior art keywords
customer
service
computer
available
allocated
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/612,912
Inventor
Christopher Goffinet
Peter Schuller
Boaz Avital
Armond Bigian
Spencer G. Fang
Anthony Asta
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Twitter Inc
Original Assignee
Twitter Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Twitter Inc filed Critical Twitter Inc
Priority to US14/612,912 priority Critical patent/US9628404B1/en
Assigned to TWITTER, INC. reassignment TWITTER, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ASTA, ANTHONY, AVITAL, BOAZ, BIGIAN, ARMOND, FANG, SPENCER G., GOFFINET, CHRISTOPHER, SCHULLER, PETER
Priority to US15/489,461 priority patent/US10289703B1/en
Application granted granted Critical
Publication of US9628404B1 publication Critical patent/US9628404B1/en
Priority to US16/410,952 priority patent/US10649963B1/en
Assigned to MORGAN STANLEY SENIOR FUNDING, INC. reassignment MORGAN STANLEY SENIOR FUNDING, INC. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TWITTER, INC.
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H04L 47/783 Distributed allocation of resources, e.g. bandwidth brokers
    • G06F 16/21 Design, administration or maintenance of databases
    • G06F 16/2308 Concurrency control
    • G06F 16/2358 Change logging, detection, and notification
    • G06F 16/2365 Ensuring data consistency and integrity
    • G06F 16/2379 Updates performed during online database operations; commit processing
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 17/30289
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L 41/5067 Customer-centric QoS measurements
    • H04L 47/805 QoS or priority aware
    • H04L 67/1097 Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • the present disclosure generally relates to multi-tenant distributed databases and, more particularly, to implementations for managing resource allocation within a multi-tenant distributed database.
  • Distributed database systems include a plurality of storage devices spread among a network of interconnected computers.
  • the distributed database systems typically have greater reliability and availability than parallel database systems, among other benefits.
  • Various internet services, for example social networking services, employ distributed database systems to manage the storage and retrieval of information.
  • the need to efficiently and accurately read and write data across the database system increases with a greater amount of information, a greater number of users, and stricter latency requirements.
  • a single tenant system includes an architecture in which each customer has their own software instance.
  • a multi-tenant system includes an architecture that enables multiple customers to use a single software instance.
  • FIG. 1A depicts a system capable of implementing a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 1B depicts a detailed representation of various components configured to manage a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 1C depicts a representation of various layers supported by a coordinator machine in accordance with some embodiments.
  • FIG. 2 depicts an example representation of entities and components associated with managing services of a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 3 depicts an example user interface associated with a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 4A depicts an example user interface for configuring services for operation on a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 4B depicts an example user interface for selecting resources of a multi-tenant distributed database in accordance with some embodiments.
  • FIGS. 5A-5D depict example user interfaces for initiating a service supported by a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 6 depicts a flow chart of an example method for configuring a consistency model for a service supported by a multi-tenant distributed database in accordance with some embodiments.
  • FIG. 7 depicts a hardware diagram of an electronic device in accordance with some embodiments.
  • a multi-tenant distributed database as well as various built-in systems and methods of managing access thereto are disclosed.
  • the distributed database is a multi-tenant system capable of concurrently serving multiple use cases of multiple customers according to various resource requirements, parameters, and other factors.
  • the systems and methods may dynamically reallocate resources within the distributed database in response to the detection of various service usage parameters.
  • the systems and methods may allocate respective portions of the distributed database resources for use by respective customers in operating respective services, whereby the portions may be allocated according to requirements of the respective services without having to build a dedicated system for the customer.
  • the actual usage of the services may result in an imbalanced resource allocation, whereby one service may request more resources than are allocated and another service may underutilize its allocation.
  • the systems and methods may identify an available allocated or unallocated resource portion capable of handling the overage, and may dynamically adjust any resource allocation(s) accordingly.
  • the systems and methods improve existing multi-tenant distributed database frameworks by negating the need for complex resource reconfiguring.
  • the systems and methods enable seamless accommodation of overage requests, thus improving on the need for existing frameworks to outright deny overage requests.
  • FIG. 1A illustrates a general system architecture of a system 100 implementing a multi-tenant distributed database 105 .
  • the distributed database 105 may include multiple nodes 104 of storage devices or computer resources that are distributed across a plurality of physical computers, such as a network of interconnected computers.
  • the multiple nodes 104 may be virtually or physically separated, and may be configured to interface with one or more processing units such as one or more CPUs.
  • Each of the nodes 104 may store one or more replicas of one or more datasets, and may include one or more various types of storage devices (e.g., solid state drives (SSDs), platter storage such as hard disk drives, or other memory) and structures (e.g., SSTable, seadb, b-tree, or others).
  • a distributed database management system (DBMS) 103 may be configured to manage the distributed database 105 , whereby the DBMS 103 may be stored on a centralized computer within the system 100 .
  • the system 100 further includes a plurality of clients 110 configured to access the distributed database 105 and features thereof via one or more networks 102 .
  • the network 102 may be any type of wired or wireless LAN, WAN, or the like.
  • the network 102 may be the Internet, or various corporate intranets or extranets.
  • each of the plurality of clients 110 is a dedicated computer machine, workstation, or the like, including any combination of hardware and software components.
  • a user such as a developer, engineer, supervisor, or the like (generally, a “customer”) may interface with any of the plurality of clients 110 to access the distributed database 105 and configure various services to be supported thereon.
  • the plurality of clients 110 may also interface with the DBMS 103 .
  • FIG. 1B illustrates a system 150 having components capable of implementing the systems and methods of the present embodiments.
  • the system 150 includes the distributed database 105 storing a plurality of nodes 104 , as discussed with respect to FIG. 1A .
  • Each of the nodes 104 may store one or more replica representations 130 of one or more datasets.
  • the system 150 further includes a management system 125, which may serve as or be separate from the DBMS 103 as discussed with respect to FIG. 1A.
  • the management system 125 includes a plurality of coordinator machines 120 that may be distributed throughout various physical or virtual locations and may be configured to connect to one another.
  • Each of the coordinator machines 120 may manage various services associated with storing and managing datasets within the distributed database 105 .
  • each of the coordinator machines 120 may manage one or more services to identify appropriate replica representations 130 and interface with the identified replica representations 130 for dataset storage and management.
  • Customers may operate one or more of the clients 110 to interface with one or more of the coordinator machines 120 , where the particular coordinator machine 120 is selected based on availability or other factors.
  • FIG. 1C illustrates a more detailed representation of the coordinator machine 120 and various features that the coordinator machine 120 is capable of supporting or managing. Although only one coordinator machine 120 is depicted in FIG. 1C , it should be appreciated that each of the coordinator machines 120 of the management system 125 may include the same components and support the same services. As illustrated in FIG. 1C , the coordinator machine 120 supports four layers: an interfaces layer 106 , a storage services layer 108 , a core layer 112 , and a storage engines layer 114 .
  • the core layer 112 is configured to manage or process failure events, consistency models within the distributed database 105 , routing functionality, topology management, intra- and inter-datacenter replication, and conflict resolution.
  • the storage engines layer 114 is configured to convert and/or process data for storage on various physical memory devices (e.g., SSD, platter storage, or other memory).
  • the storage services layer 108 supports applications or features that enable customers to manage the importing and storage of data within the distributed database 105 . For example, some of the applications or features include batch importing, managing a strong consistency service, and managing a timeseries counters service.
  • the interfaces layer 106 manages how customers interact with the distributed database 105 , such as customers interacting with the distributed database 105 via the clients 110 .
  • FIG. 2 illustrates an example representation 200 of various applications and functionalities related to the distributed database system.
  • the applications and functionalities may be managed by the coordinator machines 120 as described with respect to FIGS. 1B and 1C .
  • the representation 200 identifies various modules managed by each of the coordinator machines 120 , as well as communication paths among the modules, the layers, and the storage components associated with the distributed database system.
  • the representation 200 includes a core layer 212 (such as the core layer 112 as discussed with respect to FIG. 1B ), a storage services module 222 , and a management services module 224 .
  • the core layer 212 may communicate with an interfaces layer 206 (such as the interfaces layer 106 as discussed with respect to FIG. 1B ) and a storage engines layer 214 (such as the storage engines layer 114 as discussed with respect to FIG. 1B ).
  • the management services module 224 is configured to communicate with the core layer 212 , and includes various components, applications, modules, or the like that facilitate various systems and methods supported by the distributed database system.
  • the storage services module 222 is also configured to communicate with the core layer 212 , and also includes various components, applications, modules, or the like that facilitate additional systems and methods supported by the distributed database system.
  • the storage engines layer 214 is configured to manage data storage on the distributed database as well as maintain data structures in memory.
  • the storage engines layer 214 supports at least three different storage engines: (1) seadb, which is a read-only file format for batch processed data (e.g., from a distributed system such as Apache Hadoop), (2) SSTable, a log-structured merge (LSM) tree-based format for heavy write workloads, and (3) b-tree, a b-tree based format for heavy read and light write workloads.
  • Customers may directly or indirectly select an appropriate storage engine for processing datasets based on the service or use-case of the service.
  • the customer may want to select a read-only selection corresponding to the seadb storage engine.
  • the customer may want to select a read/write selection corresponding to the SSTable or b-tree storage engine.
  • the SSTable storage engine is a better choice for write-heavy workloads and the b-tree storage engine is a better choice for read-heavy workloads.
  • the management services module 224 initiates an appropriate workflow based on the selected storage engine.
  • the management services module 224 further supports multiple types of clusters for storing datasets: a first, general cluster for storing general data as well as a second, production cluster for storing sensitive data.
  • the management services module 224 may further include a reporting module 238 configured for various reporting functionalities.
  • the reporting module 238 may support an integration between the datasets being stored and external services and teams, and may enable the automatic reporting of certain usage of the distributed database system to the external services and teams.
  • the reporting module 238 may support an API to a “capacity team,” or a team tasked with managing the capacity of the distributed database system (generally, a moderator), such that the capacity team may manage customer usage, model capacity metrics, and collect raw data for customers. By managing the capacity of the system, the capacity team may effectively and efficiently manage the associated resources of the distributed database system.
  • the reporting module 238 may generate reports associated with data usage resulting from consistency model management.
  • the management services module 224 places the service into a pending state and causes the reporting module 238 to automatically generate a service ticket that indicates the service's usage or requested usage, and provide the service ticket to the capacity team.
  • the capacity team may examine the service ticket and interface with the customer to handle or manage the usage request. In particular, the capacity team may approve the increased capacity and enable the service use by the customer, or may reject the increased capacity.
  • the reporting module 238 may also generate a report if a customer's service exceeds a quota or threshold, along with details of the excess usage.
  • the reporting module 238 may aggregate the reports such that, over time, the capacity team may analyze the usage data to generate resource planning recommendations. For example, the data from the aggregated reports may indicate that more resources are needed to support the excess usage requests.
  • the management services module 224 further supports a “self-service” interface module 226 that enables customers to configure services or applications within the distributed database, as well as configure various functionalities related thereto, such as consistency model configurations.
  • the self-service interface module 226 enables a customer to make selections, via various user interfaces, associated with initiating various services and applications supported by the distributed database as well as managing data stored in the distributed database.
  • a customer may interface with the self-service interface module 226 via a user interface that the module causes to be displayed on one of the plurality of clients 110, as discussed with respect to FIG. 1A.
  • FIG. 3 illustrates an example “start screen” 350 that details various options and features available to customers for using the distributed database.
  • via an interface (such as the interface 450 of FIG. 4A), the customer may select whether the use case of a desired service is associated with static data (451) or dynamic data (452). Based on the selection of static data or dynamic data, the management services module 224 may need to configure different consistency models and/or different clusters within the distributed database for the desired service.
  • FIG. 4B illustrates an additional interface 550 associated with initiating a service.
  • the interface 550 of FIG. 4B indicates various clusters of the distributed database that are available for multi-tenant use.
  • the interface 550 includes a name of the cluster, a description of the cluster, a type of the cluster (e.g., testing, production, etc.), identifications of one or more data centers that support the cluster, and an option for the customer to select a particular cluster.
  • FIG. 5A illustrates an interface 552 associated with configuring a new application or service that will utilize a specific cluster (such as one of the clusters depicted in FIG. 4B ).
  • the interface 552 enables a customer to input a name and description for the application.
  • an interface 554 illustrated in FIG. 5B enables the customer to input contact information as well as associate a team of engineers with the application.
  • FIG. 5C illustrates an interface 556 that enables the customer to input various storage and throughput quotas for the application, such as storage space, peak keys written per second, peak partitioning keys read per second, and peak local keys read per second. It should be appreciated that additional storage and throughput quota parameters are envisioned.
  • an interface 558 as illustrated in FIG. 5D enables the user to input traffic expectations for the application, such as whether the application will utilize a cache, and keys per second expectations. It should be appreciated that the interfaces 552, 554, 556, 558 of FIGS. 5A-5D are merely examples and that additional or alternative options, selections, and/or content are envisioned.
  • the self-service interface module 226 further enables the customer to select various functionalities associated with dataset management using the distributed database.
  • the customer can select a rate limiting functionality to set rate limits (e.g., limits on queries per second) associated with data reads and/or data writes, which is described in further detail below.
  • the customer can configure custom alerts associated with meeting or exceeding rate limits.
  • the customer can select to have reports detailing resource usage and other metrics generated (e.g., by the reporting module 238 ) at various time intervals or in response to various triggers.
  • the self-service interface can enable customers to modify certain parameters (e.g., increase or decrease resource usage limits) after a service is initiated.
  • the self-service interface module 226 further enables the customer to select various consistency model configurations for a service.
  • distributed systems support a specific consistency model.
  • When data is stored in a distributed system, the data must propagate among multiple computer resources or clusters before it has achieved replica convergence across the distributed system.
  • Certain consistency models have benefits and drawbacks when compared to other consistency models.
  • an eventually consistent database enables users to store and retrieve data without delay. However, because reads are served without waiting for replica convergence, there is no guarantee that the retrieved data is completely up-to-date (i.e., consistent across the distributed system). In contrast, a strongly consistent database requires that all resources or clusters have the same view of stored data. Accordingly, when a user retrieves certain data, that data is guaranteed to be up-to-date, at the cost of higher read latency, lower read throughput, and the potential for more failures.
  • For example, an end user of the Twitter® social networking service may not want a long delay when opening the “tweet stream” associated with his or her account, but also may not mind (or may not notice) that a Tweet® posted to Twitter® in the last fraction of a second is not yet presented in the tweet stream.
  • Twitter® may require a strongly consistent database when storing Twitter® handles (i.e., usernames) so as to ensure that the same handle will not be assigned to multiple end users.
  • the interface layer 206 supports a coordinator module 228 that is configured to interface with the management services module 224 and manage consistency models within the distributed database system.
  • a customer may interface with the self-service interface module 226 to specify the consistency model as well as various customization and configuration features associated therewith, for different applications and services to be supported by the distributed database system.
  • the interface layer 206 may therefore enable the customer to input a consistency model configuration including various parameters such as consistency type, associated time periods, and associated replication factors.
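  • as an illustrative sketch of such a configuration (the class and field names below are assumptions, not a schema from this disclosure), the standard quorum-overlap rule R + W > N is one condition under which reads are guaranteed to observe the latest successful write:

        from dataclasses import dataclass

        @dataclass
        class ConsistencyConfig:
            consistency_type: str    # "eventual" or "strong" (hypothetical values)
            replication_factor: int  # N: replicas kept per dataset
            read_quorum: int         # R: replicas that must answer a read
            write_quorum: int        # W: replicas that must acknowledge a write
            time_period_s: int = 0   # optional time window associated with the model

            def is_strong(self) -> bool:
                # Quorum-overlap rule: if R + W > N, every read set intersects
                # the most recent write set, so reads observe current data.
                return self.read_quorum + self.write_quorum > self.replication_factor

        # Strongly consistent handle assignment (N=3, R=2, W=2), per the Twitter
        # handle example above; an eventually consistent service could use R=W=1.
        handles = ConsistencyConfig("strong", replication_factor=3,
                                    read_quorum=2, write_quorum=2)
        assert handles.is_strong()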
  • the distributed database enables multiple customers to use the same resources or cluster, whereby each customer is allotted a certain amount of the resources or cluster.
  • a customer may actually need more resources than what is originally envisioned by the customer and/or what is originally allocated to the customer.
  • a conventional system having resources dedicated for individual customer use would reject a request for resource capacity that exceeds the originally allocated amount.
  • because a multi-tenant system concurrently supports multiple use cases for multiple customers, it is likely that one or more of the customers is below a corresponding allocated capacity at a given time.
  • the management services module 224 supports a rate-limiting service operated by a QoS controller 240 to manage customer usage of the resources or clusters of the distributed database across many metrics and ensure that no one service affects others on the system.
  • the rate-limiting service may limit usage by certain of the customers and, in some cases, dynamically reallocate certain resources for certain of the customers to effectively utilize the total amount of resources or clusters within the distributed database.
  • the distributed database is supporting ten (10) customers for various use cases.
  • Each of the ten (10) customers has a corresponding allocated amount of resources whereby a sum of all of the allocated amount of resources may constitute the total resource capacity of the distributed database.
  • the QoS controller 240 may compare the amount of resources needed to support the outstanding requests (i.e., a sum of the resources needed to support requests of all of the customers) to the total resource capacity to determine whether there is any available resource capacity. If there is available capacity, then at least one of the customers is not using a respective amount of resources allocated to that customer.
  • if there is available capacity, the QoS controller 240 can allocate a portion of the unused resources to the customers whose requests exceed their allocations. In contrast, if there is not available capacity, then the QoS controller 240 may reject the requests for the excess resource usage.
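  • a minimal sketch of this capacity check follows, assuming per-customer demand and allocation tables; the function and variable names are illustrative, not the patent's implementation:

        def grant_overages(demand, allocation, total_capacity):
            """demand/allocation: dicts mapping customer -> resource units."""
            # Available capacity is whatever the outstanding requests leave unused.
            available = total_capacity - sum(demand.values())
            if available <= 0:
                return {}  # no spare capacity: excess requests are rejected
            grants = {}
            for customer, used in demand.items():
                overage = used - allocation[customer]
                if overage > 0 and available > 0:
                    # Cover as much of the overage as spare capacity allows.
                    grants[customer] = min(overage, available)
                    available -= grants[customer]
            return grants

        # Two customers exceed their allocations while a third is under budget.
        print(grant_overages({"a": 120, "b": 110, "c": 40},
                             {"a": 100, "b": 100, "c": 100}, 300))
        # -> {'a': 20, 'b': 10}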
  • the QoS controller 240 is capable of distinguishing among various properties of allocated resources, and managing allocations and requests relating thereto.
  • various properties of resources may include storage space, network bandwidth, CPU usage, and others.
  • a customer may request a limit of 1,000 queries per second, but in operation only send 100 queries per second.
  • the amount of data per query may be very large and more than what the QoS controller 240 is expecting, such that the total amount of information completely saturates the network bandwidth for the resources allocated to the customer.
  • the QoS controller 240 may dynamically manage (e.g., rate limit) the allocated resources according to the network bandwidth of the queries even though the original request specified an amount of queries without indicating a corresponding data transmission amount.
  • the QoS controller 240 may identify an available portion of additional allocated resources capable of supporting the queries, and may dynamically reallocate the available portion to support the overage network bandwidth (thus negating the need to rate limit or deny the overage request).
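  • the sketch below illustrates rate limiting across more than one resource property at once, in the spirit of the bandwidth example above: a request within its query budget can still be throttled when its payload would saturate the bandwidth budget. The token-bucket scheme and all names are assumptions for illustration:

        import time

        class MultiMetricLimiter:
            def __init__(self, qps_limit, bytes_per_s_limit):
                # One token bucket per resource property.
                self.limits = {"qps": qps_limit, "bytes_per_s": bytes_per_s_limit}
                self.tokens = dict(self.limits)
                self.last = time.monotonic()

            def allow(self, n_queries, n_bytes):
                now = time.monotonic()
                elapsed, self.last = now - self.last, now
                for key, limit in self.limits.items():
                    # Refill each bucket at its own rate, capped at its limit.
                    self.tokens[key] = min(limit, self.tokens[key] + elapsed * limit)
                cost = {"qps": n_queries, "bytes_per_s": n_bytes}
                if all(self.tokens[k] >= cost[k] for k in cost):
                    for k in cost:
                        self.tokens[k] -= cost[k]
                    return True
                return False  # saturated on at least one property: rate limit

        # 100 queries fit a 1,000 qps budget, but at 10 MB per query the batch
        # overruns a 100 MB/s bandwidth budget, so the limiter throttles.
        limiter = MultiMetricLimiter(qps_limit=1_000, bytes_per_s_limit=100_000_000)
        print(limiter.allow(n_queries=100, n_bytes=100 * 10_000_000))  # False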
  • the QoS controller 240 determines whether the required amount of resources of the distributed database is available to support the service. If the required amount of resources is available, the QoS controller 240 instantiates the resources and enables the service to access them, whereby the coordinator module 228 manages the corresponding resources of the distributed database according to the configured consistency model. Of course, the customer can request additional resources for a particular application, which the QoS controller 240 may additionally configure or instantiate.
  • FIG. 6 is a flowchart of a method 600 for an electronic device to manage resource usage by a plurality of customers within a multi-tenant distributed database.
  • the electronic device may correspond to one or more of the various modules or entities, or combinations thereof, discussed herein.
  • one of the coordinator machines 120, or more specifically the management services module 224, may implement and facilitate the method 600.
  • the order of the steps of the depicted flowchart of FIG. 6 can differ from the version shown, and certain steps can be eliminated, and/or certain other ones can be added, depending upon the implementation.
  • a customer may interface (e.g., via the self-service interface 226 ) with the electronic device to specify parameters for a service as well as consistency model configurations for the service.
  • the method 600 begins with the electronic device allocating ( 670 ), to a first customer for operating a first service, a first portion of a plurality of computer resources within a multi-tenant distributed database. Similarly, the electronic device may allocate ( 672 ), to a second customer for operating a second service, a second portion of a plurality of computer resources within the multi-tenant distributed database.
  • each of the first customer and the second customer may submit a request (e.g., via a self-service user interface) for operation of the respective first and second services, along with parameters associated with service operation such as storage space, peak keys written per second, peak partitioning keys read per second, peak local keys read per second, and/or others.
  • the electronic device may accordingly identify or calculate the first and second portions based on respective amounts of resources needed to support the respective first and second services and the parameters thereof.
  • the electronic device may allocate the entirety of the plurality of computer resources, or may leave a portion of the plurality of computer resources unallocated. In other embodiments, the electronic device may over-allocate the plurality of computer resources, such as by allocating a single portion of the plurality of computer resources for operation of each of the first service and the second service. After the resource allocation, the first and second customers may operate the respective first and second services within the respective first and second allocated portions.
  • the electronic device may determine ( 674 ) whether the first customer (or the second customer), in operating the first service (or the second service), has exceeded the allocated first portion (or allocated second portion). In some implementations, the electronic device may detect when operation of either the first service or the second service is approaching the respective resource limit of the first portion or the second portion, or otherwise within a threshold amount or percentage of the respective resource limit.
  • if the electronic device determines that the first customer is not exceeding the allocated first portion (“NO”), the electronic device may continue to monitor resource usage. In contrast, if the electronic device determines that the first customer is exceeding the allocated first portion (“YES”), the electronic device may optionally temporarily prevent (676) the first customer from exceeding the allocated first portion or otherwise rate limit operation of the first service. In particular, the electronic device may prevent or deny whatever operation the first service is attempting that would result in exceeding the allocated first portion (or any other threshold or limit).
  • the electronic device may also optionally generate ( 678 ) a service ticket and provide the service ticket to a moderator.
  • the service ticket may indicate any resource usage as well any instance of the allocated first portion being exceeded or an attempt to exceed the allocated first portion.
  • the moderator may choose to take various actions depending on the content of the service ticket.
  • the electronic device may identify ( 680 ) an available portion of the second portion allocated to the second customer.
  • the second customer may not be using the full resource allocation of the second portion and thus there may be an unused or available portion of the second portion.
  • the electronic device may determine ( 682 ) whether a combination of the first portion and the available portion exceeds the total capacity of the plurality of computer resources. In implementations, if the combination exceeds the total capacity (“YES”), then the plurality of computer resources may be over-allocated, and the electronic device may reduce ( 686 ) the available portion by removing or otherwise not including some of the resources from or in the available portion. In particular, the electronic device may determine a portion of the available portion that would not result in the plurality of computer resources being over-allocated. If the combination does not exceed the total capacity (“NO”), the electronic device may optionally identify ( 684 ) an available unallocated portion of the plurality of computer resources.
  • the available unallocated portion may be in addition to the available portion of the second portion. Further, the electronic device may identify the available unallocated portion in various instances such as if the available portion of the second portion is not sufficient to support the exceeded allocated first portion. For example, if the electronic device determines that resources are needed to support 1,000 additional queries per second and the available portion of the second portion is able to support 800 queries per second, then the electronic device may additionally identify an unallocated portion capable of supporting the extra 200 queries per second. Thus, the electronic device may determine the total available resources to be the available portion of the second portion plus at least some of the available unallocated portion.
  • the electronic device may dynamically allocate ( 688 ), to the first customer for operating the first service, at least some of the available portion. Additionally, the electronic device may optionally allocate the available unallocated portion to the first customer for operating the first service. As a result, the first customer, in operating the first service, is able to exceed the allocated first portion without being subject to a service denial or to rate limiting.
  • the electronic device may generate ( 690 ) a report indicating usage metrics associated with each computer resource included in at least one of the first portion and the second portion, and provide the report to a moderator associated with the multi-tenant distributed database for usage analysis. The electronic device may also periodically update the report to indicate updated usage metrics according to operation of the first service and/or second service within the multi-tenant distributed database.
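  • the following sketch traces the flow of FIG. 6 end to end (step numbers from the figure appear as comments); the identifiers are illustrative, and the helper stubs stand in for behavior described elsewhere in this disclosure:

        def rate_limit(service):                  # stand-in for throttling logic
            pass

        def file_service_ticket(service, usage):  # stand-in for the reporting module
            pass

        def manage_overage(total, alloc, usage, service, overage):
            if usage[service] <= alloc[service]:              # 674: exceeded?
                return "continue monitoring"
            rate_limit(service)                               # 676: optional stop-gap
            file_service_ticket(service, usage[service])      # 678: notify moderator
            # 680: find an available portion inside another customer's allocation.
            donor = next((s for s in alloc
                          if s != service and usage[s] < alloc[s]), None)
            available = (alloc[donor] - usage[donor]) if donor else 0.0
            if alloc[service] + available > total:            # 682: over-allocated?
                available = max(0.0, total - alloc[service])  # 686: shrink the portion
            else:
                # 684: optionally add capacity that was never allocated.
                available += max(0.0, total - sum(alloc.values()))
            alloc[service] += min(overage, available)         # 688: dynamic grant
            return {"alloc": dict(alloc), "usage": dict(usage)}  # 690: usage report

        print(manage_overage(total=100.0,
                             alloc={"first": 50.0, "second": 50.0},
                             usage={"first": 55.0, "second": 30.0},
                             service="first", overage=5.0))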
  • FIG. 7 illustrates an example electronic device 781 in which the functionalities as discussed herein may be implemented.
  • the electronic device 781 may be one of the coordinator machines 120 and/or one of the clients 110 as discussed with respect to FIG. 1B .
  • the electronic device 781 is a dedicated computer machine, workstation, or the like, including any combination of hardware and software components.
  • the electronic device 781 can include a processor 779 or other similar type of controller module or microcontroller, as well as a memory 795 .
  • the memory 795 can store an operating system 797 capable of facilitating the functionalities as discussed herein.
  • the processor 779 can interface with the memory 795 to execute the operating system 797 and a set of applications 783 .
  • the set of applications 783 (which the memory 795 can also store) can include a self-service interface module 726 configured to facilitate the customer interaction functionalities as discussed herein, a management services module 724 configured to facilitate resource allocation, a reporting module 738 configured to facilitate reporting functionalities, and a QoS controller 740 configured to manage reallocation of the resources in a multi-tenant distributed database.
  • the set of applications 783 can include one or more other applications or modules not depicted in FIG. 7 .
  • the memory 795 can include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), as well as hard drives, flash memory, MicroSD cards, and others.
  • the electronic device 781 can further include a communication module 793 configured to interface with one or more external ports 785 to communicate data via one or more networks 702 .
  • the communication module 793 can leverage the external ports 785 to establish a wide area network (WAN) or a local area network (LAN) for connecting the electronic device 781 to other components such as resources of a distributed database.
  • the communication module 793 can include one or more transceivers functioning in accordance with IEEE standards, 3GPP standards, or other standards, and configured to receive and transmit data via the one or more external ports 785 .
  • the communication module 793 can include one or more wireless or wired WAN and/or LAN transceivers configured to connect the electronic device 781 to WANs and/or LANs.
  • the electronic device 781 may further include a user interface 787 configured to present information to the user and/or receive inputs from the user.
  • the user interface 787 includes a display screen 791 and I/O components 789 (e.g., capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, cursor control devices, haptic devices, and others).
  • a computer program product in accordance with an embodiment includes a computer usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by the processor 779 (e.g., working in connection with the operating system 797 ) to facilitate the functions as described herein.
  • the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code, or the like (e.g., via C, C++, Java, ActionScript, Objective-C, JavaScript, CSS, XML, and/or others).

Abstract

Embodiments are provided for enabling dynamic reallocation of resources in a multi-tenant distributed database. According to certain aspects, a management services module allocates multiple portions of computer resources for respective operation of multiple services by multiple customers. A quality of service (QoS) controller detects that one of the services is attempting to exceed its allocated portion of resources, and identifies an available portion of another allocated portion of resources. In response, the QoS controller causes the management services module to dynamically allocate the available portion to the detected service so that the detected service is able to operate without error or delay.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application No. 61/978,689, filed Apr. 11, 2014, which is incorporated by reference herein.
FIELD
The present disclosure generally relates to multi-tenant distributed databases and, more particularly, to implementations for managing resource allocation within a multi-tenant distributed database.
BACKGROUND
Distributed database systems include a plurality of storage devices spread among a network of interconnected computers. The distributed database systems typically have greater reliability and availability than parallel database systems, among other benefits. Various internet services, for example social networking services, employ distributed database systems to manage the storage and retrieval of information. Generally, the need to efficiently and accurately read and write data across the database system increases with a greater amount of information, a greater number of users, and stricter latency requirements.
In various conventional non-distributed systems, different tenancy configurations may be employed to manage software access by users or tenants. A single tenant system includes an architecture in which each customer has their own software instance. In contrast, a multi-tenant system includes an architecture that enables multiple customers to use a single software instance. There are benefits and drawbacks to both single tenant and multi-tenant systems. In particular, even though multi-tenant systems may generally be more complex than single tenant systems, multi-tenant systems may realize more cost savings, increase data aggregation benefits, and simplify the release management process, among other benefits. However, the complexity and constraints of existing distributed system frameworks and the complex resource requirements of multi-tenant systems limit the configurability and functionality of multi-tenancy configurations.
Accordingly, there is an opportunity for techniques and frameworks to support multi-tenant systems within distributed databases. In particular, there is an opportunity for techniques and frameworks to manage resource allocation within multi-tenant systems implemented in distributed databases.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed embodiments, and explain various principles and advantages of those embodiments.
FIG. 1A depicts a system capable of implementing a multi-tenant distributed database in accordance with some embodiments.
FIG. 1B depicts a detailed representation of various components configured to manage a multi-tenant distributed database in accordance with some embodiments.
FIG. 1C depicts a representation of various layers supported by a coordinator machine in accordance with some embodiments.
FIG. 2 depicts an example representation of entities and components associated with managing services of a multi-tenant distributed database in accordance with some embodiments.
FIG. 3 depicts an example user interface associated with a multi-tenant distributed database in accordance with some embodiments.
FIG. 4A depicts an example user interface for configuring services for operation on a multi-tenant distributed database in accordance with some embodiments.
FIG. 4B depicts an example user interface for selecting resources of a multi-tenant distributed database in accordance with some embodiments.
FIGS. 5A-5D depict example user interfaces for initiating a service supported by a multi-tenant distributed database in accordance with some embodiments.
FIG. 6 depicts a flow chart of an example method for configuring a consistency model for a service supported by a multi-tenant distributed database in accordance with some embodiments.
FIG. 7 depicts a hardware diagram of an electronic device in accordance with some embodiments.
DETAILED DESCRIPTION
According to the present embodiments, a multi-tenant distributed database as well as various built-in systems and methods of managing access thereto are disclosed. The distributed database is a multi-tenant system capable of concurrently serving multiple use cases of multiple customers according to various resource requirements, parameters, and other factors.
According to aspects, the systems and methods may dynamically reallocate resources within the distributed database in response to the detection of various service usage parameters. Initially, the systems and methods may allocate respective portions of the distributed database resources for use by respective customers in operating respective services, whereby the portions may be allocated according to requirements of the respective services without having to build a dedicated system for the customer. However, the actual usage of the services may result in an imbalanced resource allocation, whereby one service may request more resources than are allocated and another service may underutilize its allocation. The systems and methods may identify an available allocated or unallocated resource portion capable of handling the overage, and may dynamically adjust any resource allocation(s) accordingly. Thus, the systems and methods improve existing multi-tenant distributed database frameworks by negating the need for complex resource reconfiguring. Further, the systems and methods enable seamless accommodation of overage requests, thus improving on the need for existing frameworks to outright deny overage requests.
FIG. 1A illustrates a general system architecture of a system 100 implementing a multi-tenant distributed database 105. The distributed database 105 may include multiple nodes 104 of storage devices or computer resources that are distributed across a plurality of physical computers, such as a network of interconnected computers. The multiple nodes 104 may be virtually or physically separated, and may be configured to interface with one or more processing units such as one or more CPUs. Each of the nodes 104 may store one or more replicas of one or more datasets, and may include one or more various types of storage devices (e.g., solid state drives (SSDs), platter storage such as hard disk drives, or other memory) and structures (e.g., SSTable, seadb, b-tree, or others). A distributed database management system (DBMS) 103 may be configured to manage the distributed database 105, whereby the DBMS 103 may be stored on a centralized computer within the system 100.
The system 100 further includes a plurality of clients 110 configured to access the distributed database 105 and features thereof via one or more networks 102. It should be appreciated that the network 102 may be any type of wired or wireless LAN, WAN, or the like. For example, the network 102 may be the Internet, or various corporate intranets or extranets. In embodiments, each of the plurality of clients 110 is a dedicated computer machine, workstation, or the like, including any combination of hardware and software components. Further, a user such as a developer, engineer, supervisor, or the like (generally, a “customer”) may interface with any of the plurality of clients 110 to access the distributed database 105 and configure various services to be supported thereon. It should be appreciated that the plurality of clients 110 may also interface with the DBMS 103.
FIG. 1B illustrates a system 150 having components capable of implementing the systems and methods of the present embodiments. The system 150 includes the distributed database 105 storing a plurality of nodes 104, as discussed with respect to FIG. 1A. Each of the nodes 104 may store one or more replica representations 130 of one or more datasets.
The system 150 further includes a management system 125, which may serve as or be separate from the DBMS 103 as discussed with respect to FIG. 1A. The management system 125 includes a plurality of coordinator machines 120 that may be distributed throughout various physical or virtual locations and may be configured to connect to one another. Each of the coordinator machines 120 may manage various services associated with storing and managing datasets within the distributed database 105. In one case, each of the coordinator machines 120 may manage one or more services to identify appropriate replica representations 130 and interface with the identified replica representations 130 for dataset storage and management. Customers may operate one or more of the clients 110 to interface with one or more of the coordinator machines 120, where the particular coordinator machine 120 is selected based on availability or other factors.
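As a rough illustration of that selection step, a client might filter for healthy coordinator machines and prefer the least-loaded one; the record fields and the load heuristic below are assumptions, since the disclosure only says the choice is based on availability or other factors.
        # Hypothetical coordinator-selection sketch.
        def pick_coordinator(coordinators):
            """coordinators: list of dicts with 'healthy' and 'load' keys."""
            healthy = [c for c in coordinators if c["healthy"]]
            if not healthy:
                raise RuntimeError("no coordinator machine available")
            return min(healthy, key=lambda c: c["load"])  # least-loaded wins

        print(pick_coordinator([{"id": 1, "healthy": True, "load": 0.7},
                                {"id": 2, "healthy": True, "load": 0.2},
                                {"id": 3, "healthy": False, "load": 0.0}]))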
FIG. 1C illustrates a more detailed representation of the coordinator machine 120 and various features that the coordinator machine 120 is capable of supporting or managing. Although only one coordinator machine 120 is depicted in FIG. 1C, it should be appreciated that each of the coordinator machines 120 of the management system 125 may include the same components and support the same services. As illustrated in FIG. 1C, the coordinator machine 120 supports four layers: an interfaces layer 106, a storage services layer 108, a core layer 112, and a storage engines layer 114.
Generally, the core layer 112 is configured to manage or process failure events, consistency models within the distributed database 105, routing functionality, topology management, intra- and inter-datacenter replication, and conflict resolution. The storage engines layer 114 is configured to convert and/or process data for storage on various physical memory devices (e.g., SSD, platter storage, or other memory). The storage services layer 108 supports applications or features that enable customers to manage the importing and storage of data within the distributed database 105. For example, some of the applications or features include batch importing, managing a strong consistency service, and managing a timeseries counters service. The interfaces layer 106 manages how customers interact with the distributed database 105, such as customers interacting with the distributed database 105 via the clients 110.
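To make the division of labor among the four layers concrete, the sketch below walks a single client write through them; each function is a placeholder for the responsibilities assigned above, not an actual interface from this disclosure.
        def interfaces_layer(request):
            # Interfaces layer: accept the customer's request from a client.
            return {"key": request["key"], "value": request["value"]}

        def storage_services_layer(op):
            # Storage services layer: e.g., tag the write for a consistency service.
            op["service"] = "strong_consistency"
            return op

        def core_layer(op):
            # Core layer: routing, replication, and consistency management.
            op["replicas"] = ["node-1", "node-2", "node-3"]
            return op

        def storage_engines_layer(op):
            # Storage engines layer: convert the data for the physical engine.
            return {"engine": "sstable", "payload": op}

        print(storage_engines_layer(core_layer(storage_services_layer(
            interfaces_layer({"key": "user:42", "value": "..."})))))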
FIG. 2 illustrates an example representation 200 of various applications and functionalities related to the distributed database system. The applications and functionalities may be managed by the coordinator machines 120 as described with respect to FIGS. 1B and 1C. In particular, the representation 200 identifies various modules managed by each of the coordinator machines 120, as well as communication paths among the modules, the layers, and the storage components associated with the distributed database system.
As illustrated in FIG. 2, the representation 200 includes a core layer 212 (such as the core layer 112 as discussed with respect to FIG. 1B), a storage services module 222, and a management services module 224. The core layer 212 may communicate with an interfaces layer 206 (such as the interfaces layer 106 as discussed with respect to FIG. 1B) and a storage engines layer 214 (such as the storage engines layer 114 as discussed with respect to FIG. 1B). The management services module 224 is configured to communicate with the core layer 212, and includes various components, applications, modules, or the like that facilitate various systems and methods supported by the distributed database system. The storage services module 222 is also configured to communicate with the core layer 212, and also includes various components, applications, modules, or the like that facilitate additional systems and methods supported by the distributed database system.
The storage engines layer 214 is configured to manage data storage on the distributed database as well as maintain data structures in memory. The storage engines layer 214 supports at least three different storage engines: (1) seadb, which is a read-only file format for batch processed data (e.g., from a distributed system such as Apache Hadoop), (2) SSTable, a log-structured merge (LSM) tree-based format for heavy write workloads, and (3) b-tree, a b-tree based format for heavy read and light write workloads. Customers may directly or indirectly select an appropriate storage engine for processing datasets based on the service or use-case of the service.
For example, if the dataset is static and/or can be generated using a distributed system, the customer may want to select a read-only selection corresponding to the seadb storage engine. For further example, in the Twitter® social networking service, if the dataset changes dynamically, such as if the dataset includes tweets and Twitter® users, then the customer may want to select a read/write selection corresponding to the SSTable or b-tree storage engine. Generally, the SSTable storage engine is a better choice for write-heavy workloads and the b-tree storage engine is a better choice for read-heavy workloads. The management services module 224 initiates an appropriate workflow based on the selected storage engine. The management services module 224 further supports multiple types of clusters for storing datasets: a first, general cluster for storing general data as well as a second, production cluster for storing sensitive data.
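The engine choice described above reduces to a small decision rule, sketched below; the workload flags are illustrative, and the split follows the guidance above (LSM-based SSTable for write-heavy data, b-tree for read-heavy data).
        def choose_storage_engine(static_data, write_heavy):
            """Map a service's use case to one of the three supported engines."""
            if static_data:
                return "seadb"    # read-only format for batch-processed data
            if write_heavy:
                return "sstable"  # LSM tree-based format for heavy writes
            return "b-tree"       # b-tree format for heavy reads, light writes

        assert choose_storage_engine(static_data=True, write_heavy=False) == "seadb"
        assert choose_storage_engine(static_data=False, write_heavy=True) == "sstable"
        assert choose_storage_engine(static_data=False, write_heavy=False) == "b-tree"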
The management services module 224 may further include a reporting module 238 configured for various reporting functionalities. The reporting module 238 may support an integration between the datasets being stored and external services and teams, and may enable the automatic reporting of certain usage of the distributed database system to the external services and teams. According to some embodiments, the reporting module 238 may support an API to a “capacity team,” or a team tasked with managing the capacity of the distributed database system (generally, a moderator), such that the capacity team may manage customer usage, model capacity metrics, and collect raw data for customers. By managing the capacity of the system, the capacity team may effectively and efficiently manage the associated resources of the distributed database system. In some embodiments, the reporting module 238 may generate reports associated with data usage resulting from consistency model management.
In operation, if a customer creates, tests, or otherwise uses a service and the usage amount exceeds the amount of resources allocated to the customer, the management services module 224 places the service into a pending state and causes the reporting module 238 to automatically generate a service ticket that indicates the service's actual or requested usage, and to provide the service ticket to the capacity team. The capacity team may examine the service ticket and interface with the customer to handle or manage the usage request. In particular, the capacity team may approve the increased capacity and enable use of the service by the customer, or may reject the request.
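A minimal sketch of this pending-state and service-ticket flow, assuming usage and allocations are tracked as simple numbers; the class and function names are illustrative, not taken from the system described here.

```python
# Hedged sketch of the pending-state/service-ticket flow described above.
from dataclasses import dataclass

@dataclass
class ServiceTicket:
    service_name: str
    used: float       # resources the service is using or requesting
    allocated: float  # resources allocated to the customer

def check_usage(service_name: str, used: float, allocated: float,
                capacity_team_queue: list) -> str:
    """Return the service state; file a ticket when usage exceeds allocation."""
    if used <= allocated:
        return "active"
    # Usage exceeds the allocation: suspend the service and notify the
    # capacity team (the moderator) with the details of the overage.
    capacity_team_queue.append(ServiceTicket(service_name, used, allocated))
    return "pending"
```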
The reporting module 238 may also generate a report, including details of the excess usage, if a customer's service exceeds a quota or threshold. The reporting module 238 may aggregate the reports such that, over time, the capacity team may analyze the usage data to generate resource planning recommendations. For example, the data from the aggregated reports may indicate that more resources are needed to support the excess usage requests.
The management services module 224 further supports a “self-service” interface module 226 that enables customers to configure services or applications within the distributed database, as well as to configure various functionalities related thereto, such as consistency model configurations. In particular, the self-service interface module 226 enables a customer to make selections, via various user interfaces, associated with initiating the services and applications supported by the distributed database, as well as with managing data stored in the distributed database. A customer may interface with the self-service interface module 226 via a user interface that the self-service interface module 226 may cause to be displayed on one of the plurality of clients 110 as discussed with respect to FIG. 1B.
Generally, a customer may initiate various services or applications having associated use cases within the distributed database. FIG. 3 illustrates an example “start screen” 350 that details various options and features available to customers for using the distributed database. In another example, as illustrated in an interface 450 of FIG. 4A, the customer may select whether the use case of a desired service is associated with static data (451) or dynamic data (452). Based on the selection of static data or dynamic data, the management services module 224 may need to configure different consistency models and/or different clusters within the distributed database for the desired service.
FIG. 4B illustrates an additional interface 550 associated with initiating a service. In particular, the interface 550 of FIG. 4B indicates various clusters of the distributed database that are available for multi-tenant use. The interface 550 includes a name of the cluster, a description of the cluster, a type of the cluster (e.g., testing, production, etc.), identifications of one or more data centers that support the cluster, and an option for the customer to select a particular cluster.
FIG. 5A illustrates an interface 552 associated with configuring a new application or service that will utilize a specific cluster (such as one of the clusters depicted in FIG. 4B). The interface 552 enables a customer to input a name and description for the application. Similarly, an interface 554 illustrated in FIG. 5B enables the customer to input contact information as well as associate a team of engineers with the application. FIG. 5C illustrates an interface 556 that enables the customer to input various storage and throughput quotas for the application, such as storage space, peak keys written per second, peak partitioning keys read per second, and peak local keys read per second. It should be appreciated that additional storage and throughput quota parameters are envisioned. Moreover, an interface 558 as illustrated in FIG. 5D enables the user to input traffic expectations for the application, such as whether the application will utilize a cache, and keys per second expectations. It should be appreciated that the interfaces 552, 554, 556, 558 of FIGS. 5A-5D are merely examples and that additional or alternative options, selections, and/or content are envisioned.
The self-service interface module 226 further enables the customer to select various functionalities associated with dataset management using the distributed database. In one particular case, the customer can select a rate limiting functionality to set rate limits (e.g., limits on queries per second) associated with data reads and/or data writes, which is described in further detail below. Further, the customer can configure custom alerts associated with meeting or exceeding rate limits. Still further, the customer can select to have reports detailing resource usage and other metrics generated (e.g., by the reporting module 238) at various time intervals or in response to various triggers. Moreover, the self-service interface can enable customers to modify certain parameters (e.g., increase or decrease resource usage limits) after a service is initiated.
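As an illustration of the kinds of selections the self-service interface module 226 collects, the following hypothetical configuration payload combines the quota, rate-limit, alert, and reporting options described above; every field name and value here is an assumption made for illustration, not a format defined by the system.

```python
# Hypothetical self-service configuration payload; all names are illustrative.
service_config = {
    "name": "timeline-cache",            # example application name
    "cluster": "production-east",        # cluster selected via FIG. 4B-style UI
    "quotas": {                          # storage/throughput quotas (FIG. 5C)
        "storage_gb": 2048,
        "peak_keys_written_per_sec": 10_000,
        "peak_partition_keys_read_per_sec": 50_000,
        "peak_local_keys_read_per_sec": 75_000,
    },
    "rate_limits": {"queries_per_sec": 1_000},                  # rate limiting
    "alerts": [{"metric": "queries_per_sec", "threshold_pct": 90}],
    "reports": {"interval": "daily"},                           # reporting
}
```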
The self-service interface module 226 further enables the customer to select various consistency model configurations for a service. In general, distributed systems support a specific consistency model. When data is stored in a distributed system, the data must propagate among multiple computer resources or clusters before it achieves replica convergence across the distributed system. Certain consistency models have benefits and drawbacks when compared to other consistency models. As discussed herein, an eventually consistent database enables users to store and retrieve data without delay. However, because reads are not delayed until replica convergence, there is no guarantee that the retrieved data is completely up-to-date (i.e., consistent across the distributed system). In contrast, a strongly consistent database requires that all resources or clusters have the same view of stored data. Accordingly, when a user retrieves certain data, that data is guaranteed to be up-to-date, though at the cost of higher read latency, lower read throughput, and the potential for more failures.
For most tasks and applications supported by a given service, an eventually consistent database is sufficient. For example, a user of the Twitter® social networking service may not want a long delay when opening the “tweet stream” associated with his or her account, but also may not mind (or may not notice) that a Tweet® posted to Twitter® in the last fractions of a second is not yet presented in the tweet stream. However, there may be some tasks for which a strongly consistent database is preferred. For example, Twitter® may require a strongly consistent database when storing Twitter® handles (i.e., usernames) so as to ensure that the same handle will not be assigned to multiple end users.
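The patent does not prescribe how a coordinator decides whether a given configuration behaves as strongly or eventually consistent; the sketch below uses the common quorum rule (read replicas + write replicas > replication factor) as one illustrative way such a classification could be made.

```python
# Illustrative only: classifies a quorum configuration using the common
# overlap rule, which is an assumption here, not a rule stated in the patent.

def consistency_mode(replication_factor: int, read_replicas: int,
                     write_replicas: int) -> str:
    """Classify a read/write quorum configuration."""
    if read_replicas + write_replicas > replication_factor:
        # Every read quorum overlaps every write quorum, so reads always
        # observe the latest acknowledged write: strong consistency.
        return "strong"
    # Quorums may miss the latest write; replicas converge only eventually.
    return "eventual"

print(consistency_mode(replication_factor=3, read_replicas=1,
                       write_replicas=1))  # "eventual": fast, may be stale
print(consistency_mode(replication_factor=3, read_replicas=2,
                       write_replicas=2))  # "strong": up-to-date, slower reads
```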
Referring back to FIG. 2, the interfaces layer 206 supports a coordinator module 228 that is configured to interface with the management services module 224 and manage consistency models within the distributed database system. In particular, a customer may interface with the self-service interface module 226 to specify the consistency model, as well as various customization and configuration features associated therewith, for different applications and services to be supported by the distributed database system. The interfaces layer 206 may therefore enable the customer to input a consistency model configuration including various parameters such as consistency type, associated time periods, and associated replication factors.
To support multiple services and multiple consistency models associated therewith, the distributed database enables multiple customers to use the same resources or cluster, whereby each customer is allotted a certain amount of the resources or cluster. In some scenarios, a customer may actually need more resources than what is originally envisioned by the customer and/or what is originally allocated to the customer. A conventional system having resources dedicated for individual customer use would reject a request for resource capacity that exceeds the originally allocated amount. However, because a multi-tenant system concurrently supports multiple use cases for multiple customers, it is likely that one or more of the customers is below a corresponding allocated capacity at a given time. Accordingly, the management services module 224 supports a rate-limiting service operated by a QoS controller 240 to manage customer usage of the resources or clusters of the distributed database across many metrics and ensure that no one service affects others on the system. In particular, the rate-limiting service may limit usage by certain of the customers and, in some cases, dynamically reallocate certain resources for certain of the customers to effectively utilize the total amount of resources or clusters within the distributed database.
As an example, assume that the distributed database is supporting ten (10) customers for various use cases. Each of the ten (10) customers has a corresponding allocated amount of resources, whereby the sum of all of the allocated amounts of resources may constitute the total resource capacity of the distributed database. Assume that two of the customers are each requesting access to an amount of resources that exceeds their respective allocated amounts. In this scenario, the QoS controller 240 may compare the amount of resources needed to support the outstanding requests (i.e., a sum of the resources needed to support the requests of all of the customers) to the total resource capacity to determine whether there is any available resource capacity. If there is available capacity, then at least one of the customers is not using the respective amount of resources allocated to that customer. Accordingly, to maximize the total resource capacity of the system, the QoS controller 240 can allocate a portion of the unused resources for use by the two customers according to the access requests. In contrast, if there is not available capacity, then the QoS controller 240 may reject the requests for the excess resource usage.
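A minimal sketch of the reallocation decision in this example, assuming per-customer usage and overage requests are tracked as simple numeric quantities (all names are illustrative):

```python
# Hedged sketch of the QoS controller's capacity check described above.

def grant_overage(current_usage: dict, requests: dict,
                  total_capacity: float) -> dict:
    """Grant overage requests only while aggregate demand fits capacity."""
    demand = sum(current_usage.values())
    granted = {}
    for customer, extra in requests.items():
        if demand + extra <= total_capacity:
            # Unused headroom exists somewhere in the cluster; lend it out.
            granted[customer] = extra
            demand += extra
        else:
            # No spare capacity remains: reject (or rate limit) the overage.
            granted[customer] = 0.0
    return granted

# Ten customers, two of whom request more than their allocations; the unused
# headroom of the idle customers covers the overage.
usage = {f"cust{i}": 50.0 for i in range(10)}          # 500 of, say, 1000 units
print(grant_overage(usage, {"cust0": 200.0, "cust1": 400.0}, 1000.0))
```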
The QoS controller 240 is capable of distinguishing among various properties of allocated resources, and managing allocations and requests relating thereto. In particular, various properties of resources may include storage space, network bandwidth, CPU usage, and others. As an example, a customer may request a limit of 1,000 queries per second, but in operation only send 100 queries per second. However, the amount of data per query may be very large and more than what the QoS controller 240 is expecting, such that the total amount of information completely saturates the network bandwidth for the resources allocated to the customer. In some cases, the QoS controller 240 may dynamically manage (e.g., rate limit) the allocated resources according to the network bandwidth of the queries even though the original request specified an amount of queries without indicating a corresponding data transmission amount. In other cases, the QoS controller 240 may identify an available portion of additional allocated resources capable of supporting the queries, and may dynamically reallocate the available portion to support the overage network bandwidth (thus negating the need to rate limit or deny the overage request).
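The following hypothetical check illustrates this kind of multi-dimensional enforcement: a request stays within its allocation only if every tracked property (queries, bandwidth, CPU) is under its limit, not just the one the customer originally quoted.

```python
# Hypothetical multi-metric limit check; metric names and values are
# illustrative, not defined by the patent.

LIMITS = {"queries_per_sec": 1_000, "bandwidth_mbps": 400, "cpu_cores": 8}

def within_limits(observed: dict) -> bool:
    """True only if usage is under the limit on every tracked dimension."""
    return all(observed.get(metric, 0) <= limit
               for metric, limit in LIMITS.items())

# Example: 100 qps is well under the 1,000 qps limit, but large payloads can
# still saturate the network bandwidth allocated to the customer.
print(within_limits({"queries_per_sec": 100, "bandwidth_mbps": 950}))  # False
```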
After a customer specifies the parameters for a service via the various interfaces, the QoS controller 240 determines whether the required amount of resources of the distributed database is available to support the service. If the required amount of resources is available, the QoS controller 240 instantiates the resources and enables the service to access the resources, whereby the coordinator module 228 manages the corresponding resources of the distributed database according to the configured consistency model. Of course, the customer can request additional resources for a particular application, which the QoS controller 240 may additionally configure or instantiate.
FIG. 6 is a flowchart of a method 600 for an electronic device to manage resource usage by a plurality of customers within a multi-tenant distributed database. It should be appreciated that the electronic device may correspond to one or more of the various modules or entities, or combinations thereof, discussed herein. For example, one of the coordinator machines 120, or more specifically the management services module 224, may implement and facilitate the method 600. Depending upon the implementation, the order of the steps of the flowchart of FIG. 6 may differ from the version shown, and certain steps may be eliminated and/or other steps added. According to embodiments, a customer may interface (e.g., via the self-service interface 226) with the electronic device to specify parameters for a service as well as consistency model configurations for the service.
The method 600 begins with the electronic device allocating (670), to a first customer for operating a first service, a first portion of a plurality of computer resources within a multi-tenant distributed database. Similarly, the electronic device may allocate (672), to a second customer for operating a second service, a second portion of the plurality of computer resources within the multi-tenant distributed database. In some implementations, each of the first customer and the second customer may submit a request (e.g., via a self-service user interface) for operation of the respective first and second services, along with parameters associated with service operation such as storage space, peak keys written per second, peak partitioning keys read per second, peak local keys read per second, and/or others. The electronic device may accordingly identify or calculate the first and second portions based on the respective amounts of resources needed to support the respective first and second services and the parameters thereof.
In allocating the first and second portions (and optionally any additional portions), the electronic device may allocate the entirety of the plurality of computer resources, or may leave a portion of the plurality of computer resources unallocated. In other embodiments, the electronic device may over-allocate the plurality of computer resources, such as by allocating a single portion of the plurality of computer resources for operation of each of the first service and the second service. After the resource allocation, the first and second customers may operate the respective first and second services within the respective first and second allocated portions.
During operation of the first and second services, either the first customer or the second customer may exceed the respective resources of the first portion or the second portion. Accordingly, the electronic device may determine (674) whether the first customer (or the second customer), in operating the first service (or the second service), has exceeded the allocated first portion (or allocated second portion). In some implementations, the electronic device may detect when operation of either the first service or the second service is approaching the respective resource limit of the first portion or the second portion, or is otherwise within a threshold amount or percentage of the respective resource limit. For example, if the first service is allocated 2 terabytes (TB) of storage and, during operation, has used 1.9 TB of storage (i.e., is within 5%, or 0.1 TB, of the 2 TB limit), then the electronic device detects that the resource usage is within the threshold amount or percentage.
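As an illustrative sketch of this threshold detection (step 674), assuming a fractional headroom threshold such as the 5% in the example above:

```python
# Sketch of the threshold check; the function name and the default threshold
# are assumptions chosen to match the 5% example in the text.

def approaching_limit(used: float, allocated: float,
                      threshold_pct: float = 0.05) -> bool:
    """True when the remaining headroom falls within the threshold fraction."""
    return (allocated - used) <= allocated * threshold_pct

print(approaching_limit(used=1.9, allocated=2.0))  # True: 0.1 TB of 2 TB left
```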
If the electronic device determines that the first customer is not exceeding the allocated first portion (“NO”), the electronic device may continue to monitor resource usage. In contrast, if the electronic device determines that the first customer is exceeding the allocated first portion (“YES”), the electronic device may optionally temporarily prevent (676) the first customer from exceeding the allocated first portion or otherwise rate limit operation of the first service. In particular, the electronic device may prevent or deny whatever operation the first service is attempting that would result in exceeding the allocated first portion (or in exceeding any threshold or limit). The electronic device may also optionally generate (678) a service ticket and provide the service ticket to a moderator. The service ticket may indicate any resource usage, as well as any instance of, or attempt at, exceeding the allocated first portion. The moderator may choose to take various actions depending on the content of the service ticket.
To accommodate the increased resource need, the electronic device may identify (680) an available portion of the second portion allocated to the second customer. In particular, during current operation of the second service, the second customer may not be using the full resource allocation of the second portion and thus there may be an unused or available portion of the second portion.
The electronic device may determine (682) whether a combination of the first portion and the available portion exceeds the total capacity of the plurality of computer resources. In implementations, if the combination exceeds the total capacity (“YES”), then the plurality of computer resources may be over-allocated, and the electronic device may reduce (686) the available portion by removing some of the resources from, or otherwise not including them in, the available portion. In particular, the electronic device may determine a subset of the available portion that would not result in the plurality of computer resources being over-allocated. If the combination does not exceed the total capacity (“NO”), the electronic device may optionally identify (684) an available unallocated portion of the plurality of computer resources. The available unallocated portion may be in addition to the available portion of the second portion. Further, the electronic device may identify the available unallocated portion in various instances, such as when the available portion of the second portion is not sufficient to support the usage exceeding the allocated first portion. For example, if the electronic device determines that resources are needed to support 1,000 additional queries per second and the available portion of the second portion is able to support 800 queries per second, then the electronic device may additionally identify an unallocated portion capable of supporting the extra 200 queries per second. Thus, the electronic device may determine the total available resources to be the available portion of the second portion plus at least some of the available unallocated portion.
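A hedged sketch of steps 680 through 686, assuming simple numeric capacities; it caps the borrowed portion so the plurality of computer resources is never over-allocated, then tops up from any unallocated pool. The function and parameter names are illustrative.

```python
# Illustrative sketch of steps 680-686; not an API defined by the patent.

def find_extra_capacity(needed: float, second_available: float,
                        first_portion: float, total_capacity: float,
                        unallocated: float) -> float:
    """Return the capacity that can be lent to the first service."""
    # Step 686: reduce the borrowable portion if first portion + available
    # portion would exceed the total capacity (over-allocation).
    borrowable = max(0.0, min(second_available,
                              total_capacity - first_portion))
    extra = min(needed, borrowable)
    if extra < needed:
        # Step 684: e.g., 800 of 1,000 qps covered by the second portion;
        # take the remaining 200 from the unallocated pool when it exists.
        extra += min(needed - extra, unallocated)
    return extra

print(find_extra_capacity(needed=1000.0, second_available=800.0,
                          first_portion=2000.0, total_capacity=10000.0,
                          unallocated=500.0))  # 1000.0: 800 borrowed + 200 new
```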
After the electronic device identifies and optionally modifies (e.g., by reducing) the available portion of the second portion, the electronic device may dynamically allocate (688), to the first customer for operating the first service, at least some of the available portion. Additionally, the electronic device may optionally allocate the available unallocated portion to the first customer for operating the first service. As a result, the first customer, in operating the first service, is able to exceed the allocated first portion without being subject to a service denial or to rate limiting. In an optional embodiment, the electronic device may generate (690) a report indicating usage metrics associated with each computer resource included in at least one of the first portion and the second portion, and provide the report to a moderator associated with the multi-tenant distributed database for usage analysis. The electronic device may also periodically update the report to indicate updated usage metrics according to operation of the first service and/or second service within the multi-tenant distributed database.
FIG. 7 illustrates an example electronic device 781 in which the functionalities as discussed herein may be implemented. In some embodiments, the electronic device 781 may be one of the coordinator machines 120 and/or one of the clients 110 as discussed with respect to FIG. 1B. Generally, the electronic device 781 is a dedicated computer machine, workstation, or the like, including any combination of hardware and software components.
The electronic device 781 can include a processor 779 or other similar type of controller module or microcontroller, as well as a memory 795. The memory 795 can store an operating system 797 capable of facilitating the functionalities as discussed herein. The processor 779 can interface with the memory 795 to execute the operating system 797 and a set of applications 783. The set of applications 783 (which the memory 795 can also store) can include a self-service interface module 726 configured to facilitate the customer interaction functionalities as discussed herein, a management services module 724 configured to facilitate resource allocation, a reporting module 738 configured to facilitate reporting functionalities, and a QoS controller 740 configured to manage reallocation of the resources in a multi-tenant distributed database. It should be appreciated that the set of applications 783 can include one or more other applications or modules not depicted in FIG. 7.
Generally, the memory 795 can include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), and electrically erasable programmable read-only memory (EEPROM), as well as hard drives, flash memory, MicroSD cards, and others.
The electronic device 781 can further include a communication module 793 configured to interface with one or more external ports 785 to communicate data via one or more networks 702. For example, the communication module 793 can leverage the external ports 785 to establish a wide area network (WAN) or a local area network (LAN) for connecting the electronic device 781 to other components such as resources of a distributed database. According to some embodiments, the communication module 793 can include one or more transceivers functioning in accordance with IEEE standards, 3GPP standards, or other standards, and configured to receive and transmit data via the one or more external ports 785. More particularly, the communication module 793 can include one or more wireless or wired WAN and/or LAN transceivers configured to connect the electronic device 781 to WANs and/or LANs.
The electronic device 781 may further include a user interface 787 configured to present information to the user and/or receive inputs from the user. As illustrated in FIG. 7, the user interface 787 includes a display screen 791 and I/O components 789 (e.g., capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, cursor control devices, haptic devices, and others).
In general, a computer program product in accordance with an embodiment includes a computer-usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by the processor 779 (e.g., working in connection with the operating system 797) to facilitate the functions as described herein. In this regard, the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code, or the like (e.g., via C, C++, Java, ActionScript, Objective-C, JavaScript, CSS, XML, and/or others).
This disclosure is intended to explain how to fashion and use various embodiments in accordance with the technology rather than to limit the true, intended, and fair scope and spirit thereof. The foregoing description is not intended to be exhaustive or to be limited to the precise forms disclosed. Modifications or variations are possible in light of the above teachings. The embodiment(s) were chosen and described to provide the best illustration of the principle of the described technology and its practical application, and to enable one of ordinary skill in the art to utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the embodiments as determined by the appended claims, as may be amended during the pendency of this application for patent, and all equivalents thereof, when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled.

Claims (20)

The invention claimed is:
1. A computer-implemented method for managing resource usage by a plurality of customers within a multi-tenant distributed database, the method comprising:
allocating, to a first customer for operating a first service, a first portion of a plurality of computer resources associated with the multi-tenant distributed database;
allocating, to a second customer for operating a second service, a second portion of the plurality of computer resources associated with the multi-tenant distributed database;
detecting that the first customer, in operating the first service, is attempting to exceed the allocated first portion of the plurality of computer resources;
identifying, by a computer processor, an available portion of the second portion of the plurality of computer resources allocated to the second customer;
determining, by the computer processor, that the available portion is capable of handling operation of the first service by the first customer; and
dynamically allocating, to the first customer for operating the first service, at least some of the available portion.
2. The computer-implemented method of claim 1, wherein detecting that the first customer is attempting to exceed the allocated first portion comprises:
detecting that the first customer, in operating the first service, is attempting to exceed at least one of: a network bandwidth of the allocated first portion, a storage space of the allocated first portion, and a CPU usage of the first portion.
3. The computer-implemented method of claim 1, further comprising:
in response to detecting that the first customer is attempting to exceed the allocated first portion, temporarily preventing the first customer from exceeding the allocated first portion in operating the first service.
4. The computer-implemented method of claim 1, further comprising:
generating, by the computer processor, a report indicating usage metrics associated with each computer resource included in at least one of the first portion and the second portion.
5. The computer-implemented method of claim 4, further comprising:
updating, by the computer processor, the report to indicate updated usage metrics associated with each computer resource included in at least one of the first portion and the second portion; and
providing the updated report to a moderator associated with the multi-tenant distributed database for usage analysis.
6. The computer-implemented method of claim 1, further comprising:
receiving, from the first customer via a user interface, a selection of a rate limit for operating the first service, wherein the first portion of the plurality of computer resources is allocated according to the rate limit.
7. The computer-implemented method of claim 1, further comprising:
in response to detecting that the first customer is attempting to exceed the allocated first portion:
generating a service ticket indicating that the first customer is attempting to exceed the allocated first portion, and
providing the service ticket to a moderator associated with the multi-tenant distributed database.
8. The computer-implemented method of claim 1, wherein identifying the available portion comprises:
identifying the available portion of the second portion of the plurality of computer resources allocated to the second customer; and
identifying an available unallocated portion of the plurality of computer resources.
9. The computer-implemented method of claim 8, wherein dynamically allocating, to the first customer for operating the first service, the at least some of the available portion comprises:
dynamically allocating, to the first customer for operating the first service, (i) the at least some of the available portion and (ii) at least some of the available unallocated portion.
10. The computer-implemented method of claim 1, wherein identifying the available portion comprises:
identifying the available portion of the second portion of the plurality of computer resources allocated to the second customer;
determining that a combination of the first portion and the available portion of the second portion exceeds a total capacity of the plurality of computer resources; and
reducing the available portion of the second portion for dynamic allocation.
11. A system for managing resource usage by a plurality of customers within a multi-tenant distributed database, comprising:
a management services module adapted to interface with the multi-tenant distributed database and configured to:
allocate, to a first customer for operating a first service, a first portion of a plurality of computer resources associated with the multi-tenant distributed database, and
allocate, to a second customer for operating a second service, a second portion of the plurality of computer resources associated with the multi-tenant distributed database;
a quality of service (QoS) controller executed by a computer processor and configured to:
detect that the first customer, in operating the first service, is attempting to exceed the allocated first portion of the plurality of computer resources,
identify an available portion of the second portion of the plurality of computer resources allocated to the second customer,
determine that the available portion is capable of handling operation of the first service by the first customer, and
cause the management services module to dynamically allocate, to the first customer for operating the first service, at least some of the available portion.
12. The system of claim 11, wherein to detect that the first customer is attempting to exceed the allocated first portion, the quality of service (QoS) controller is configured to:
detect that the first customer, in operating the first service, is attempting to exceed at least one of: a network bandwidth of the allocated first portion, a storage space of the allocated first portion, and a CPU usage of the first portion.
13. The system of claim 11, wherein in response to detecting that the first customer is attempting to exceed the allocated first portion, the quality of service (QoS) controller is further configured to:
temporarily prevent the first customer from exceeding the allocated first portion in operating the first service.
14. The system of claim 11, further comprising:
a reporting module configured to generate a report indicating usage metrics associated with each computer resource included in at least one of the first portion and the second portion.
15. The system of claim 14, wherein the reporting module is further configured to:
update the report to indicate updated usage metrics associated with each computer resource included in at least one of the first portion and the second portion; and
provide the updated report to a moderator associated with the multi-tenant distributed database for usage analysis.
16. The system of claim 11, further comprising:
a user interface module configured to receive, from the first customer, a selection of a rate limit for operating the first service, wherein the management services module is configured to allocate the first portion of the plurality of computer resources according to the rate limit.
17. The system of claim 11, further comprising:
a reporting module configured to, in response to the quality of service (QoS) controller detecting that the first customer is attempting to exceed the allocated first portion:
generate a service ticket indicating that the first customer is attempting to exceed the allocated first portion, and
provide the service ticket to a moderator associated with the multi-tenant distributed database.
18. The system of claim 11, wherein to identify the available portion, the quality of service (QoS) controller is configured to:
identify the available portion of the second portion of the plurality of computer resources allocated to the second customer, and
identify an available unallocated portion of the plurality of computer resources.
19. The system of claim 18, wherein to cause the management services module to dynamically allocate, to the first customer for operating the first service, the at least some of the available portion, the quality of service (QoS) controller is configured to:
dynamically allocate, to the first customer for operating the first service, (i) the at least some of the available portion and (ii) at least some of the available unallocated portion.
20. The system of claim 11, wherein to identify the available portion, the quality of service (QoS) controller is configured to:
identify the available portion of the second portion of the plurality of computer resources allocated to the second customer,
determine that a combination of the first portion and the available portion of the second portion exceeds a total capacity of the plurality of computer resources, and
reduce the available portion of the second portion for dynamic allocation.