US20100313201A1 - Methods and apparatus for fast context switching in a virtualized system - Google Patents

Info

Publication number
US20100313201A1
US20100313201A1 (application number US 12/481,374)
Authority
US
United States
Prior art keywords
guest
virtual
page table
globally unique
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/481,374
Other versions
US8312468B2
Inventor
Matthew John Warton
Carl Frans VanSchaik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
General Dynamics Mission Systems Inc
Original Assignee
Open Kernel Labs Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Open Kernel Labs Inc filed Critical Open Kernel Labs Inc
Priority to US12/481,374, Critical patent US8312468B2
Assigned to OPEN KERNEL LABS reassignment OPEN KERNEL LABS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VAN SCHAIK, CARL FRANS, WARTON, MATTHEW JOHN
Priority to PCT/US2010/037372, patent WO2010144316A1
Publication of US20100313201A1, Critical patent US20100313201A1
Priority to US13/674,480, patent US20130074070A1
Publication of US8312468B2, Critical patent US8312468B2
Application granted, Critical
Assigned to GENERAL DYNAMICS C4 SYSTEMS, INC. reassignment GENERAL DYNAMICS C4 SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OPEN KERNEL LABS, INC.
Assigned to GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC. reassignment GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: GENERAL DYNAMICS C4 SYSTEMS, INC.
Assigned to GENERAL DYNAMICS MISSION SYSTEMS, INC reassignment GENERAL DYNAMICS MISSION SYSTEMS, INC MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC., GENERAL DYNAMICS MISSION SYSTEMS, LLC
Assigned to GENERAL DYNAMICS MISSION SYSTEMS, INC. reassignment GENERAL DYNAMICS MISSION SYSTEMS, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC.
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 Address translation
    • G06F12/1027 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45583 Memory management, e.g. access or allocation

Definitions

  • Domain IDs are used to ensure that a guest OS's mappings are only active when that guest OS is executing.
  • A unique domain ID is associated with each VM, and all guest OS page table entries are tagged with that domain ID, thereby tagging the TLB entries mapping the guest's kernel pages with that domain value. All other mappings are tagged with a predefined domain ID such as zero.
  • Any domain ID may be used. No two VMs share the same domain ID. If all of the domain IDs are used (e.g., 16), domain ID preemption and recycling may be used.
  • FIG. 6 illustrates an example of how the various tags (ASID, DID, global bit, kernel bit) are assigned, and also shows the pre-computed DACR value associated with each shadow page table.
  • This assignment allows the disclosed system to activate and deactivate memory mappings quickly by loading a small number of registers with pre-computed values.
  • While an application is executing in virtual user mode, the DACR has domain zero enabled and all other domains disabled. Thus, only the application's mappings (the ones with the matching ASID) are active.
  • When an application in the VM calls the guest OS, the guest OS enters virtual kernel mode. As illustrated in FIG. 7, the hypervisor changes the DACR, enabling the particular VM's domain (in addition to domain 0). This makes the guest kernel's mappings active and allows the guest kernel to access its own virtual memory as well as the application's virtual memory.
  • The DACR value for the guest kernel is typically pre-computed and stored as part of the virtual machine state.
  • When the guest OS switches from one of its applications to another, it does so by updating the virtual page table pointer. As illustrated in FIG. 8, the hypervisor virtualizes this operation and updates the ASID register with the value corresponding to the target application. The hypervisor also updates the physical page table pointer to point to the shadow page table belonging to that application's address space.
  • When the guest kernel returns to user mode, the hypervisor resets the DACR so that only domain zero is enabled, thereby de-activating the guest kernel mappings.
  • When the hypervisor switches between virtual machines (a world switch), the hypervisor loads the DACR either with the value enabling only domain zero (if the VM is in virtual user mode) or with the value that also enables the domain associated with the VM. The hypervisor also loads the ASID register and the physical page table pointer to point to the shadow page table associated with the appropriate application.
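The world-switch fast path described above amounts to a few register loads from pre-computed state. The following C sketch models that idea; the `vm_state` structure, the register stand-ins, and all names are illustrative assumptions, not the patent's actual implementation.

```c
#include <stdint.h>

/* Sketch of the world-switch fast path: everything needed is
   pre-computed in the VM state, so switching VMs is just a few
   register loads. The variables stand in for the DACR, the ASID
   register, and the page-table pointer register. */
struct vm_state {
    uint32_t dacr_user;       /* pre-computed DACR, virtual user mode   */
    uint32_t dacr_kernel;     /* pre-computed DACR, virtual kernel mode */
    int      in_kernel;       /* is the VM in virtual kernel mode?      */
    uint32_t cur_asid;        /* active application's ASID              */
    void    *cur_shadow_pt;   /* its shadow page table                  */
};

/* Stand-ins for the hardware registers. */
static uint32_t reg_dacr, reg_asid;
static void *reg_ttbr;

void world_switch(const struct vm_state *vm)
{
    reg_dacr = vm->in_kernel ? vm->dacr_kernel : vm->dacr_user;
    reg_asid = vm->cur_asid;
    reg_ttbr = vm->cur_shadow_pt;
}

/* Demo: switch to a VM that is in virtual kernel mode. */
int demo_world_switch(void)
{
    struct vm_state v = { .dacr_user = 0x1u, .dacr_kernel = 0x41u,
                          .in_kernel = 1, .cur_asid = 7,
                          .cur_shadow_pt = 0 };
    world_switch(&v);
    return reg_dacr == 0x41u && reg_asid == 7u;
}
```

Because no page tables are rebuilt and no TLB flush is required, the cost of the switch is independent of the size of either VM's address space.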
  • The shadow page tables are kept consistent with the guest page tables by hiding all of this machinery behind the abstraction of a virtual memory-management unit (virtual MMU).
  • The virtual TLB is of arbitrary size, so it rarely loses mappings and therefore avoids expensive misses on mappings replaced by mappings inserted later.
  • Address-space mappings are defined by setting up entries in the page table. The page table walker will find them and insert them into the TLB on demand. Removing or changing a memory mapping requires an explicit flush operation to invalidate any TLB entry that may contain the mapping.
  • The virtual TLB is represented by the shadow page tables. Entries in the virtual TLB are added either by the guest OS performing an explicit TLB insert or by the hypervisor performing a virtual page table walk at page fault time.
  • The hypervisor can also eagerly create shadow page table entries from the guest page table, for example when the guest sets the virtual page table pointer to a new page table. Entries are removed (or invalidated) when the guest OS explicitly flushes them from the virtual TLB. Shadow page-table entries may also be removed when the virtual TLB is full.
  • A flush operation may be targeted at a specific entry, a particular application's address space, or the whole virtual TLB.
  • When flushing a specific entry, the hypervisor invalidates that entry in the shadow page table and also flushes it from the physical TLB.
  • When flushing an application's address space, the hypervisor invalidates or removes the complete shadow page table and flushes the corresponding ASID from the physical TLB.
  • When flushing the whole virtual TLB, the hypervisor invalidates or removes all of the shadow page tables belonging to the VM that is performing the flush. This operation includes the guest OS's shadow page tables that belong to the particular VM and includes an appropriate flush of the physical TLB.
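The three flush scopes above can be sketched as a small dispatch routine. This is a simplified model, assuming hypothetical helper names; the shadow-table and physical-TLB operations are stubbed out as counters so that only the dispatch logic is shown.

```c
#include <stdint.h>

/* Sketch of the three virtual-TLB flush scopes. The helper names
   (shadow_invalidate_entry, shadow_remove_table, phys_tlb_flush)
   are illustrative assumptions; counters stand in for real work. */
enum flush_scope { FLUSH_ENTRY, FLUSH_ASID, FLUSH_ALL };

static int shadow_entries_removed, shadow_tables_removed, tlb_flushes;

static void shadow_invalidate_entry(uint32_t va) { (void)va; shadow_entries_removed++; }
static void shadow_remove_table(int asid)        { (void)asid; shadow_tables_removed++; }
static void phys_tlb_flush(void)                 { tlb_flushes++; }

void vtlb_flush(enum flush_scope scope, uint32_t va, int asid, int vm_nr_spaces)
{
    switch (scope) {
    case FLUSH_ENTRY:   /* one mapping: shadow entry plus physical TLB entry */
        shadow_invalidate_entry(va);
        phys_tlb_flush();
        break;
    case FLUSH_ASID:    /* one address space: its whole shadow table */
        shadow_remove_table(asid);
        phys_tlb_flush();
        break;
    case FLUSH_ALL:     /* whole VM: every shadow table it owns */
        for (int i = 0; i < vm_nr_spaces; i++)
            shadow_remove_table(i);
        phys_tlb_flush();
        break;
    }
}

/* Demo: one single-entry flush, then a whole-VM flush over 3 spaces. */
int demo_flush(void)
{
    vtlb_flush(FLUSH_ENTRY, 0x1000u, 0, 0);
    vtlb_flush(FLUSH_ALL, 0, 0, 3);
    return shadow_entries_removed == 1
        && shadow_tables_removed == 3
        && tlb_flushes == 2;
}
```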
  • Some processors using the ARM architecture provide dual page table pointers. This allows splitting the page table into two parts, one mapping the lower part and the other mapping the upper part of the address space. The boundary between the two parts may be configured through an MMU register. Each part has its own page table, pointed to by two separate page table registers, ttbr0 and ttbr1.
  • The OS is typically allocated the top of the address space and uses the ttbr1 page table pointer.
  • Applications typically use the lower part of the address space, as illustrated in FIG. 13.
  • On an application switch, the OS points ttbr0 to the new program's page table and leaves ttbr1 unmodified.
  • Without this split, each application requires a full-size L1 page table (e.g., 16 KiB on ARMv6).
  • With the split, the memory savings are, e.g., 8 KiB per application. In fact, the savings can be more: if an application is known to require less address space, an even smaller L1 page table can be used.
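The arithmetic behind that saving can be made explicit. On ARMv6, an L1 table spends 4 bytes per 1 MiB of covered address space, so 16 KiB covers the full 4 GiB; the 2 GiB user/kernel boundary in the demo below is an assumption for illustration, as is every name in this sketch.

```c
#include <stdint.h>

/* Worked example of the per-application memory saving from the split
   page table: an ARMv6 L1 table uses 4 bytes per 1 MiB of covered
   address space (16 KiB for the full 4 GiB). With the split, an
   application only needs an L1 table covering the user part. */
uint32_t l1_table_bytes(uint64_t covered_bytes)
{
    return (uint32_t)(covered_bytes / (1u << 20)) * 4u;  /* 4 B per 1 MiB */
}

/* Saving per application given the user/kernel boundary address. */
uint32_t saving_per_app(uint64_t boundary)
{
    uint64_t full = 1ull << 32;                 /* 4 GiB address space */
    return l1_table_bytes(full) - l1_table_bytes(boundary);
}
```

With a boundary at 2 GiB, `l1_table_bytes` gives 8 KiB for the user table versus 16 KiB for a full one, i.e. a saving of 8 KiB per application, matching the figure in the text; a lower boundary saves even more.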
  • The hypervisor may achieve similar performance and memory benefits to the ones described above by employing a virtual MMU that supports two page table pointers and a configurable boundary between the virtual user and kernel page tables.
  • The present system presents two virtual page table pointers to the guest OS, and makes use of the two physical page table pointers in the shadow page tables. Maintenance of the shadow page tables works as described above, except that the virtual kernel-user boundary determines whether an entry is inserted into the user or the kernel shadow page table.
  • Guest kernel entry (described above with reference to FIG. 7) and guest kernel exit (described above with reference to FIG. 9) are essentially unaffected by the use of dual page table pointers.
  • The virtualized page table pointer update (described above with reference to FIG. 8) works as described above if the guest OS changes the virtual ttbr0. However, if the guest OS changes the virtual ttbr1, the system removes the guest OS shadow page table and flushes the guest OS mappings from the TLB.
  • The world switch (described above with reference to FIG. 10) sets ttbr0 to point to the application's shadow page table (as described above). In addition, a world switch sets ttbr1 to point to the guest kernel shadow page table (obtained from the VM context).
  • The page fault handling described with reference to FIG. 11 continues to operate as described above, except that the fault address is compared to the virtual user-kernel boundary to determine which guest page table (user or kernel) to traverse, and which shadow page table to update.
  • The flush operations described with reference to FIG. 12 continue to operate as described above, except that when flushing an individual mapping, the address (compared to the virtual user-kernel boundary) determines from which shadow page table the entry is removed. In addition, when flushing a whole address space, only the user shadow page table is removed. Still further, when flushing the whole TLB, removing all shadow page tables means removing all user shadow page tables plus the kernel shadow page table for the particular VM.
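The boundary comparison that both the fault handler and the flush path rely on reduces to one predicate. A minimal C sketch, with illustrative names and an assumed 32-bit address space:

```c
#include <stdint.h>

/* Sketch of how the configurable virtual user/kernel boundary selects
   a shadow page table: addresses below the boundary belong to the
   per-application (user) shadow table, addresses at or above it to the
   VM's kernel shadow table. Names are illustrative assumptions. */
enum which_pt { USER_PT, KERNEL_PT };

enum which_pt select_shadow_pt(uint32_t addr, uint32_t boundary)
{
    return addr < boundary ? USER_PT : KERNEL_PT;
}
```

The same check is applied on a page fault (which guest table to walk, which shadow table to fill) and on a single-entry flush (which shadow table to invalidate).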

Abstract

The present disclosure provides methods and apparatus for fast context switching in a virtualized system. In the disclosed system, a globally unique application-space identifier is associated with each guest application. No two applications share the same application-space identifier, even if the two applications reside in different virtual machines. Domain identifiers are used to ensure that a guest's mappings are only active when that guest is executing. A unique domain identifier is associated with each virtual machine, and all translation lookaside buffer entries mapping the guest's kernel pages are thereby tagged with that domain value. All other mappings are tagged with a predefined domain such as zero. In addition, a virtual memory management unit may be configured to support two virtual page table pointers and a configurable boundary between a virtual user page table and a virtual kernel page table. In such an instance, the two virtual page table pointers are presented to a guest operating system, and two physical page table pointers are associated with the two virtual page table pointers.

Description

    TECHNICAL FIELD
  • The present application relates in general to virtual machines and more specifically to methods and apparatus for fast context switching in a virtualized system.
  • BACKGROUND
  • Some microprocessors, such as microprocessors using the ARM architecture, include a privileged (or kernel) mode and an unprivileged (or user) mode of execution. The privileged/kernel mode is typically reserved for a single operating system (OS).
  • Programs being executed by one of these processors, such as applications and the operating system, typically access memory using virtual addresses. A memory-management unit (MMU) translates these virtual addresses into physical addresses.
  • As illustrated in FIG. 1, each running application has a separate page table (PT) that maps virtual memory for that application to physical memory for that application. Typically, the OS is allocated a high portion of the virtual address space, with memory mappings that are only active when the processor is in kernel mode. The OS typically has access to its own memory mapping as well as each application's memory mapping. This allows the OS to access data in each application's memory. In contrast, each application typically only has access to its own memory mappings and its own memory.
  • In many non-virtualized systems, the change between allowing access to all of the memory and only some of the memory happens automatically when the processor mode changes between kernel mode and user mode. Each page table entry is tagged with a kernel bit to indicate whether the memory mapping is always valid or only valid in kernel mode.
  • Page tables reside in regular memory. In order to use a page table entry, that entry must be brought into a hardware structure in the processor called the translation lookaside buffer (TLB). The TLB is a limited-size cache of page-table entries (PTEs). Because the TLB is so small (typically fewer than 100 entries), TLB real estate is valuable.
  • When the TLB contains no mapping for an attempted memory access, some processors (e.g., ARM) traverse (“walk”) the appropriate page table to locate a suitable mapping. This is a time consuming process that degrades processor performance. If a suitable memory mapping is located, the page table walker inserts the memory mapping into the TLB. This removes a previous memory mapping from the TLB. Hence, reducing the number of TLB entries needed improves processor performance.
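The two-level walk described above can be sketched in C. This is a simplified software model rather than the hardware walker: the layout (a 4096-entry L1 table indexed by the top 12 bits of a 32-bit address, and 256-entry L2 tables of 4 KiB pages) follows the ARM scheme, but all structures and names are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified model of an ARM-style two-level page table walk:
   the top 12 VA bits index the L1 table, the next 8 bits index a
   256-entry L2 table of 4 KiB pages. */
#define L1_ENTRIES 4096
#define L2_ENTRIES 256
#define PAGE_SHIFT 12

typedef struct { uint32_t pfn; int valid; } l2_entry;
typedef struct { l2_entry *l2; } l1_entry;

/* Walk the tables; returns 1 and fills *pa on success, 0 on fault. */
int walk(l1_entry *l1, uint32_t va, uint32_t *pa)
{
    l1_entry *e1 = &l1[va >> 20];               /* top 12 bits */
    if (e1->l2 == NULL)
        return 0;                               /* L1 fault */
    l2_entry *e2 = &e1->l2[(va >> PAGE_SHIFT) & 0xff]; /* next 8 bits */
    if (!e2->valid)
        return 0;                               /* L2 fault */
    *pa = (e2->pfn << PAGE_SHIFT) | (va & 0xfff);
    return 1;
}

/* Demo: map the page at VA 0x00123000 to PFN 0x80, then translate. */
uint32_t demo_translate(void)
{
    static l2_entry l2[L2_ENTRIES];
    static l1_entry l1[L1_ENTRIES];
    l1[0x001].l2 = l2;                          /* VA top bits 0x001 */
    l2[0x23] = (l2_entry){ .pfn = 0x80, .valid = 1 };
    uint32_t pa = 0;
    walk(l1, 0x00123456u, &pa);
    return pa;
}
```

The walk touches memory twice per miss, which is exactly why the text stresses keeping TLB pressure low.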
  • The MMU includes the TLB, the page-table walker, a page-table pointer register, and other control registers explained in more detail below. Because the MMU automatically walks the page tables, the processor architecture dictates a format for the associated page tables.
  • Because there is only one kernel (one operating system) in a non-virtualized system, all kernel mappings are independent of which application is executing. While each application has its own page table, this means that second-level (L2) page tables that contain only kernel mappings may be shared between different applications' page tables. This is achieved by having the parts of all applications' L1 page tables that correspond to the kernel part of the address space point to the same L2 page tables, as illustrated in FIG. 2. Furthermore, kernel entries are marked as global, which ensures that only one entry will ever be in the TLB for each kernel mapping, thus reducing pressure on TLB real-estate. A person of ordinary skill in the art will readily appreciate that other kernel mappings may be used without departing from the scope or spirit of the disclosed system.
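The sharing of kernel L2 tables can be sketched as follows: the kernel half of every application's L1 table is aliased to one shared set of L2 tables, so each kernel mapping exists only once. The 2 GiB user/kernel split and all identifiers here are assumptions for illustration.

```c
#include <stddef.h>

/* Sketch of FIG. 2: the kernel part of every application's L1 table
   points at the same shared kernel L2 tables. The 2 GiB split and
   the names are illustrative assumptions. */
#define L1_ENTRIES   4096
#define KERNEL_FIRST 2048   /* L1 index where the kernel half begins */

typedef struct { int filler; } l2_table;
typedef struct { l2_table *l2; } l1_entry;

static l2_table *shared_kernel_l2[L1_ENTRIES - KERNEL_FIRST];

/* Initialize a new application's L1 table: user half private (empty),
   kernel half aliased to the shared kernel L2 tables. */
void init_app_l1(l1_entry *l1)
{
    for (size_t i = 0; i < KERNEL_FIRST; i++)
        l1[i].l2 = NULL;                               /* private part */
    for (size_t i = KERNEL_FIRST; i < L1_ENTRIES; i++)
        l1[i].l2 = shared_kernel_l2[i - KERNEL_FIRST]; /* shared part  */
}

/* Demo: two applications alias the very same kernel L2 table. */
int kernel_l2_shared(void)
{
    static l2_table k0;
    static l1_entry a[L1_ENTRIES], b[L1_ENTRIES];
    shared_kernel_l2[0] = &k0;
    init_app_l1(a);
    init_app_l1(b);
    return a[KERNEL_FIRST].l2 == &k0
        && b[KERNEL_FIRST].l2 == &k0
        && a[0].l2 == NULL;
}
```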
  • However, in a virtualized system, there are typically multiple kernels (multiple operating systems). As a result, fast context switching using these traditional memory mapping schemes becomes problematic.
  • SUMMARY
  • The present disclosure provides improved methods and apparatus for fast context switching in a virtualized system. In the example system disclosed, a globally unique application-space identifier is associated with each guest application. No two applications share the same application-space identifier, even if the two applications reside in different virtual machines. Domain identifiers are used to ensure that a guest OS's mappings are only active when that guest is executing. A unique domain identifier is associated with each virtual machine, and all translation lookaside buffer entries mapping the guest's kernel pages are thereby tagged with that domain value. All other mappings are tagged with a predefined domain such as zero. In addition, a virtual memory management unit may be configured to support two virtual page table pointers and a configurable boundary between a virtual user page table and a virtual kernel page table. In such an instance, the two virtual page table pointers are presented to a guest operating system, and two physical page table pointers are associated with the two virtual page table pointers.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 is a block diagram showing one example of how each running application has a separate page table that maps virtual memory for that application to physical memory for that application.
  • FIG. 2 is a block diagram showing one example of how having the parts of all applications' L1 page tables that correspond to the kernel part of the address space point to the same L2 page tables.
  • FIG. 3 is a block diagram showing an example of various address spaces and page tables.
  • FIG. 4 is a block diagram showing a simplified view of ARMv6 page table entries.
  • FIG. 5 is a block diagram showing a simplified example of the effect of the domain access control register.
  • FIG. 6 is a block diagram showing one example of how various tags are assigned and the pre-computed domain access control register value associated with each shadow page table.
  • FIG. 7 is a flowchart showing one example of how the hypervisor changes the domain access control register thereby enabling a particular virtual machine's domain in addition to a global domain.
  • FIG. 8 is a flowchart showing one example of how the hypervisor virtualizes and updates an application-space identification register with a value corresponding to a target application.
  • FIG. 9 is a flowchart showing one example of how a guest kernel returns to user mode and the hypervisor resets the domain access control register to only having the global domain enabled.
  • FIG. 10 is a flowchart showing one example of a world switch.
  • FIG. 11 is a flowchart showing one example of page fault handling and shadow page table construction.
  • FIG. 12 is a flowchart showing examples of flush operations.
  • FIG. 13 is a block diagram showing one example of dual page table pointers.
  • DETAILED DESCRIPTION
  • In a virtualized system, a hypervisor multiplexes hardware between multiple virtual machines (VMs), and presents each VM with the illusion of being a complete system. In this case, only the hypervisor has privileged access to all of the memory. The hypervisor provides a virtual kernel mode for multiple virtualized operating systems running on the single processor.
  • FIG. 3 illustrates an example of various address spaces and page tables. In this example, there are multiple VMs. Each VM includes multiple applications, and each application has its own page table. The application page table that is active for each VM is determined by that VM's OS pointing a virtual page-table pointer at the active page table.
  • The page tables maintained by the guest OS map addresses from the virtual address space as seen by the guest application or OS (called guest virtual addresses) to what the guest experiences as physical memory (guest physical address space). The hypervisor translates these guest physical addresses to real physical addresses.
  • The hypervisor preferably maintains a shadow page table, which is constructed from the guest OS's page table by translating the guest physical addresses (contained in the guest page tables) into real physical addresses. The page table pointer for each virtual OS is virtualized by the hypervisor pointing the physical page table pointer at the appropriate shadow page table.
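The translation step that builds a shadow entry can be sketched in C. The flat guest-physical-to-real frame map, the PTE layout, and the names are all illustrative assumptions; a real hypervisor would use the architecture's PTE format.

```c
#include <stdint.h>

/* Sketch of shadow page-table construction: a guest PTE maps a guest
   virtual page to a guest-physical frame; the hypervisor rewrites the
   frame number through its guest-physical-to-real map to produce the
   shadow PTE the hardware actually walks. */
#define GUEST_FRAMES 1024

typedef struct { uint32_t frame; uint32_t attrs; int valid; } pte;

/* Hypervisor-maintained map: guest-physical frame -> real frame. */
static uint32_t gp_to_real[GUEST_FRAMES];

pte make_shadow_pte(pte guest)
{
    pte shadow = guest;                          /* keep attributes   */
    if (guest.valid)
        shadow.frame = gp_to_real[guest.frame];  /* translate frame   */
    return shadow;
}

/* Demo: guest frame 5 is backed by real frame 0x1234. */
uint32_t demo_shadow_frame(void)
{
    gp_to_real[5] = 0x1234u;
    pte guest = { 5, 7, 1 };
    return make_shadow_pte(guest).frame;
}
```

Pointing the physical page table pointer at tables built this way is what lets the guest's own tables stay untouched while the hardware sees only real physical addresses.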
  • FIG. 4 illustrates a simplified view of ARMv6 page table entries (PTEs). The domain ID, which tags mappings and therefore logically belongs in the L2 PT, is stored in the L1 PT.
  • The MMU includes an application-space ID (ASID) register. TLB entries are tagged with a particular ASID value, and are only active if that tag matches the content of the processor's ASID register. TLB entries can also be marked global. Global TLB entries are active irrespective of the value of the ASID register. Typically, a small number of different ASID values are supported (e.g., 128-256).
  • The MMU also includes a domain access control register (DACR). A TLB entry is also tagged with a domain ID, and the DACR specifies for each domain whether TLB entries tagged with that domain ID are presently active. Only a relatively small number of domains are supported (e.g., 16). The effect of the DACR is illustrated in FIG. 5.
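  • The DACR behavior of FIG. 5 can be modeled as a register holding sixteen 2-bit fields, one per domain. The helper names below are hypothetical, and a real ARM DACR also distinguishes "client" from "manager" access, which this sketch collapses into a single "active" notion:

```python
DACR_NO_ACCESS = 0b00   # TLB entries in this domain are inactive
DACR_CLIENT    = 0b01   # entries active, permission bits still checked

def domain_field(dacr, domain_id):
    """Extract the 2-bit access field for one of the 16 domains."""
    return (dacr >> (2 * domain_id)) & 0b11

def tlb_entry_active(entry_domain_id, dacr):
    """A TLB entry is active only if its domain is enabled in the DACR."""
    return domain_field(dacr, entry_domain_id) != DACR_NO_ACCESS
```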
  • To increase performance, it is desirable to switch quickly between different virtual machines (a world switch) within the processor, to switch quickly between virtual user mode and virtual kernel mode within each VM, to switch quickly between applications within each VM, and to give each virtual OS fast access to the memory of its applications. Because the virtualized kernel executes in user mode, just like applications, the kernel bit in the page table cannot be used to enable and disable the mappings of the virtualized kernel (guest OS).
  • The disclosed system quickly enables and disables guest kernel mappings by keeping these mappings as valid mappings in the TLB even when they are not needed. Rather than being flushed and re-inserted, the mappings are activated or deactivated quickly by modifying a register.
  • The number of TLB entries used is reduced by ensuring that an entry mapping a guest kernel page is valid irrespective of which of the guest's applications has invoked the virtual kernel. The number of TLB entries used is also reduced by enabling the guest kernel to share an application's TLB entries. However, a particular guest's mappings, whether kernel or application, may only be valid while that guest's virtual machine (VM) is executing; they must be inactive while a different VM is executing. Furthermore, a particular application's mappings may only be valid when that application is the active application in the guest OS.
  • A globally unique ASID value is associated with each guest application. No two applications share the same ASID, even if they reside in different VMs. If all of the ASIDs are used (e.g., 256), ASID preemption and recycling may be used. Guest kernels are not assigned an ASID, instead their mappings are marked global, meaning they are valid for all ASIDs.
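  • A globally unique ASID assignment with preemption and recycling might look like the following sketch. The FIFO recycling policy and the class and field names are assumptions made for illustration; the patent does not prescribe a particular policy:

```python
from collections import OrderedDict

class AsidAllocator:
    """Hand out globally unique ASIDs across all VMs; once the small
    hardware pool is exhausted, preempt the oldest assignment."""

    def __init__(self, pool_size=256):
        self.free = list(range(1, pool_size))  # ASID 0 reserved in this sketch
        self.owner = OrderedDict()             # asid -> (vm_id, app_id)

    def assign(self, vm_id, app_id):
        if self.free:
            asid = self.free.pop(0)
        else:
            # Recycle the oldest ASID; a real hypervisor must flush it
            # from the TLB before reuse so stale entries cannot match.
            asid, _victim = self.owner.popitem(last=False)
        self.owner[asid] = (vm_id, app_id)
        return asid
```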
  • Domain IDs are used to ensure that a guest OS's mappings are only active when that guest OS is executing. A unique domain ID is associated with each VM, and all guest OS page table entries are tagged with that domain ID, thereby tagging the TLB entries mapping the guest's kernel pages with that domain value. All other mappings are tagged with a predefined domain ID such as zero. However, a person of ordinary skill in the art will readily appreciate that any domain ID may be used. No two VMs share the same domain ID. If all of the domain IDs are used (e.g., 16), domain ID preemption and recycling may be used.
  • FIG. 6 illustrates an example of how the various tags (ASID, DID, global bit, kernel bit) are assigned, and also shows the pre-computed DACR value associated with each shadow page table. This assignment allows the disclosed system to activate and deactivate memory mappings quickly by loading a small number of registers with pre-computed values. Specifically, when the VM is executing in virtual user mode, the DACR has domain zero enabled, all other domains disabled. Thus, only the application's mappings (the ones with the matching ASID) are active.
  • When an application in the VM calls the guest OS, the guest OS enters virtual kernel mode. As illustrated in FIG. 7, the hypervisor changes the DACR, enabling the particular VM's domain (in addition to domain 0). This makes the guest kernel's mappings active and allows the guest kernel to access its own virtual memory as well as the application's virtual memory. The DACR value for the guest kernel is typically pre-computed and stored as part of the virtual machine state.
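  • The pre-computed DACR values reduce a virtual mode switch to a single register load. A sketch, assuming "client" access (0b01) for each enabled domain and domain 0 for all application mappings, as in the scheme above:

```python
def dacr_for(domains):
    """Pre-compute a DACR value with 'client' access (0b01) for each
    enabled domain and 'no access' (0b00) everywhere else."""
    value = 0
    for d in domains:
        value |= 0b01 << (2 * d)
    return value

APP_DOMAIN = 0  # domain used for all application mappings in this scheme

def user_mode_dacr():
    return dacr_for([APP_DOMAIN])

def kernel_mode_dacr(vm_domain):
    # Virtual kernel mode: the VM's own domain is enabled in addition
    # to domain 0, activating the guest kernel's TLB entries.
    return dacr_for([APP_DOMAIN, vm_domain])
```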
  • When the guest OS switches from one of its applications to another one of its applications, the guest OS does so by updating the virtual page table pointer. As illustrated in FIG. 8, the hypervisor virtualizes this and updates the ASID register with the value corresponding to the target application. The hypervisor also updates the physical page table pointer to point to the shadow page table belonging to that application space.
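  • Virtualizing the guest's page-table-pointer write can be sketched as below. The dictionary layout (`vm["apps"]` keyed by guest page-table address) is an assumed bookkeeping structure for illustration:

```python
def on_guest_ttb_write(mmu, vm, guest_pt_addr):
    """The guest OS wrote its virtual page table pointer to switch
    applications; the hypervisor translates that into loads of the
    ASID register and the physical page table pointer."""
    app = vm["apps"][guest_pt_addr]        # bookkeeping keyed by guest PT
    mmu["asid"] = app["asid"]              # activates this app's TLB entries
    mmu["ttbr"] = app["shadow_pt_addr"]    # walker now uses the shadow PT
```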
  • As illustrated in FIG. 9, when the guest kernel returns to user mode, the hypervisor resets the DACR to only domain zero being enabled. The hypervisor thereby de-activates the guest kernel mappings.
  • As illustrated in FIG. 10, when the hypervisor switches between virtual machines (world switch), the hypervisor loads the DACR either with the value enabling only domain zero (if the VM is in virtual user mode) or the one enabling also the domain associated with the VM. The hypervisor also loads the ASID register and physical page table pointer to point to the shadow page table associated with the appropriate application.
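  • Combining the pieces, a world switch reduces to a few register loads from pre-computed per-VM state. A sketch with assumed field names:

```python
def world_switch(mmu, target_vm):
    """Switch the MMU to another virtual machine by loading the ASID,
    physical page table pointer, and DACR for that VM's current state."""
    app = target_vm["current_app"]
    mmu["asid"] = app["asid"]
    mmu["ttbr"] = app["shadow_pt_addr"]
    # Pre-computed DACR: the VM's own domain is enabled only when the
    # target VM is in virtual kernel mode.
    mmu["dacr"] = (target_vm["kernel_dacr"]
                   if target_vm["in_virtual_kernel_mode"]
                   else target_vm["user_dacr"])
```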
  • As illustrated in FIG. 11, the shadow page tables are kept consistent with the guest page tables by hiding all this machinery behind the abstraction of a virtual memory-management unit (MMU). This includes a pre-defined page table format (possibly, but not necessarily, the same as the native PT format of the processor), a page table pointer which tells the virtual hardware PT walker where to start, and a TLB that is a cache of memory mappings. Unlike a physical TLB, the virtual TLB is of an arbitrary size, so it rarely loses mappings and therefore avoids expensive misses on mappings replaced by mappings inserted later.
  • Address-space mappings are defined by setting up entries in the page table. The page table walker will find them and insert them into the TLB on demand. Removing or changing a memory mapping uses an explicit flush operation to invalidate any TLB entry that may contain the mapping.
  • The virtual TLB is represented by shadow page tables. Entries in the virtual TLB are added either by the guest OS performing an explicit TLB insert or by the hypervisor performing a virtual page table walk at page fault time. Optionally, the hypervisor can eagerly create shadow page table entries from the guest page table, for example when the guest sets the virtual page table pointer to a new page table. Entries are removed (or invalidated) when the guest OS explicitly flushes them from the virtual TLB. Shadow page-table entries may also be removed when the virtual TLB is full.
  • As illustrated in FIG. 12, a flush operation may be targeted at a specific entry, at a particular application's address space, or at the whole virtual TLB. When the flush operation is targeted at a specific entry, the hypervisor invalidates that entry in the shadow page table and also flushes it from the physical TLB. When the flush operation is targeted at a particular application's address space, the hypervisor invalidates or removes the complete shadow page table and flushes the corresponding ASID from the physical TLB. When the flush operation is targeted at the whole virtual TLB, the hypervisor invalidates or removes all of the shadow page tables belonging to the VM performing the flush, including the guest OS's shadow page tables for that VM, and performs an appropriate flush of the physical TLB.
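  • The three flush scopes can be modeled over dictionary shadow tables and a physical TLB represented as a set of (asid, vaddr) pairs; these representations are assumptions of this sketch, not hardware structures:

```python
def flush(shadow_pts, phys_tlb, scope, asid=None, vaddr=None):
    """shadow_pts: {asid: {vaddr: frame}} for one VM;
    phys_tlb: set of (asid, vaddr) entries currently cached."""
    if scope == "entry":
        # Single entry: invalidate it in the shadow PT and physical TLB.
        shadow_pts[asid].pop(vaddr, None)
        phys_tlb.discard((asid, vaddr))
    elif scope == "address_space":
        # One application: drop its whole shadow PT and its ASID's entries.
        shadow_pts[asid].clear()
        phys_tlb -= {e for e in phys_tlb if e[0] == asid}
    elif scope == "all":
        # Whole virtual TLB: everything belonging to this VM.
        for a in shadow_pts:
            shadow_pts[a].clear()
        phys_tlb -= {e for e in phys_tlb if e[0] in shadow_pts}
```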
  • Some processors using the ARM architecture provide dual page table pointers. This allows the page table to be split into two parts, one mapping the lower part and the other mapping the upper part of the address space. The boundary between the two parts may be configured through an MMU register. Each part has its own page table, pointed to by two separate page table registers, ttbr0 and ttbr1.
  • Native operating systems typically use these separate parts of memory to keep kernel and user page tables separate. The OS is typically allocated in the top of the address space and uses the ttbr1 page table pointer. Applications typically use the lower part of the address space, as illustrated in FIG. 13. On a context switch, the OS points ttbr0 to the new program's page table, and leaves ttbr1 unmodified.
  • This approach has a performance benefit. When a user process is created, the kernel part of the L1 page table would normally have to be filled with pointers to the already existing kernel L2 page tables. This step is not required in the dual page table scheme, as the kernel page tables are kept separate.
  • This method also saves memory. With a single page table (containing user and kernel mappings), each application requires a full-size L1 page table (e.g., 16 KiB on ARMv6). With dual page table pointers, this is reduced because the kernel page table is always the same, and a smaller L1 page table suffices for each user process. In the typical case that the address space is split half-and-half between user and kernel, the savings are 8 KiB per application. In fact, the savings can be greater: if an application is known to require less address space, an even smaller L1 page table can be used.
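  • The 16 KiB and 8 KiB figures follow from ARMv6 L1 geometry: each 4-byte L1 entry maps a 1 MiB section, so the table size scales with the address space it covers. A check of the arithmetic:

```python
SECTION_SIZE = 1 << 20    # each ARMv6 L1 entry maps one 1 MiB section
L1_ENTRY_BYTES = 4

def l1_table_bytes(covered_bytes):
    """Size of an L1 page table covering `covered_bytes` of address space."""
    return (covered_bytes // SECTION_SIZE) * L1_ENTRY_BYTES

full_table = l1_table_bytes(1 << 32)   # single-table scheme: whole 4 GiB
user_half  = l1_table_bytes(1 << 31)   # dual-pointer scheme: 2 GiB user half
saving_per_app = full_table - user_half
```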
  • The hypervisor may achieve performance and memory benefits similar to those described above by employing a virtual MMU that supports two page table pointers and a configurable boundary between the virtual user and kernel page tables. The present system presents two virtual page table pointers to the guest OS, and makes use of the two physical page table pointers in the shadow page tables. Maintenance of the shadow page tables works as described above, except that the virtual kernel-user boundary determines whether an entry is inserted into the user or kernel shadow page table.
  • More specifically, guest kernel entry (described above with reference to FIG. 7) and guest kernel exit (described above with reference to FIG. 9) are essentially unaffected by the use of dual page table pointers. The virtualized PT pointer update (described above with reference to FIG. 8) works as described above if the guest OS changes the virtual ttbr0. However, if the guest OS changes the virtual ttbr1, the system removes the guest OS shadow page table and flushes the guest OS mappings from the TLB.
  • The world switch (described above with reference to FIG. 10) sets ttbr0 to point to the application's shadow page table (as described above). However, in addition a world switch sets ttbr1 to point to the guest kernel shadow page table (obtained from the VM context).
  • The operation described with reference to FIG. 11 continues to operate as described above, except that the fault address is compared to the virtual user-kernel boundary to determine which guest page table (user or kernel) to traverse, and which shadow PT to update.
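  • With dual pointers, the virtual-TLB miss handler first compares the faulting address to the virtual user-kernel boundary. A sketch with assumed dictionary page tables and field names:

```python
def handle_virtual_tlb_miss(vm, fault_addr, boundary):
    """Walk the user or kernel guest page table depending on which side
    of the virtual user-kernel boundary the fault falls, and fill the
    matching shadow page table."""
    if fault_addr < boundary:
        guest_pt, shadow_pt = vm["user_guest_pt"], vm["user_shadow_pt"]
    else:
        guest_pt, shadow_pt = vm["kernel_guest_pt"], vm["kernel_shadow_pt"]
    gpa = guest_pt.get(fault_addr)
    if gpa is None:
        return False                     # no mapping: deliver fault to guest
    shadow_pt[fault_addr] = vm["gpa_to_machine"][gpa]
    return True                          # shadow filled: retry the access
```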
  • The operation described with reference to FIG. 12 continues to operate as described above, except that when flushing an individual mapping, the address (compared to the virtual user-kernel boundary) determines from which shadow page table the entry is removed. In addition, when flushing a whole address space, only the user shadow page table is removed. Still further, when flushing the whole TLB, removing all shadow page tables means removing all user shadow page tables plus the kernel shadow page table for the particular VM.
  • In summary, persons of ordinary skill in the art will readily appreciate that methods and apparatus for fast context switching in a virtualized system have been disclosed. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the exemplary embodiments disclosed. Many modifications and variations are possible in light of the above teachings. It is intended that the scope of the invention be limited not by this detailed description of examples, but rather by the claims appended hereto.

Claims (27)

1. A method of context switching in a virtualized system, the method comprising:
associating a plurality of globally unique domain identifiers with a plurality of different virtual machines;
associating the plurality of globally unique domain identifiers with a plurality of translation lookaside buffer entries;
associating a plurality of globally unique application-space identifiers with a plurality of guest applications, wherein the plurality of guest applications reside in different virtual machines from the plurality of different virtual machines; and
switching from one virtual machine in the plurality of virtual machines to another virtual machine in the plurality of virtual machines by updating at least one register associated with the plurality of globally unique domain identifiers.
2. The method of claim 1, including switching from one guest application in the plurality of guest applications to another guest application in the plurality of guest applications by updating at least one register associated with the plurality of globally unique application-space identifiers.
3. The method of claim 2, wherein the at least one register associated with the plurality of globally unique domain identifiers includes at least a first memory management unit register, and the at least one register associated with the plurality of globally unique application-space identifiers includes at least a second different memory management unit register.
4. The method of claim 1, wherein the plurality of globally unique domain identifiers are used to ensure that each guest operating system in a plurality of guest operating systems has memory mappings that are only active when that guest operating system is executing.
5. The method of claim 1, wherein the plurality of globally unique domain identifiers are used to map each guest operating system in a plurality of guest operating systems with pages associated with that domain value.
6. The method of claim 1, wherein a plurality of memory mappings are associated with a domain identifier indicative of virtually privileged memory access.
7. The method of claim 1, wherein a virtual memory management unit is configured to support two virtual page table pointers and a configurable boundary between a virtual user page table and a virtual kernel page table.
8. The method of claim 7, including presenting the two virtual page table pointers to a guest operating system, and associating two physical page table pointers with the two virtual page table pointers.
9. The method of claim 1, including executing the plurality of guest applications on an electronic device.
10. A virtualized system comprising:
a processor; and
a memory operatively coupled to the processor, wherein the processor:
associates a plurality of globally unique domain identifiers with a plurality of different virtual machines;
associates the plurality of globally unique domain identifiers with a plurality of translation lookaside buffer entries;
associates a plurality of globally unique application-space identifiers with a plurality of guest applications, wherein the plurality of guest applications reside in different virtual machines from the plurality of different virtual machines; and
switches from one virtual machine in the plurality of virtual machines to another virtual machine in the plurality of virtual machines by updating at least one register associated with the plurality of globally unique domain identifiers.
11. The virtualized system of claim 10, wherein the processor switches from one guest application in the plurality of guest applications to another guest application in the plurality of guest applications by updating at least one register associated with the plurality of globally unique application-space identifiers.
12. The virtualized system of claim 11, wherein the at least one register associated with the plurality of globally unique domain identifiers includes at least a first memory management unit register, and the at least one register associated with the plurality of globally unique application-space identifiers includes at least a second different memory management unit register.
13. The virtualized system of claim 10, wherein the plurality of globally unique domain identifiers are used to ensure that each guest operating system in a plurality of guest operating systems has memory mappings that are only active when that guest operating system is executing.
14. The virtualized system of claim 10, wherein the plurality of globally unique domain identifiers are used to map each guest operating system in a plurality of guest operating systems with pages associated with that domain value.
15. The virtualized system of claim 10, wherein a plurality of memory mappings are associated with a domain identifier indicative of privileged memory access.
16. The virtualized system of claim 10, wherein a virtual memory management unit is configured to support two virtual page table pointers and a configurable boundary between a virtual user page table and a virtual kernel page table.
17. The virtualized system of claim 16, wherein the processor presents the two virtual page table pointers to a guest operating system, and associates two physical page table pointers with the two virtual page table pointers.
18. The virtualized system of claim 10, wherein the processor executes the plurality of guest applications on an electronic device.
19. A computer readable storage device storing a software program structured to cause a processor to:
associate a plurality of globally unique domain identifiers with a plurality of different virtual machines;
associate the plurality of globally unique domain identifiers with a plurality of translation lookaside buffer entries;
associate a plurality of globally unique application-space identifiers with a plurality of guest applications, wherein the plurality of guest applications reside in different virtual machines from the plurality of different virtual machines; and
switch from one virtual machine in the plurality of virtual machines to another virtual machine in the plurality of virtual machines by updating at least one register associated with the plurality of globally unique domain identifiers.
20. The computer readable storage device of claim 19, wherein the processor switches from one guest application in the plurality of guest applications to another guest application in the plurality of guest applications by updating at least one register associated with the plurality of globally unique application-space identifiers.
21. The computer readable storage device of claim 20, wherein the at least one register associated with the plurality of globally unique domain identifiers includes at least a first memory management unit register, and the at least one register associated with the plurality of globally unique application-space identifiers includes at least a second different memory management unit register.
22. The computer readable storage device of claim 19, wherein the plurality of globally unique domain identifiers are used to ensure that each guest operating system in a plurality of guest operating systems has memory mappings that are only active when that guest operating system is executing.
23. The computer readable storage device of claim 19, wherein the plurality of globally unique domain identifiers are used to map each guest operating system in a plurality of guest operating systems with pages associated with that domain value.
24. The computer readable storage device of claim 19, wherein a plurality of memory mappings are associated with a domain identifier indicative of privileged memory access.
25. The computer readable storage device of claim 19, wherein a virtual memory management unit is configured to support two virtual page table pointers and a configurable boundary between a virtual user page table and a virtual kernel page table.
26. The computer readable storage device of claim 25, wherein the processor presents the two virtual page table pointers to a guest operating system, and associates two physical page table pointers with the two virtual page table pointers.
27. The computer readable storage device of claim 19, wherein the processor executes the plurality of guest applications on an electronic device.
US12/481,374 2009-06-09 2009-06-09 Methods and apparatus for fast context switching in a virtualized system Active 2031-01-16 US8312468B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/481,374 US8312468B2 (en) 2009-06-09 2009-06-09 Methods and apparatus for fast context switching in a virtualized system
PCT/US2010/037372 WO2010144316A1 (en) 2009-06-09 2010-06-04 Methods and apparatus for fast context switching in a virtualized system
US13/674,480 US20130074070A1 (en) 2009-06-09 2012-11-12 Methods and apparatus for fast context switching in a virtualized system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/481,374 US8312468B2 (en) 2009-06-09 2009-06-09 Methods and apparatus for fast context switching in a virtualized system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/674,480 Continuation US20130074070A1 (en) 2009-06-09 2012-11-12 Methods and apparatus for fast context switching in a virtualized system

Publications (2)

Publication Number Publication Date
US20100313201A1 true US20100313201A1 (en) 2010-12-09
US8312468B2 US8312468B2 (en) 2012-11-13

Family

ID=43301693

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/481,374 Active 2031-01-16 US8312468B2 (en) 2009-06-09 2009-06-09 Methods and apparatus for fast context switching in a virtualized system
US13/674,480 Abandoned US20130074070A1 (en) 2009-06-09 2012-11-12 Methods and apparatus for fast context switching in a virtualized system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/674,480 Abandoned US20130074070A1 (en) 2009-06-09 2012-11-12 Methods and apparatus for fast context switching in a virtualized system

Country Status (2)

Country Link
US (2) US8312468B2 (en)
WO (1) WO2010144316A1 (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110167422A1 (en) * 2010-01-05 2011-07-07 Sungkyunkwan University Foundation For Corporate Collaboration Virtualization apparatus
US20110231614A1 (en) * 2010-03-18 2011-09-22 Oracle International Corporation Accelerating memory operations using virtualization information
US20120151117A1 (en) * 2010-12-13 2012-06-14 Vmware, Inc. Virtualizing processor memory protection with "domain track"
US20120151168A1 (en) * 2010-12-13 2012-06-14 Vmware, Inc. Virtualizing processor memory protection with "l1 iterate and l2 swizzle"
US20120151116A1 (en) * 2010-12-13 2012-06-14 Vmware, Inc. Virtualizing processor memory protection with "l1 iterate and l2 drop/repopulate"
US20130117530A1 (en) * 2011-11-07 2013-05-09 Electronics And Telecommunications Research Institute Apparatus for translating virtual address space
WO2013101378A1 (en) * 2011-12-30 2013-07-04 Advanced Micro Devices, Inc. Instruction fetch translation lookaside buffer management to support host and guest o/s translations
WO2013112151A1 (en) * 2012-01-26 2013-08-01 Empire Technology Development Llc Data center with continuous world switch security
CN103257936A (en) * 2012-02-17 2013-08-21 联想(北京)有限公司 Memory mapping method and memory mapping module
CN104050415A (en) * 2013-03-15 2014-09-17 英特尔公司 Robust and High Performance Instructions for System Call
US20150234718A1 (en) * 2011-10-13 2015-08-20 Mcafee, Inc. System and method for kernel rootkit protection in a hypervisor environment
US20160203014A1 (en) * 2015-01-08 2016-07-14 International Business Machines Corporaiton Managing virtual machines using globally unique persistent virtual machine identifiers
US9946562B2 (en) 2011-10-13 2018-04-17 Mcafee, Llc System and method for kernel rootkit protection in a hypervisor environment
GB2563879A (en) * 2017-06-28 2019-01-02 Advanced Risc Mach Ltd Realm identifier comparison for translation cache lookup
US20200097413A1 (en) * 2018-09-25 2020-03-26 Ati Technologies Ulc External memory based translation lookaside buffer
CN111381879A (en) * 2018-12-31 2020-07-07 华为技术有限公司 Data processing method and device
US10761984B2 (en) * 2018-07-27 2020-09-01 Vmware, Inc. Using cache coherent FPGAS to accelerate remote access
US10891238B1 (en) 2019-06-28 2021-01-12 International Business Machines Corporation Dynamically joining and splitting dynamic address translation (DAT) tables based on operational context
US10970224B2 (en) * 2019-06-28 2021-04-06 International Business Machines Corporation Operational context subspaces
US11074195B2 (en) 2019-06-28 2021-07-27 International Business Machines Corporation Access to dynamic address translation across multiple spaces for operational context subspaces
US11099871B2 (en) 2018-07-27 2021-08-24 Vmware, Inc. Using cache coherent FPGAS to accelerate live migration of virtual machines
US11126464B2 (en) 2018-07-27 2021-09-21 Vmware, Inc. Using cache coherent FPGAS to accelerate remote memory write-back
US11231949B2 (en) 2018-07-27 2022-01-25 Vmware, Inc. Using cache coherent FPGAS to accelerate post-copy migration
CN114595164A (en) * 2022-05-09 2022-06-07 支付宝(杭州)信息技术有限公司 Method and apparatus for managing TLB cache in virtualized platform
US20240095184A1 (en) * 2022-09-21 2024-03-21 Advanced Micro Devices, Inc. Address Translation Service Management
US11947458B2 (en) 2018-07-27 2024-04-02 Vmware, Inc. Using cache coherent FPGAS to track dirty cache lines

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5697206B2 (en) * 2011-03-31 2015-04-08 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation System, method and program for protecting against unauthorized access
US9946566B2 (en) 2015-09-28 2018-04-17 Intel Corporation Method and apparatus for light-weight virtualization contexts
CN108139925B (en) 2016-05-31 2022-06-03 安华高科技股份有限公司 High availability of virtual machines
WO2017209876A1 (en) 2016-05-31 2017-12-07 Brocade Communications Systems, Inc. Buffer manager
US10387184B2 (en) 2016-11-15 2019-08-20 Red Hat Israel, Ltd. Address based host page table selection
CN109766286A (en) * 2018-11-26 2019-05-17 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) A kind of memory pool access method and device
CN110221990B (en) * 2019-04-26 2021-10-08 奇安信科技集团股份有限公司 Data storage method and device, storage medium and computer equipment
US11422944B2 (en) * 2020-08-10 2022-08-23 Intel Corporation Address translation technologies

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6076141A (en) * 1996-01-24 2000-06-13 Sun Microsytems, Inc. Look-up switch accelerator and method of operating same
US6233668B1 (en) * 1999-10-27 2001-05-15 Compaq Computer Corporation Concurrent page tables
US6610657B1 (en) * 1996-11-21 2003-08-26 Promega Corporation Alkyl peptide amides and applications
US20060259732A1 (en) * 2005-05-12 2006-11-16 Microsoft Corporation Enhanced shadow page table algorithms
US20060282461A1 (en) * 2005-06-10 2006-12-14 Microsoft Corporation Object virtualization
US7409487B1 (en) * 2003-06-30 2008-08-05 Vmware, Inc. Virtualization system for computers that use address space indentifiers
US20090326192A1 (en) * 2008-04-08 2009-12-31 Aileron Therapeutics, Inc. Biologically active peptidomimetic macrocycles
US20090327648A1 (en) * 2008-06-30 2009-12-31 Savagaonkar Uday R Generating multiple address space identifiers per virtual machine to switch between protected micro-contexts
US20100162235A1 (en) * 2008-12-18 2010-06-24 Vmware, Inc. Virtualization system with a remote proxy
US20110144303A1 (en) * 2008-04-08 2011-06-16 Aileron Therapeutics, Inc. Biologically Active Peptidomimetic Macrocycles
US20120082636A1 (en) * 2003-11-05 2012-04-05 Walensky Loren D Stabilized alpha helical peptides and uses thereof

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110167422A1 (en) * 2010-01-05 2011-07-07 Sungkyunkwan University Foundation For Corporate Collaboration Virtualization apparatus
US8793439B2 (en) * 2010-03-18 2014-07-29 Oracle International Corporation Accelerating memory operations using virtualization information
US20110231614A1 (en) * 2010-03-18 2011-09-22 Oracle International Corporation Accelerating memory operations using virtualization information
US9251102B2 (en) 2010-12-13 2016-02-02 Vmware, Inc. Virtualizing processor memory protection with “L1 iterate and L2 drop/repopulate”
US8621136B2 (en) * 2010-12-13 2013-12-31 Vmware, Inc. Virtualizing processor memory protection with “L1 iterate and L2 swizzle”
US20120151117A1 (en) * 2010-12-13 2012-06-14 Vmware, Inc. Virtualizing processor memory protection with "domain track"
US20120151116A1 (en) * 2010-12-13 2012-06-14 Vmware, Inc. Virtualizing processor memory protection with "l1 iterate and l2 drop/repopulate"
US8489800B2 (en) * 2010-12-13 2013-07-16 Vmware, Inc. Virtualizing processor memory protection with “domain track”
US8832351B2 (en) * 2010-12-13 2014-09-09 Vmware, Inc. Virtualizing processor memory protection with “L1 iterate and L2 drop/repopulate”
US20120151168A1 (en) * 2010-12-13 2012-06-14 Vmware, Inc. Virtualizing processor memory protection with "l1 iterate and l2 swizzle"
US20150234718A1 (en) * 2011-10-13 2015-08-20 Mcafee, Inc. System and method for kernel rootkit protection in a hypervisor environment
US9465700B2 (en) * 2011-10-13 2016-10-11 Mcafee, Inc. System and method for kernel rootkit protection in a hypervisor environment
US9946562B2 (en) 2011-10-13 2018-04-17 Mcafee, Llc System and method for kernel rootkit protection in a hypervisor environment
US20130117530A1 (en) * 2011-11-07 2013-05-09 Electronics And Telecommunications Research Institute Apparatus for translating virtual address space
WO2013101378A1 (en) * 2011-12-30 2013-07-04 Advanced Micro Devices, Inc. Instruction fetch translation lookaside buffer management to support host and guest o/s translations
US9465748B2 (en) 2011-12-30 2016-10-11 Advanced Micro Devices, Inc. Instruction fetch translation lookaside buffer management to support host and guest O/S translations
US8789047B2 (en) 2012-01-26 2014-07-22 Empire Technology Development Llc Allowing world switches between virtual machines via hypervisor world switch security setting
WO2013112151A1 (en) * 2012-01-26 2013-08-01 Empire Technology Development Llc Data center with continuous world switch security
US9652272B2 (en) 2012-01-26 2017-05-16 Empire Technology Development Llc Activating continuous world switch security for tasks to allow world switches between virtual machines executing the tasks
CN103257936A (en) * 2012-02-17 2013-08-21 联想(北京)有限公司 Memory mapping method and memory mapping module
CN104050415A (en) * 2013-03-15 2014-09-17 英特尔公司 Robust and High Performance Instructions for System Call
US20160092227A1 (en) * 2013-03-15 2016-03-31 Intel Corporation Robust and High Performance Instructions for System Call
US9207940B2 (en) * 2013-03-15 2015-12-08 Intel Corporation Robust and high performance instructions for system call
US20140281437A1 (en) * 2013-03-15 2014-09-18 Baiju V. Patel Robust and High Performance Instructions for System Call
US20160203014A1 (en) * 2015-01-08 2016-07-14 International Business Machines Corporation Managing virtual machines using globally unique persistent virtual machine identifiers
GB2563879A (en) * 2017-06-28 2019-01-02 Advanced Risc Mach Ltd Realm identifier comparison for translation cache lookup
GB2563879B (en) * 2017-06-28 2019-07-17 Advanced Risc Mach Ltd Realm identifier comparison for translation cache lookup
US11113209B2 (en) 2017-06-28 2021-09-07 Arm Limited Realm identifier comparison for translation cache lookup
US11099871B2 (en) 2018-07-27 2021-08-24 Vmware, Inc. Using cache coherent FPGAS to accelerate live migration of virtual machines
US11231949B2 (en) 2018-07-27 2022-01-25 Vmware, Inc. Using cache coherent FPGAS to accelerate post-copy migration
US11947458B2 (en) 2018-07-27 2024-04-02 Vmware, Inc. Using cache coherent FPGAS to track dirty cache lines
US10761984B2 (en) * 2018-07-27 2020-09-01 Vmware, Inc. Using cache coherent FPGAS to accelerate remote access
US11126464B2 (en) 2018-07-27 2021-09-21 Vmware, Inc. Using cache coherent FPGAS to accelerate remote memory write-back
US11243891B2 (en) * 2018-09-25 2022-02-08 Ati Technologies Ulc External memory based translation lookaside buffer
US20200097413A1 (en) * 2018-09-25 2020-03-26 Ati Technologies Ulc External memory based translation lookaside buffer
CN111381879A (en) * 2018-12-31 2020-07-07 华为技术有限公司 Data processing method and device
US11074195B2 (en) 2019-06-28 2021-07-27 International Business Machines Corporation Access to dynamic address translation across multiple spaces for operational context subspaces
US10970224B2 (en) * 2019-06-28 2021-04-06 International Business Machines Corporation Operational context subspaces
US11321239B2 (en) 2019-06-28 2022-05-03 International Business Machines Corporation Dynamically joining and splitting dynamic address translation (DAT) tables based on operational context
US10891238B1 (en) 2019-06-28 2021-01-12 International Business Machines Corporation Dynamically joining and splitting dynamic address translation (DAT) tables based on operational context
CN114595164A (en) * 2022-05-09 2022-06-07 支付宝(杭州)信息技术有限公司 Method and apparatus for managing TLB cache in virtualized platform
US20240095184A1 (en) * 2022-09-21 2024-03-21 Advanced Micro Devices, Inc. Address Translation Service Management

Also Published As

Publication number Publication date
WO2010144316A1 (en) 2010-12-16
US8312468B2 (en) 2012-11-13
US20130074070A1 (en) 2013-03-21

Similar Documents

Publication Publication Date Title
US8312468B2 (en) Methods and apparatus for fast context switching in a virtualized system
US10303620B2 (en) Maintaining processor resources during architectural events
US7073042B2 (en) Reclaiming existing fields in address translation data structures to extend control over memory accesses
US7945761B2 (en) Maintaining validity of cached address mappings
US8015388B1 (en) Bypassing guest page table walk for shadow page table entries not present in guest page table
US9304915B2 (en) Virtualization system using hardware assistance for page table coherence
EP2548124B1 (en) Address mapping in virtualized processing system
US9619387B2 (en) Invalidating stored address translations
US6907600B2 (en) Virtual translation lookaside buffer
US8060722B2 (en) Hardware assistance for shadow page table coherence with guest page mappings
US7908646B1 (en) Virtualization system for computers having multiple protection mechanisms
US9846610B2 (en) Page fault-based fast memory-mapped I/O for virtual machines
US7734892B1 (en) Memory protection and address translation hardware support for virtual machines
US7823151B2 (en) Method of ensuring the integrity of TLB entries after changing the translation mode of a virtualized operating system without requiring a flush of the TLB
JP2007188121A (en) Method for speeding up change in page table address on virtual machine
US10339068B2 (en) Fully virtualized TLBs
US20140208034A1 (en) System And Method for Efficient Paravirtualized OS Process Switching

Legal Events

Date Code Title Description
AS Assignment

Owner name: OPEN KERNEL LABS, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WARTON, MATTHEW JOHN;VAN SCHAIK, CARL FRANS;SIGNING DATES FROM 20091123 TO 20100212;REEL/FRAME:024237/0393

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: GENERAL DYNAMICS C4 SYSTEMS, INC., ARIZONA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OPEN KERNEL LABS, INC.;REEL/FRAME:032985/0455

Effective date: 20140529

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: GENERAL DYNAMICS MISSION SYSTEMS, INC, VIRGINIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:GENERAL DYNAMICS MISSION SYSTEMS, LLC;GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC.;REEL/FRAME:039117/0839

Effective date: 20151209

Owner name: GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC

Free format text: MERGER;ASSIGNOR:GENERAL DYNAMICS C4 SYSTEMS, INC.;REEL/FRAME:039117/0063

Effective date: 20151209

AS Assignment

Owner name: GENERAL DYNAMICS MISSION SYSTEMS, INC., VIRGINIA

Free format text: MERGER;ASSIGNOR:GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC.;REEL/FRAME:039269/0131

Effective date: 20151209

Owner name: GENERAL DYNAMICS ADVANCED INFORMATION SYSTEMS, INC

Free format text: MERGER;ASSIGNOR:GENERAL DYNAMICS C4 SYSTEMS, INC.;REEL/FRAME:039269/0007

Effective date: 20151209

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8