A METHOD AND GRAPHICS SUBSYSTEM FOR A COMPUTING DEVICE
PRIORITY CLAIM
[0001] The present invention claims priority to U.S. Provisional Patent Application No. 60/543,108 filed on February 9, 2004, the contents of which are incorporated herein by reference.
RELATED APPLICATIONS
[0002] The present application relates to the following applications: (1) Attorney Docket No. 4001.Palm.PSI entitled "A System and Method of Format Negotiation in a Computing Device"; (2) Attorney Docket No. 4003.Palm.PSI entitled "A System and Method for a Security Model for a Computing Device"; and (3) Attorney Docket No. 4004.Palm.PSI entitled "A System and Method of Managing Connections with an Available Network", each of which is filed on the same day as the present application, and the contents of each of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
[0003] The present invention relates generally to operating system software. More particularly, the invention relates to software for implementing a graphics subsystem and a viewing system on computing devices.
2. Introduction
[0004] As mobile and handheld devices become more powerful and their displays improve, users expect a constantly improving graphical user interface experience. In addition, the Web is creating increasing demands and expectations for user interfaces. To implement a platform that will meet these needs and to continue meeting the growing
needs in the future, what is needed is a powerful rendering or drawing model and a graphics subsystem that scale across a wide range of hardware capabilities. For example, it would be preferable for such a graphics subsystem to serve a network of mobile devices, tablets, and various devices that utilize two lower-bandwidth CPUs rather than one high-bandwidth CPU.
[0005] Present operating systems for mobile devices typically have the graphics subsystem and the viewing and windowing systems closely tied. This tight coupling causes heavyweight data duplication. The same data is stored on one or more client processes and on a terminal server. Consequently, there is significant processing overhead because of repeated communication with a display server whenever there is any change in the graphics displayed on a device's screen. This tight coupling is also a barrier to improved scalability. It is difficult or impractical to add server and client processes without significantly reducing efficiency and performance. With respect to a graphics subsystem and view system, it is desirable that objects be loosely coupled in the object model. Loose coupling allows for data to be distributed. For example, with loose coupling each component or object encapsulates its own display information. An object is its own autonomous entity and can operate efficiently with reduced interaction with other objects. Loose coupling enables scaling a graphics subsystem from processing objects in a "small" environment to operating in a more complex or distributed environment. What is needed is a framework in which each object is represented accurately and operates efficiently in cooperation with other objects.
[0006] Present graphic subsystems for mobile devices also lack the ability to efficiently composite graphics and object representations on a screen. For example, one desirable graphics feature is having an icon or message on a screen be 'pushed' or transitioned off the display on one side by a new icon or message that enters the display from the opposite
side. This composite graphic, one example from a wide range of possibilities, has a visually appealing effect. Moreover, the judicious use of transition effects like this can significantly improve the usability and functionality of the interface.
[0007] Present graphics subsystems and view systems also execute in an entirely synchronous or an entirely asynchronous environment. This is a drawback when graphics operations arise that would benefit from operating in a combination of synchronous and asynchronous modes. Such operations occur more often as user interfaces for mobile and handheld devices improve. Thus, it would be desirable to enable synchronized drawing in an asynchronous environment. It would also be desirable to achieve the completion of synchronous tasks, for example drawing, reconfiguring the layout, resizing and so on, asynchronously with respect to other tasks being performed in the graphics subsystem.
[0008] For example, in a conventional graphics subsystem with a view A having a child view B, the server has knowledge of both views A and B, and the server instructs the views to draw in parallel into their clipping regions. Although this allows views A and B to draw asynchronously (i.e., to have their own event and update loops), it places heavy restrictions on their behavior. For example, view B needs to clear its background pixels to a color before drawing, rather than simply drawing on top of what view A has drawn. These behavioral restrictions prevent many advanced and "high design" interface designs from being implemented. Getting around these restrictions requires special-purpose code and storing more knowledge about views A and B in the graphics server, creating tight and heavy coupling rather than loose coupling.
[0009] An alternative technique is to provide a graphics server only knowledge of view A and make view A entirely responsible for drawing, layout, and passing events to view B. This allows views A and B to collaborate to produce the final image. Either this technique
or the heavy coupling technique described above is used in most systems. A problem arises because graphics designers must choose one or the other when writing the graphics subsystem code: either be asynchronous, which is generally desirable, at the cost of being tightly coupled (for coarse-grain, heavyweight elements like windows), or be entirely synchronous while being well integrated (for fine-grain, lightweight components such as user interface controls, which have more intimate knowledge of one another and are designed to operate only within a specific application context). It would be desirable to integrate these approaches, such that there is a single, unified way to provide a graphical representation for an object on screen without having to make an early choice of which approach to use, a choice that limits that object's use across a wide variety of contexts.
[0010] What is needed in the art is a lightweight, loosely coupled, scalable graphics subsystem and view system for displaying graphics on a computing device.
SUMMARY OF THE INVENTION
[0011] Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the methods, instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.
[0012] The present invention addresses the needs in the prior art for an improved system and method of executing graphics-related operations and providing graphics to a screen of a computing device. The present invention comprises a system, method and computer-readable media that perform graphics subsystem and viewing management functions for a
computing device through interactions with a display server and related graphics hardware.
[0013] The method aspect of the invention relates to a loosely coupled, lightweight graphics subsystem for a computing device. The graphics subsystem has a rendering model and transport model that enable communication with a display server which is responsible for displaying graphics on the screen of the device. The rendering model features a hierarchy of views, where a view is responsible for a certain portion of the screen. Commands, events and other data are communicated between the display server and the view hierarchy using render stream objects which are transient, one-way conduits between the display server and the view hierarchy. Views only communicate with the display server when needed; there is no persistent communication means between them or between the views and any central authority.
[0014] In one aspect of the present invention, a method of displaying graphics on a computing device screen is described. A display control module, such as a display server, creates a transport object for communicating data related to a graphics operation to a view hierarchy. The module transmits the data to a root view within the view hierarchy. Upon processing of the data by the view hierarchy, the display control module receives reply or return data from the view hierarchy that changes the graphics on the computing device screen.
[0015] In another aspect of the present invention, a method of providing graphics on a screen of a computing device is described. A display server accepts an input that results from a user interaction with a computing device, where the input is typically intended to change the graphics on the screen of the computing device. Upon receiving this input, the display server transmits data relating to the input, such as an update event, to a single view object. This is done by the display server instantiating or creating a data transport object
to the view object. The view object propagates the data to a hierarchy of views to perform graphics-related operations. Data is propagated in a manner that enables graphics-related operations in one of synchronous mode, asynchronous mode, or a combined synchronous- asynchronous mode. After view objects in the hierarchy receive the data, resultant data is created and returned to the display server via data transport objects.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
[0017] FIG. 1 is a diagram showing two aspects of a graphics subsystem component of an operating system for a mobile or handheld device in accordance with a preferred embodiment of the present invention;
[0018] FIG. 2 is a diagram of a display of a mobile device having user interface elements, a display server process, and a view hierarchy in accordance with a preferred embodiment of the present invention;
[0019] FIGS. 3A and 3B are diagrams of a render stream object, a sample stream of commands, and render stream branching;
[0020] FIG. 4 is a diagram of display server components and their relationships to the view hierarchy and the physical screen in accordance with a preferred embodiment of the present invention;
[0021] FIG. 5 is a diagram of a view object and selected various interfaces for communicating with other views in accordance with a preferred embodiment of the present invention;
[0022] FIG. 6 is a flow diagram of a display server accepting a user input and causing a change in the graphics of a computing device in accordance with a preferred embodiment of the present invention;
[0023] FIG. 7 is a flow diagram of an update cycle by a display server in response to a view informing the display server that a portion of the screen is invalid;
[0024] FIG. 8 is a flow diagram of a graphics subsystem executing in synchronous mode in which a render stream is passed synchronously from a root view to child views during an update cycle;
[0025] FIG. 9 is a flow diagram of a graphics subsystem executing in asynchronous mode during an update cycle, in which a single render stream is branched into two or more render streams;
[0026] FIG. 10 is a diagram of a view hierarchy with a root view and view layout roots in accordance with a preferred embodiment of the present invention; and
[0027] FIG. 11 is a block diagram of the basic components of a computing device in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0028] Various embodiments of the invention are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for
illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the invention.
[0029] The present invention provides for systems, methods and computer-readable media that function as a graphics subsystem of an operating system intended for use primarily on mobile and handheld devices, but also executable on any computing device as described in the figures. Examples of other computing devices include notebook computers, tablets, various Internet appliances, and laptop and desktop computers. In a preferred embodiment, the graphics subsystem operates on a handheld mobile computing device such as a combination cellular phone and PDA.
[0030] FIG. 1 is a diagram showing two primary aspects of a graphics subsystem 100 of an operating system 10 in a preferred embodiment of the present invention. The two aspects are a drawing (or rendering) model aspect 102 and a transport aspect 104. Generally, a graphics subsystem is the component of an operating system that interfaces with graphics and display hardware, provides application and system software access to that hardware and to graphics-related services, and potentially multiplexes access to graphics hardware between and among multiple applications.
[0031] The drawing model aspect 102 defines a highly expressive drawing language. It allows a graphics subsystem programmer to describe an image using primitive drawing commands, including path filling and stroking, and to apply modulations of color, blending, clipping and so on. Rendering is modeled explicitly as a definition of the value of each pixel within a target area. The drawing language provides a small number of drawing primitives to modify current pixel values, including two basic types of primitives: parametric drawing operations (path definition) and raster drawing operations (blitting). More complex rendering is accomplished by compositing multiple operations. Other
capabilities of the rendering model of the present invention include: arbitrary path filling, alpha blending, anti-aliasing, arbitrary two-dimensional and color-space transformations, linear color gradients, bitmap rendering with optional bilinear scaling, region-based clipping and general color modulation (from Boolean clipping to spatial color modulation). Components of the drawing model can be removed for lower-end devices. For example, components can be removed in order to not support general color modulation, anti-aliasing, or other such operations that are expensive to compute on low-powered hardware. On the other hand, the model can be configured to benefit from a full three-dimensional hardware accelerator. The drawing model aspect 102 also defines a drawing API that is used by clients to express commands in the drawing language.
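The two basic primitive types described above, parametric path operations composited with raster operations, can be illustrated with a brief sketch. The following C++ is purely illustrative; the type and function names are hypothetical and do not reflect the actual drawing API of the invention.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical minimal drawing command set combining the two primitive
// types named in the text: parametric path operations (moveto/lineto/
// closepath) and color application. All names are illustrative only.
struct Color { uint8_t r, g, b, a; };

enum class Op { MoveTo, LineTo, ClosePath, Fill, Stroke, Blit };

struct Command {
    Op op;
    float x = 0, y = 0;   // path coordinates (MoveTo/LineTo)
    Color color{};        // fill/stroke color
};

// A client-side surface that records commands rather than rasterizing
// them immediately; more complex images are composed by appending
// multiple operations in sequence.
class Canvas {
public:
    void moveTo(float x, float y) { cmds_.push_back({Op::MoveTo, x, y}); }
    void lineTo(float x, float y) { cmds_.push_back({Op::LineTo, x, y}); }
    void closePath()              { cmds_.push_back({Op::ClosePath}); }
    void fill(Color c)            { cmds_.push_back({Op::Fill, 0, 0, c}); }
    const std::vector<Command>& commands() const { return cmds_; }
private:
    std::vector<Command> cmds_;
};
```

In this sketch, a filled triangle would be expressed as three path commands followed by a fill, mirroring the way more complex rendering is accomplished by compositing multiple operations.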
[0032] Transport aspect 104 enables the transmission of drawing commands from where they are expressed by calls to the drawing API, such as within a client process, to where they are executed, typically within a server process. Transport aspect 104 addresses asynchronous operational issues. For example, it addresses the issue of how a screen or display controlled by a display server can multiplex drawing and update commands coming from different client processes and optionally execute the commands out of order if the display server determines that the resultant image would be identical. [0033] Drawing commands originating from multiple simultaneous clients of a display server are often not strongly ordered, i.e., they can often be executed in a different order and obtain the same image as if they were executed in the order specified by clients. For example, in a preferred embodiment transport aspect 104 and drawing model aspect 102 of graphics subsystem 100 are responsible for ensuring that with drawing command groupings A, B, and C specified in the order A to B to C, wherein the commands in C overlay an area drawn into by A and B draws into an area that is not affected by either A or C, A must be executed before C but B should be permitted to draw at any time. This is
very useful in situations where A, B and C originate from different client processes and the client responsible for A is slow, blocked or has crashed, and the client responsible for B is ready to continue processing. Transport aspect 104 also enables a display server to communicate with a distributed hierarchy of views, wherein each view has partial or complete ownership of certain portions of the screen, i.e., the actual pixels in those portions of the display.
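The ordering rule described above, under which command group C must follow A while B may execute at any time, reduces to a dependency test: a later group depends on an earlier one only when their target areas overlap. The following C++ is an illustrative sketch under assumed names; real clipping regions need not be simple rectangles.

```cpp
// Illustrative sketch (not the patented implementation) of the ordering
// rule: a later command group must be ordered after an earlier one only
// if their target areas overlap; independent groups may be reordered.
struct Rect { int left, top, right, bottom; };

bool overlaps(const Rect& a, const Rect& b) {
    return a.left < b.right && b.left < a.right &&
           a.top < b.bottom && b.top < a.bottom;
}

// Group 'later' must wait for group 'earlier' only when they touch the
// same pixels; otherwise the display server is free to reorder them.
bool mustOrder(const Rect& earlier, const Rect& later) {
    return overlaps(earlier, later);
}
```

Under this test, a group B drawing into an area untouched by A and C is unordered with respect to both, so a slow or blocked client responsible for A need not delay B.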
[0034] FIG. 2 is a diagram illustrating a display 204 of a mobile device, the display having elements 204a, 204b, and 204c, a display server process 202, and a view hierarchy 206. Display server process 202 controls the graphical features of the user interface shown on screen 204, that is, which elements are displayed and how they are displayed. Display server 202 communicates with a view object hierarchy 206 comprised of numerous views arranged in parent-child relationships, with a root view 208 distinguishable from the other views in that it is the only view directly communicated with by the display server. Transport aspect 104 enables display server 202 to multiplex drawing commands coming from different views, potentially distributed across different client processes, and to execute them in the correct order or in an order the display server determines is appropriate.
[0035] Drawing commands are transported from client views to the display server responsible for graphical rendering utilizing objects that function as delivery conduits. These objects, referred to as render streams, are a feature of transport aspect 104. FIG. 3A is a diagram of a render stream object 302 and a sample stream of commands 304. Render stream 302 transports commands 304 from one or more clients (such as views) in one or more client processes to a display server in a server process. In a preferred embodiment, render stream 302 is an object instantiated by display server 202 and performs as a one-way pipe that transports commands from where they are expressed, in a client view, to where they are executed, in the display server. The client view and display server can be
in different processes or can operate in the same process assuming proper security measures have been taken if necessary, or if the system components are known to be trustworthy. In a preferred embodiment, drawing commands expressed into a render stream are buffered before being transported to the display server for greater efficiency when transporting across process boundaries. There can be numerous active render streams from views to the display server.
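A render stream's role as a transient, buffered, one-way conduit might be sketched as follows. This C++ sketch is illustrative only; the class name, the string-based command encoding, and the callback standing in for the actual cross-process transport mechanism are all hypothetical.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of a render stream: a one-way conduit that
// buffers drawing commands on the client side and transports them in
// batches toward the display server (represented here by a callback).
class RenderStream {
public:
    using Transport = std::function<void(const std::vector<std::string>&)>;

    RenderStream(Transport send, std::size_t bufferLimit = 16)
        : send_(std::move(send)), limit_(bufferLimit) {}

    // Express a drawing command into the stream; commands are buffered
    // for efficiency and only transported when the buffer fills.
    void express(const std::string& command) {
        buffer_.push_back(command);
        if (buffer_.size() >= limit_) flush();
    }

    // Force transport of any buffered commands, e.g. at the end of an
    // update cycle, after which the transient stream is discarded.
    void flush() {
        if (!buffer_.empty()) {
            send_(buffer_);
            buffer_.clear();
        }
    }

private:
    Transport send_;
    std::size_t limit_;
    std::vector<std::string> buffer_;
};
```

The buffering models the preferred embodiment's batching of commands before they cross a process boundary; a destination in the same process could simply be handed commands directly.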
[0036] All types of drawing commands can be transported in render stream 302. In a preferred embodiment, drawing commands, e.g., moveto, lineto, closepath, fill <color>, stroke <color>, and so on, resemble PostScript commands. In a preferred embodiment, render streams facilitate transmission of commands or any other data, such as pixels, modulation data, etc., in one direction. Commands that typically do not require direct responses are best suited to be transported utilizing render streams.
[0037] Drawing model aspect 102 can carry out its functions independent of any render stream. For example, if the destination for drawing commands is local, such as to a bitmap, rather than to the screen of a handheld device, the drawing model does not need to utilize a render stream (although it may use other features of transport aspect 104). In cases where the drawing model operates independent of a render stream, the same drawing model API is used. However, depending on the context, commands may be rendered immediately to a local surface or transported to a display somewhere else.
[0038] In a preferred embodiment, drawing by client views occurs when they are asked by the display server to refresh the screen. This is done when a view responsible for a certain area of pixels on the screen is invalid or 'dirty' or needs to be re-drawn for any reason, for example after some action or state change has occurred that changes the look of one or more visible components, or that adds or removes views from the hierarchy. In response to an update event, views send drawing commands to the display server so the
server can change those pixels according to these commands. In a preferred embodiment, this is the only mechanism by which a view may draw to the screen.
[0039] This sequence of events, encompassing the request made by the display server and the resulting drawing by clients, is referred to as an update cycle. The display server initiates such a cycle by sending an update event to the root view, which then distributes the event throughout the view hierarchy as needed to invoke views to draw, together composing the final image. Render streams are used during an update cycle to transport rendering commands from client views to the display server when the server operates in a different process or device from that of one or more of the client views, or when there are multiple systems of views operating asynchronously with respect to one another and conjoined within the same view hierarchy. These conjoined systems of views are made asynchronous with respect to one another by the use of view layout root objects, as detailed below.
[0040] The graphics subsystem of the present invention allows an update to be executed serially, by each view synchronously following the drawing of its predecessor in the hierarchy within a single system of views, or in parallel, by multiple systems of views which operate asynchronously with respect to one another. This is enabled in part by the ability of a render stream to branch. Branching is a procedure whereby an offshoot or branched render stream is created and passed to a child view to draw at some later point that the child chooses (i.e., asynchronously with respect to the parent view performing the branching operation), while the original or parent render stream continues to be used synchronously to transport subsequent drawing commands expressed by the parent view.
[0041] FIG. 3B illustrates render stream branching.
For example, a client process encompassing one or more views in hierarchy 206 may have an application user interface it wants drawn on the screen. The display server ultimately controls what is displayed on
the screen, and client views have the information needed to describe the desired image, so the display server needs a render stream with which to receive drawing commands from these views. In a preferred embodiment, the display server instantiates a render stream and passes it to root view 208 along with an update event, initiating an update cycle. The commands necessary to draw the application's imagery are divided into three sequences A, B, and C, and each sequence is generated by a subset of the views comprising the application. Sequence A is generated and placed into the original render stream 306, after which render stream 308 is branched from it and given to the subset of views that generate sequence B, to be used to express and transport those commands. Following this, sequence C is generated and placed into the original render stream 306. Render stream 308 is thus branched from render stream 306 at a point after the commands in group A have been expressed (though perhaps buffered and not necessarily transported) but before the first command in group C has been expressed. Data transported in render stream 306 includes commands from sequence A, a token for render stream 308, and commands from sequence C. Commands in render stream 308 are from sequence B, and this render stream uses TOKEN B to identify itself when returning drawing commands to the display server.
[0042] Each act of branching creates a possibility of re-ordering by the display server. Thus, in this scenario branching has created the possibility that the commands in render stream 308 can execute concurrently or out of order with the commands in render stream 306. Actual parallel execution can be employed using multiple processors or hardware accelerators, or greater efficiency can be reached by re-ordering commands sent to a single graphics accelerator.
The display server receives these commands from render streams 306 and 308 in parallel and decides the actual order in which the commands will be executed, based on the expressed order and on the dependencies between and among the commands.
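The token mechanism just described, in which a branched stream identifies itself with a token so the display server can relate its commands to the branch point in the parent stream, might be sketched as follows. This C++ is purely illustrative (the class name and the "@token" marker encoding are invented for the example); a real display server could also execute branch commands earlier or concurrently when no dependency forbids it.

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative sketch: the parent stream carries a token marking where
// a branched stream's commands belong, and the display server splices
// the branched commands back in at that point. Names are hypothetical.
class DisplayServerMerge {
public:
    void receiveMain(const std::vector<std::string>& cmds) { main_ = cmds; }

    void receiveBranch(const std::string& token,
                       const std::vector<std::string>& cmds) {
        branches_[token] = cmds;
    }

    // Produce one executable sequence, substituting each token marker
    // ("@TOKEN") with the commands of the branch that identified itself
    // with that token when returning its drawing commands.
    std::vector<std::string> executionOrder() const {
        std::vector<std::string> out;
        for (const auto& c : main_) {
            if (!c.empty() && c[0] == '@') {
                auto it = branches_.find(c.substr(1));
                if (it != branches_.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            } else {
                out.push_back(c);
            }
        }
        return out;
    }

private:
    std::vector<std::string> main_;
    std::map<std::string, std::vector<std::string>> branches_;
};
```

In the FIG. 3B scenario, the main stream would carry sequence A, a token for the branch, and sequence C, while the branch returns sequence B under that token.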
[0043] FIG. 4 is a diagram of display server components and their relationship to the view hierarchy and the physical screen. The display server 202 controls the screen and is the only entity with direct access to it. The graphics driver is divided into two components: a driver 402, which provides full access to graphics hardware registers and resides in the I/O subsystem or kernel of the host operating system, and a graphics accelerant 404, which resides in the display server 202 and provides memory-mapped access to the frame buffer and access to any graphics functions implemented by graphics acceleration hardware. The display server is comprised of the graphics accelerant, a low-level renderer 406 (also known as mini-GL), which is responsible for region filling and pixel operations, and a high-level renderer 408, which provides memory buffer management, drawing path manipulation such as stroking, rendering state management, and so on.
[0044] The display server 202 has explicit knowledge of only one view, the root view 208. From the display server's perspective, root view 208 is responsible for handling events (including input and update events) for the entire screen. On simple devices the root view may in fact be the only view. If there is a single process that uses only one view and no re-ordering of views, the complexity of the design collapses into a simple code path between the display server and view hierarchy, which is highly efficient on weak hardware. On devices with more advanced hardware and user interfaces, the root view distributes its responsibility for handling update and input events to a hierarchy of views.
[0045] FIG. 5 is a diagram of a basic view object 502, which has three separate interfaces named IView 504, IViewParent 506, and IViewManager 508. The view hierarchy operates by parents and children within the hierarchy accessing these interfaces on one another.
IView allows manipulation of the core state and properties of a view, as well as allowing it to be added to another view as a child, such as child view 510. IView is the interface that a parent sees on its children. Input, update, layout and other
event types are propagated down the hierarchy from the display server to leaf views by each parent view making calls on its children's IView interfaces, and those views in turn (either synchronously or asynchronously) making calls on their own children's IView interfaces, and so on. The IViewParent interface is the interface that a child view sees on its parent, such as parent view 512, and which it can use to propagate events up the hierarchy, such as invalidate events or requests for a layout to be performed. IViewManager is the interface that a child view would use on its parent to manipulate its siblings, or that a third-party piece of code would use to add children to or remove children from a view. The loose coupling of the view hierarchy is enforced in part by the hiding of certain of these interfaces from a caller that has access to certain others. For example, the default security policy for views (which can be overridden by each view as desired) is that a child view which is initially given an IViewParent interface for its parent will not be able to convert that to an IView interface. The view itself stores state such as the spatial 2D transformation that should apply to that view's drawing, a reference to its parent, and so on.
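The division of roles among the three interfaces can be sketched in C++ as follows. The interface names come from the description above; every member function shown is a hypothetical illustration of the role the text assigns to each interface, not the actual API.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Sketch of the three view interfaces. Interface names are from the
// text; member functions are illustrative assumptions.
struct Event { int type; };

class IView {                     // what a parent sees on its child
public:
    virtual ~IView() = default;
    virtual void dispatchEvent(const Event& e) = 0;  // propagate down
};

class IViewParent {               // what a child sees on its parent
public:
    virtual ~IViewParent() = default;
    virtual void invalidateChild(IView* child) = 0;  // propagate up
};

class IViewManager {              // manipulate a view's children
public:
    virtual ~IViewManager() = default;
    virtual void addChild(std::shared_ptr<IView> child) = 0;
    virtual void removeChild(IView* child) = 0;
};

// A concrete view implements all three roles but hands out only the
// interface appropriate to each caller, enforcing loose coupling.
class View : public IView, public IViewParent, public IViewManager {
public:
    void dispatchEvent(const Event& e) override {
        for (auto& c : children_) c->dispatchEvent(e);  // down the hierarchy
    }
    void invalidateChild(IView*) override { ++invalidations_; }
    void addChild(std::shared_ptr<IView> child) override {
        children_.push_back(std::move(child));
    }
    void removeChild(IView* child) override {
        for (auto it = children_.begin(); it != children_.end(); ++it)
            if (it->get() == child) { children_.erase(it); return; }
    }
    int invalidations() const { return invalidations_; }
    std::size_t childCount() const { return children_.size(); }
private:
    std::vector<std::shared_ptr<IView>> children_;
    int invalidations_ = 0;
};
```

A caller holding only an IViewParent pointer to such an object cannot reach the IView or IViewManager members through it, which is the kind of interface hiding the default security policy relies upon.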
[0046] In a conventional graphics subsystem, the display server will typically have knowledge of and establish direct connections with many of the child view objects within the view hierarchy (such as 206). In such a design, the display server must maintain a centralized, local record of the relationships between various views, their layout with respect to one another, their clipping regions, their drawing state, and so on. For example, when an update to the screen is required, all of the information as to which views should be invoked to handle this update is held by the display server. This record maintained by the display server is essentially a 'mirror' of information held by the views themselves, and there is significant overhead involved both in storing multiple copies of this information as well as in keeping such a centralized mirror of state information current as
views change their state in a distributed fashion. Any changes to the position of views or other state changes must constantly be kept in sync with the server, introducing much communication overhead between the server and clients. For example, making a small local change to the position of a leaf view within its parent will sometimes require updating the server's mirror of that view's position.
[0047] The graphics subsystem of the present invention, by contrast, facilitates loose coupling and reduces state duplication and unnecessary communication overhead between the server and clients. This is enabled in part by the use of a single view hierarchy spanning multiple processes, in which a parent view is solely responsible for the distribution of update, input and other events to its children, and in part by the transient nature of the connection established between client views and the display server, in the form of render streams that exist only for the duration of a single update cycle. In the preferred embodiment, the display server is not aware of view delegation and event propagation decisions made at any level of the hierarchy, or what decisions regarding screen updating, layout, and so on are made by sub-views operating under the root view. The display server is aware or has implicit knowledge of other views (such as 206) only during an update cycle, because it observes the branching of the original render stream it had passed to the root view, and receives drawing commands from each of these multiple render streams. In addition, individual views rely on no special knowledge about other views in the hierarchy, depending only on the existence of a few standard interfaces to allow manipulation of the hierarchy and cooperation with other views.
[0048] An example of an advantage of this approach is how it affects the preferred implementation of a common type of component known as a window manager. A window manager is typically responsible for arbitrating interaction with and providing additional facilities for the manipulation of client views. For example, a window manager may
surround certain special client views (known as windows) with borders and controls that allow those views to be manually moved about the screen, closed, or resized by users; may algorithmically lay out those client views on-screen according to preset or dynamic behaviors; or may modify the drawing of those client views. A window manager is also typically the entity that is contacted by a newly launched application in order to initiate the placement of a client view into the view hierarchy.
[0049] Because the window manager plays an important role and needs to exert control over client views, in conventional systems special privileges and specialized access to the display server are often required by a window manager in order for it to accomplish its tasks. In such a conventional design, there is typically either a very close coupling of the display server with the window manager (preventing modularization), or there is a specialized three-way relationship established between the display server, the window manager, and the windows themselves. In this type of three-way relationship, the window manager itself defines only policies for managing windows and remains in constant contact with the display server, which is solely empowered to implement those policies. This design creates a large amount of state duplication across the three entities, requires a very heavy-weight display server capable of implementing a wide range of policies that may be demanded by the window manager, and introduces substantial communication overhead. [0050] In a preferred embodiment, the duties of a window manager can be performed by any view for its children, without any special privileges or services required of the display server. A view performs these duties by manipulating the contents of various events passing through the hierarchy, and by inserting commands into a render stream prior and subsequent to passing that render stream (or a branched render stream) to its children to allow them to draw. For example, a view acting as a window manager might translate the
drawing coordinate system, clip to a particular shape, and add a filter or color modulation to a render stream before asking a child to draw into it, and then afterwards draw a decorative border around the child with several controls to allow the child view (in this instance, a window) to be resized, moved, and so on. Much or all of this is difficult or impossible in conventional systems because all of these potentially very dynamic relationships between the parent and its child, including the ways in which the parent wishes to affect its children's drawing, would need to be understood by the server before it can implement the desired policies. [0051] The render streams provide a way for these policies to be very efficiently described by the view asserting them at the time of rendering, preventing state duplication and maintenance overhead and the necessity for a centralized implementation of policies. In the preferred embodiment, the root view performs the duties of a window manager using these standard facilities of the update and rendering model. This allows the display server to encapsulate only the services required to multiplex access to the display hardware from multiple render streams and to efficiently implement the drawing model, and leaves all view state maintenance, window manager policy definition, and implementation of those policies to the views themselves.
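By way of illustration only, the manner in which a parent view can perform window manager duties by inserting commands into a render stream before and after its child draws may be sketched as follows. All class and method names here (RenderStream, emit, and so on) are hypothetical assumptions made for this sketch, not the actual interfaces of the described system.

```python
# Illustrative sketch: a parent view decorates its child like a window
# manager by bracketing the child's drawing with its own commands in a
# single render stream.  Names are hypothetical.

class RenderStream:
    """Records drawing commands in the order they are expressed."""
    def __init__(self):
        self.commands = []

    def emit(self, *command):
        self.commands.append(command)


class ChildView:
    def draw(self, stream):
        stream.emit("fill_rect", 0, 0, 100, 80)  # the window's content


class WindowManagerView:
    """A parent view acting as a window manager for one child."""
    def __init__(self, child, x, y):
        self.child, self.x, self.y = child, x, y

    def draw(self, stream):
        # Before the child draws: translate the coordinate system and clip.
        stream.emit("translate", self.x, self.y)
        stream.emit("clip_rect", 0, 0, 100, 80)
        self.child.draw(stream)          # child draws into the same stream
        # After the child draws: decorative border and a resize/close control.
        stream.emit("stroke_border", 0, 0, 100, 80)
        stream.emit("draw_control", "close", 92, 2)


stream = RenderStream()
WindowManagerView(ChildView(), x=20, y=10).draw(stream)
ops = [c[0] for c in stream.commands]
```

No special privileges from the display server are needed: the policy is expressed entirely as ordinary commands surrounding the child's drawing in the stream.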
[0052] Rendering by the display server is implemented in update cycles. An update occurs in response to a view informing the display server that a portion of the screen is invalid. FIG. 6 is an overview flow diagram of a process of updating and displaying graphics on the screen. At step 602 the display server instantiates and passes a render stream object to a root view. At step 604 the display server instructs the root view via the render stream to draw or create the updates needed for the display. Once the instructions to draw have been distributed to the appropriate views in the hierarchy, those views send data back to the display server at step 606. This is done using one or more render streams.
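The overview of steps 602 through 606 may be sketched as follows, using hypothetical names (DisplayServer, handle_update, and the like) that are assumptions of this sketch rather than the actual API.

```python
# Minimal sketch of the update cycle of FIG. 6: the display server
# instantiates a render stream, instructs the root view to draw into it,
# and the views in the hierarchy send drawing data back via the stream.
# All names are illustrative.

class RenderStream:
    def __init__(self):
        self.commands = []


class RootView:
    def __init__(self, children):
        self.children = children

    def handle_update(self, stream):
        # Step 604: distribute the instruction to draw down the hierarchy.
        for child in self.children:
            child.handle_update(stream)


class LeafView:
    def __init__(self, name):
        self.name = name

    def handle_update(self, stream):
        # Step 606: views send drawing data back via the render stream.
        stream.commands.append(("draw", self.name))


class DisplayServer:
    def update(self, root):
        stream = RenderStream()     # step 602: instantiate a render stream
        root.handle_update(stream)  # step 604: instruct the root view to draw
        return stream.commands      # step 606: drawing data flows back


server = DisplayServer()
commands = server.update(RootView([LeafView("a"), LeafView("b")]))
```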
[0053] FIG. 7 is a flow diagram of an update cycle by which the display server renders the screen and by which these policies are defined. An update cycle occurs in response to a view informing the display server via an 'invalidate' event that a portion of the screen is invalid. At steps 702 to 708, an invalidate event is passed up the view hierarchy from child to parent until it reaches the display server. At some point after one or more of these invalidate events have been received, as indicated at step 710, the display server will create a render stream and pass it along with an update event to the root view, initiating an update cycle at step 712. The update event instructs the root view to draw the updates needed for the display into the associated render stream. [0054] In a preferred embodiment, the graphics subsystem implements a synchronous model in which a single non-branched render stream is active between the root view and child views. When the view system performs in this mode, event handling and drawing of each view in the hierarchy are performed in sequence. At step 802 the root view receives a single render stream. At step 804 the root view passes the render stream to a child view. The render stream is synchronously passed from the root view to child views in the order in which it is desired that the views draw. There is limited fault tolerance in a synchronous model, and at step 806, while the root view waits for a response, several issues may arise. For example, if a child view is in an untrustworthy process, such as in a third-party application, that child view may not respond to the update by drawing in a timely manner. A delay in response by a child view can result in unacceptable wait times for the user for a graphic to appear on the screen, or can block the continuing update of other portions of the screen which do not overlap with the portion owned by the delayed view. 
The child view may also have been written incorrectly and may crash, causing similar delays before the crashed process can be cleaned up by the parent. Because views in the view hierarchy might exist in different processes or even on different devices, or may have been
developed by different vendors, system-wide synchronous operation of the entire view hierarchy is undesirable.
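The fragility described above can be illustrated with a toy sketch: in a fully synchronous pass over one shared render stream, each child's drawing time adds to the wait of every child after it. The class and data below are hypothetical and serve only to model the timing problem.

```python
# Toy illustration of why system-wide synchronous operation is fragile:
# the root waits for each child in turn, so one slow or unresponsive
# child delays every child queued after it.  Names and timings are
# hypothetical.

class SynchronousRootView:
    def __init__(self, children):
        # children: list of (name, draw_time) pairs, in drawing order.
        self.children = children

    def handle_update(self):
        elapsed = 0
        completion_times = {}
        for child_name, draw_time in self.children:
            elapsed += draw_time          # the root waits for each child
            completion_times[child_name] = elapsed
        return completion_times


# A misbehaving third-party view ("slow") takes 500 time units; the
# well-behaved view queued after it cannot draw until it finishes, even
# if it owns a non-overlapping part of the screen.
times = SynchronousRootView(
    [("native", 5), ("slow", 500), ("after_slow", 5)]
).handle_update()
```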
[0055] At step 808 the root view determines whether a response has been received. If a child view has returned a response, the display server then determines whether an update or a drawing has completed at step 810. If the drawing is not complete, the render stream is passed to the next child view at step 812. If the drawing is complete, the display server terminates the render stream at step 814. [0056] In another preferred embodiment, the graphics subsystem implements an asynchronous model. FIG. 9 is a flow diagram illustrating an asynchronous model in accordance with a preferred embodiment. At step 902 a single render stream is provided by the display server to the root view. At step 904, the render stream is branched one or more times, and the newly created render streams are passed to child views, which use them to draw the contents of their views to complete the rendering. When the view system performs in this mode, handling of input and update events can, at the option of each view, be queued up for handling asynchronously. Degrees of security, robustness, and efficiency can be determined at each level of the hierarchy by render stream branching, as a render stream is branched using heuristics and algorithms determined by clients. In a preferred embodiment, branching is used by any view that chooses to handle an update event asynchronously with respect to its parent. As each view at step 908 receives a render stream to render into, it makes a decision whether to handle the update synchronously or asynchronously.
[0057] If the view decides to handle the update synchronously with respect to its parent, it proceeds to express drawing commands into the render stream, perhaps passing that render stream to children who participate in drawing its imagery, and then returns to the parent view which asked it to draw. Alternatively, if the view decides to execute the update
asynchronously with respect to its parent, it branches a new render stream at step 910, stores the reference to that newly branched render stream along with a queued request to perform the update later in another execution context using system threading facilities, and returns to the calling parent view immediately. A render stream created by branching can itself be branched, creating a dependency graph of render streams that is observed by the display server as the render streams are created. Step 912 is being performed constantly as the update cycle proceeds. At this step the display server is able to re-order drawing commands when those commands arrive in different render streams and do not touch the same pixels (for example, when the views drawing using those render streams do not overlap, or when the overlapping areas are known to be completely owned by one of the views). This re-ordering can result in more efficient use of rendering acceleration hardware or of multiple processors. If a particular view is slow to draw, rendering hardware can be put to work rendering a different view that owns a different part of the screen, and thus whose final imagery is not dependent on the slow view. Alternatively, multiple non-dependent commands may be drawn in parallel by two different acceleration engines or processors.
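The per-view decision at step 908, and the branching at step 910, may be sketched as follows. The classes and the recorded parent pointer are assumptions of this sketch; the actual system threading facilities are elided, with a simple pending queue standing in for the deferred update request.

```python
# Illustrative sketch of each view's choice on receiving a render
# stream: draw synchronously into it, or branch a new stream, queue the
# update, and return to the parent immediately.  Names are hypothetical.

class RenderStream:
    def __init__(self, parent=None):
        self.commands = []
        self.parent = parent   # branching records the dependency graph

    def branch(self):
        # The display server observes this dependency as it is created.
        return RenderStream(parent=self)


class View:
    def __init__(self, name, asynchronous=False):
        self.name = name
        self.asynchronous = asynchronous
        self.pending = []   # queued branched streams for deferred updates

    def handle_update(self, stream):
        if self.asynchronous:
            # Step 910: branch, queue the update for another execution
            # context, and return to the calling parent at once.
            self.pending.append(stream.branch())
        else:
            # Synchronous case: draw directly into the parent's stream.
            stream.commands.append(("draw", self.name))


root_stream = RenderStream()
sync_view = View("status_bar")
async_view = View("web_page", asynchronous=True)
sync_view.handle_update(root_stream)
async_view.handle_update(root_stream)   # returns immediately, work queued
```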
[0058] For example, views A, B, C, and D may each decide to draw asynchronously with respect to their parents, and thus each into its own branched render stream. The display server may be able to determine that the drawing commands from view A and the drawing commands from view C overlap and thus must be performed in order. If view A is delayed in drawing, the rendering of view C's drawing commands must then wait for view A's commands to complete. However, the drawings of views B and D are not dependent on what view A draws, and thus the display server can proceed with B's and D's commands before view A is done. Because the generation of drawing commands for each of A, B, C
and D is executing asynchronously with respect to their parents and with respect to one another, B and D can also generate these commands without interference from A's delay. [0059] In both modes, the display server's function is to process the stream of drawing commands coming through the render streams. Adhering to the lightweight and loose coupling principles of the graphics subsystem, the display server does not have knowledge of which portions of the screen are owned by child views. It has knowledge only of what portions of the screen may, at any given time, be touched by a drawing command coming through each render stream. In both modes, the display server can apply algorithms or heuristics as simple or as sophisticated as desired. For example, a simple display server may not need to perform any re-ordering and may simply draw the commands comprising each render stream one after the next. This will produce the right results on screen and allows for a very light-weight implementation that has little computational or memory bandwidth overhead, which will likely be faster on low-powered hardware, though it will not be as fault tolerant as a display server that is capable of re-ordering. Alternatively, the display server can apply sophisticated analyses to the drawing commands to determine how to re-order them, which in many scenarios will make much more effective use of more powerful or multiple processors, and of hardware acceleration engines. On high-powered hardware, this type of re-ordering can result in dramatically better responsiveness. Different classes of mobile and handheld devices with different hardware capabilities and different needs will be served best by different implementations.
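One simple analysis a re-ordering display server might apply to the A, B, C, D example above can be sketched as follows, under the assumption (made for this sketch only) that each drawing command carries an axis-aligned bounding rectangle describing the pixels it may touch.

```python
# Sketch of the re-ordering of step 912: commands arriving in different
# render streams may run out of arrival order when their pixels cannot
# overlap.  The rectangle representation is an assumption of the sketch.

def overlaps(a, b):
    """True if two (x0, y0, x1, y1) rectangles share any pixels."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1


def ready_commands(pending, blocked_rects):
    """Return pending commands that need not wait on any blocked region."""
    return [
        (name, rect)
        for name, rect in pending
        if not any(overlaps(rect, blocked) for blocked in blocked_rects)
    ]


# View A is slow; its commands touch the region (0, 0)-(50, 50).  View C
# overlaps that region and must wait, but B and D do not, so the display
# server may proceed with their commands before A is done.
pending = [
    ("B", (60, 0, 100, 40)),
    ("C", (10, 10, 40, 40)),   # overlaps the delayed view A
    ("D", (60, 50, 100, 90)),
]
runnable = ready_commands(pending, blocked_rects=[(0, 0, 50, 50)])
```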
[0060] In a preferred embodiment, asynchronous operation of sub-hierarchies of views within the view hierarchy (in other words, the ability for a view to handle events and updates asynchronously with respect to its parent) is enabled by the use of an object called a view layout root. A view layout root is a special view that detaches the event handling loop of the view hierarchy below it (i.e. for which it serves as a parent) from the event
loop of the view hierarchy above it (i.e. for which it serves as a child), and allows them to run asynchronously with respect to each other. FIG. 10 is an illustration of a view hierarchy 1002 with a root view 1004. A view layout root can be placed at any location in the view hierarchy. In FIG. 10 there are three view layout roots labeled "VLR." [0061] A view layout root is generally created and placed directly above another view in the hierarchy if it can be determined that this second view is either untrusted (and thus likely to be malicious or crash and thereby break a synchronous system of views) or in another process or device (and thus likely to be slow or introduce latency, which would slow down a synchronous system of views). The parent view thereby avoids having to wait for a result and, more generally, avoids operating in synchronous mode when doing so would be inefficient. For example, when adding a view in a third-party or untrusted process B as a child of a view in native process A, a view layout root may be created to act as an intermediary. A view layout root creates a new event loop that services events at that view or below it in the hierarchy, until a leaf view or another view layout root is encountered. This allows an event (such as an input or update event) traveling down the view hierarchy to a view layout root to be queued on arrival for handling in a separate thread by the view layout root or views below it, and for subsequent events to continue being handled immediately (in the originating thread and process) by views above it. [0062] This event queuing that happens at each view layout root allows the graphics subsystem to stay responsive. A common situation in which a view layout root provides this benefit is when the visual layout of a set of views needs to be re-configured. 
A view layout root allows a separate sub-view hierarchy to compute the new layout asynchronously with respect to its parent, without delaying other operations or event handling by its parent and the event loop in which that parent participates. The view layout root is akin to a root view for a sub-hierarchy of views that executes its own event loop
and is particularly useful when re-configuring a new layout. In FIG. 10 there are four systems of views operating asynchronously with respect to one another: 1006, the parent system, and asynchronous view layout root systems: 1008, 1010, and 1012. When distributing any event to the overall view system, the originating thread only needs to synchronously traverse as far down as the first view layout root, at which point the event is queued and the rest of the event handling in views below that point can be performed asynchronously. This method maintains the responsiveness of the user interface. [0063] As previously mentioned, when an update event is passed to a view layout root, the render stream attached to the event is branched. This branched render stream is queued along with the event, and the original render stream is discarded by the view layout root (though it may subsequently still be used by the view layout root's parent). When the view layout root handles the queued update event, it draws into the branched render stream. View layout roots make use of branching in this way to enable asynchronous handling of updates. [0064] There are a number of contexts in which view layout roots would improve or maintain efficient performance of a user interface, for example, when a user interface has a complex layout, such as a Web browser or any layout that has numerous tables or complicated nested layouts of many small components. The distribution and handling of input events can also be queued and made asynchronous, allowing the system to remain responsive to user input. View layout roots can also provide a way to prevent synchronous round-trip communications with views sitting across a process boundary from their parents.
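The event-queuing behavior of a view layout root may be sketched as follows, using standard threading primitives as a stand-in for the system threading facilities. The class name and handler shape are assumptions of this sketch.

```python
# Sketch of a view layout root: events arriving on the parent's thread
# are queued immediately, and a detached event loop on a separate thread
# drains the queue, so that views below the layout root handle events
# asynchronously with respect to the hierarchy above.  Names are
# hypothetical.

import queue
import threading


class ViewLayoutRoot:
    def __init__(self, child_handler):
        self.events = queue.Queue()
        self.child_handler = child_handler
        # The new event loop servicing this view and those below it.
        self.worker = threading.Thread(target=self._event_loop, daemon=True)
        self.worker.start()

    def dispatch(self, event):
        # Called on the parent's thread: queue the event and return at
        # once, keeping the hierarchy above this point responsive.
        self.events.put(event)

    def _event_loop(self):
        # The detached event loop for the sub-hierarchy below this root.
        while True:
            event = self.events.get()
            self.child_handler(event)
            self.events.task_done()


handled = []
vlr = ViewLayoutRoot(child_handler=handled.append)
for e in ("input:tap", "update:region"):
    vlr.dispatch(e)          # returns without waiting for handling
vlr.events.join()            # for demonstration: wait until drained
```

In the described system, an update event dispatched this way would carry a branched render stream for the sub-hierarchy to draw into when the queued event is eventually handled.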
[0065] In conventional windowing or view systems, asynchronicity is generally managed by a tightly coupled central authority that usually has direct, manually created communication channels with each independent entity. For example, each window may
have its own thread and operate asynchronously, but will have a defined, static connection to the central authority through which events pass and drawing is done. In a preferred embodiment, any view can be a view layout root. A separate window and conduit with a central authority does not need to be created to enable asynchronicity, providing great flexibility. All communications within a view sub-tree can be asynchronized with respect to the parent and thus to the rest of the view hierarchy, as needed by the particular application. All communication between the client views and the display server is performed by events passing up and down the hierarchy, with the exception of drawing commands, which are delivered directly by client views to the display server using render streams. Render streams are branched whenever event handling is asynchronized by view layout roots, dynamically creating any necessary direct conduits to the display server. This design maintains very loose coupling between the client views and the display server. This is the case because any necessary communication conduits between client views and the display server are created on demand during each update cycle as required by the nature of the views being asked to draw.
[0066] Each view layout root may have its own thread, layout and event and update cycle and is asynchronous with respect to its parent and its parent's own event loop, which in turn is established by the next-highest view layout root. In addition, layout, event handling, and drawing can all occur in parallel with one another, and the layout, event handling or drawing happening within a system defined by one view layout root can occur in parallel with that occurring in another. The graphics subsystem of the present invention is thus aggressively multi-threaded and enables multi-processing. This is relevant to battery-powered mobile devices because the power drain of two or more CPUs providing a certain amount of computing power will be substantially less than if that same amount of computing power is delivered by only one CPU running at a faster speed.
[0067] The graphics subsystem of the present invention supports a 2-dimensional graphics model that is accelerated by 3-dimensional graphics hardware. It is expected that in the near future 3-D graphic acceleration can be implemented on mobile devices inexpensively and efficiently, and will likely become ubiquitously available on mobile hardware platforms. 2-D graphic accelerators are comparatively primitive and are not capable of the level of precision, functionality, or quality desired, whereas 3-D hardware supports much richer acceleration capabilities. Being able to accelerate a 2-D graphics drawing model that supports advanced functionality (e.g., complex path shapes, filling, stroking, etc.) with a 3-D graphics accelerator would reach the levels of both performance and quality desirable in the graphics subsystem of the present invention.
[0068] In a preferred embodiment, the API of the graphics subsystem of the present invention has constraints that are deliberate and intended to encourage application developers to take advantage of 3-D graphics acceleration. Conversely, the constraints are intended to discourage developers from designing applications that will be difficult to accelerate using 3-D graphics hardware, essentially restricting availability of particular data at given times and encouraging the representation of required information in a form that can be efficiently represented to 3-D hardware. Features of the drawing model that are well suited and designed for 3-D hardware processing include the use of modulation groups, the encouragement of asynchronous operations and the avoidance of synchronous operations that require querying the drawing state (such queries on 3-D graphics hardware introduce severe latency because such hardware is heavily pipelined), the reliance on linear transformations for texture and gradient operations, and the discouragement of mathematically-based operations on visible region representations. [0069] An example is the processing of a clipping region. Conventional APIs for creating and manipulating a clipping region make implicit assumptions that the clipping region has
a mathematical representation. However, this would cause problems when using a 3-D graphics accelerator which typically uses buffers of pixels to represent regions (e.g., stencil buffers, accumulation buffers, textures) rather than mathematical representations. Instead, application developers are encouraged to represent a clipping region by drawing into a modulation group. This maps very well to the use on 3-D hardware of a stencil buffer or to the use of texture modulation operations, while still allowing mathematical clipping to be performed by software implementations of the drawing model. [0070] Inasmuch as one embodiment of the invention described above relates to a hardware device or system, the basic components associated with a computing device are discussed below.
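The idea of representing a clipping region by drawing it, rather than by a mathematical description, may be sketched as follows. Pure-Python pixel buffers stand in for GPU stencil or texture memory; all names and buffer sizes are assumptions of this sketch.

```python
# Sketch of clipping via a drawn mask (analogous to a modulation group
# rendered into a stencil buffer): the clip shape is itself *drawn* into
# a buffer of pixels, and client drawing is then modulated per-pixel by
# that buffer, rather than intersected with a mathematical region.

W, H = 8, 8

def blank():
    return [[0] * W for _ in range(H)]

def draw_rect(buf, x0, y0, x1, y1, value=1):
    """Draw a filled rectangle into a pixel buffer."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            buf[y][x] = value

# The clip shape is drawn into a stencil-like mask buffer.
stencil = blank()
draw_rect(stencil, 2, 2, 6, 6)

# Client drawing is modulated per-pixel by the mask, which maps directly
# onto stencil tests or texture modulation on 3-D hardware, while a
# software implementation could still clip mathematically.
frame = blank()
draw_rect(frame, 0, 0, 8, 8, value=9)    # client draws everywhere
clipped = [[frame[y][x] * stencil[y][x] for x in range(W)]
           for y in range(H)]
```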
[0071] FIG. 11 and the related discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. Although not required, the invention has been described, at least in part, in the general context of computer-executable instructions, such as program modules, being executed by a personal computer or handheld computing device. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including handheld devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, communication devices, cellular phones, tablets, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
[0072] With reference to FIG. 11, an exemplary system for implementing the invention includes a general purpose computing device 11, including a central processing unit (CPU) 120, a system memory 130, and a system bus 110 that couples various system components including the system memory 130 to the CPU 120. The system bus 110 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory 130 includes read only memory (ROM) 140 and random access memory (RAM) 150. A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the personal computer 11, such as during start-up, is stored in ROM 140. The computing device 11 further includes a storage device such as a hard disk drive 160 for reading and writing data. This storage device may be a magnetic disk drive for reading from or writing to a removable magnetic disk or an optical disk drive for reading from or writing to a removable optical disk such as a CD ROM or other optical media. The drives and the associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing device 11. Although the exemplary environment described herein employs the hard disk, the removable magnetic disk and the removable optical disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memory (RAM), read only memory (ROM), and the like, may also be used in the exemplary operating environment. [0073] FIG. 11 also shows an input device 160 and an output device 170 communicating with the bus 110. 
The input device 160 operates as a source for multi-media data or other data and the output device 170 comprises a display, speakers or a combination of
components as a destination for multi-media data. The device 170 may also represent a recording device that receives and records data from a source device 160 such as a video camcorder. A communications interface 180 may also provide communication means with the computing device 11. [0074] As can be appreciated, the above description of hardware components is only provided as illustrative. For example, the basic components may differ between a desktop computer and a handheld or portable computing device. Those of skill in the art would understand how to modify or adjust the basic hardware components based on the particular hardware device (or group of networked computing devices) upon which the present invention is practiced.
[0075] Embodiments within the scope of the present invention may also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media. [0076] Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose
processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps. [0077] Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. [0078] Although the above description may contain specific details, they should not be construed as limiting the claims in any way. Other configurations of the described embodiments of the invention are part of the scope of this invention. For example, another type of distributed data network can be used instead of a hierarchical structure for structuring the views. 
Accordingly, the appended claims and their legal equivalents should only define the invention, rather than any specific examples given.