WO2011121168A1 - System and method for allocating buffers - Google Patents

System and method for allocating buffers

Info

Publication number
WO2011121168A1
Authority
WO
WIPO (PCT)
Prior art keywords
components
buffer
component
chain
control data
Prior art date
Application number
PCT/FI2010/050257
Other languages
French (fr)
Inventor
Rahul Singh
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/FI2010/050257 priority Critical patent/WO2011121168A1/en
Publication of WO2011121168A1 publication Critical patent/WO2011121168A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/52Queue scheduling by attributing bandwidth to queues
    • H04L47/522Dynamic queue service slot or variable bandwidth allocation

Definitions

  • the present invention relates to systems and methods for enabling portability of components and media applications in the mobile device landscape.
  • The standard for a portable media library is developed by the OpenMAX community.
  • OpenMAX defines a royalty-free cross-platform application programming interface specification offering comprehensive streaming media codec and application portability by enabling development, integration and programming of accelerated multimedia components across multiple operating systems and hardware platforms.
  • the goal of the standard is to reduce the cost and complexity of porting multimedia software to new processors and architectures.
  • OpenMAX abstracts the C-language based functionality to a higher level of representation, which allows straightforward design and implementation of a variety of media use cases.
  • the structure of the standard comprises three main layers and a number of commonly defined media engines and media component sub-layers.
  • the formed structure resembles a stack model.
  • the main layers are Application Layer (AL), Integration Layer (IL) and Development Layer (DL).
  • the Integration Layer provides an API that is used for enabling portability across operating system platforms.
  • the interface allows the user to control the individual blocks of functionalities. These blocks are called components, and each component and relevant transform is encapsulated in a component interface.
  • the OpenMAX IL API allows the user to load, control, connect, and unload the individual components.
  • An OpenMAX IL component provides access to a standard set of component functions via its component handle.
  • Each OpenMAX IL component has at least one port.
  • OpenMAX defines four types of ports corresponding to the types of data a port may transfer: audio, video and image data ports and a port for other data types. Ports are defined as either input or output ports depending on whether they consume or produce buffers.
  • each port has its own buffer, which can be shared with ports of the same component or ports of the other components in the system of components.
  • buffer sharing is implemented on a tunnel between an input port of a component and an output port of neighboring component, and is transparent to other components.
  • a tunnel between any two ports represents a dependency between those ports.
  • Buffer sharing extends that dependency so that all ports that share the same set of buffers form an implicit dependency chain. Exactly one port in that dependency chain allocates the buffers shared by all of them.
  • a method for providing a chain of at least two components for carrying out a set of functions at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, determining control data amount consumed by the components involved in said buffer allocation chain, in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain, computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain and providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
  • the method further comprises receiving of said announcements upon said component chain initiates its functionality first time. According to an embodiment, the method further comprises receiving of said announcements upon detecting a disturbance made to said component setup. According to an embodiment, the method further comprises introducing or removing at least one component to/from at least one chain of a plurality of components that share said data buffer. According to an embodiment, the method further comprises disabling all ports that are part of a buffer allocation chain established by said at least two components, releasing all buffers from all ports belonging to said buffer allocation chain and re-enabling all ports that were affected on said buffer allocation chain.
  • the method further comprises receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
  • the method further comprises delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information and receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
  • method further comprises computing the total amount of data not consumed by the components of said data allocation chain and needed to be reserved for the buffer.
  • method further comprises determining the actual data size comprises the selection from maximum of nBufferSize values from components in said buffer allocation chain.
  • method further comprises releasing all buffers from all ports belonging to said buffer allocation chain sets the initial values of said buffers to zero.
  • an apparatus comprising a processor and a memory including computer program product, the memory and the computer program product configured to, with the processor, cause the apparatus to provide a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, to determine control data amount consumed by the components involved in said buffer allocation chain, in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain, to compute a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain, delivering the accumulation information of available but not
  • an apparatus comprising means for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, means for determining control data amount consumed by the components involved in said buffer allocation chain, means for receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain, means for computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain, providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
  • the apparatus further comprises means for receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
  • the apparatus further comprises means for delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information and means for receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
  • a computer program product stored on a computer readable medium and executable in a data processing device, the computer program product comprising a computer program code section for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, a computer program code section for determining control data amount consumed by the components involved in said buffer allocation chain a computer program code section for in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain a computer program code section for computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but
  • the computer program product further comprises a computer program code section for receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
  • the computer program product further comprises a computer program code section for delivering the accumulation information of the amount of data not consumed by the components of said data allocation chain propagated from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information and a computer program code section for receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
  • FIG. 1 shows an example of OpenMAX integration layer API landscape
  • Fig. 2A shows a format of a buffer transmission unit incorporated with extra data sub-buffer
  • Fig. 2B shows relationship between components forming up a buffer
  • Fig. 3 shows a relationship of system components and ports in one possible embodiment
  • Fig. 4 shows an example of buffer allocation and sharing relationship in one possible embodiment
  • Fig. 5 shows an example of extra data buffer sharing in another possible embodiment
  • Fig. 6 shows a flowchart in the case where existing data allocation is disturbed due to changed component configuration in the system and a new allocation need has occurred.
  • Figure 1 presents an operating landscape for the OpenMAX integration layer (IL) Application programming interface (API).
  • the OpenMAX IL API is aimed to fill the gap of the missing multimedia middleware framework for some of the systems. Also in some cases a native media framework can be replaced with OpenMAX integration layer 104.
  • OpenMAX IL 104 fits seamlessly into an OpenMAX Application Layer 102 implementation.
  • the OpenMAX standard also defines a set of Development Layer (DL) 112 primitives, shown in Figure 1 as 114, which can be used as building blocks of components.
  • the OpenMAX IL API is a component-based media API that consists of two main segments: the core API and the component API.
  • the OpenMAX IL core is used for dynamically loading and unloading components and for facilitating component communication. Once loaded, the API allows the user to communicate directly with the component, which eliminates any overhead for high commands. Similarly, the core allows a user to establish a communication tunnel between two components. Once established, the core API is no longer used and communications flow directly between components.
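  • As a minimal illustration of the core API usage described above, the C sketch below loads two components, tunnels them together and later unloads them. The component names, the callback structure and the port indices are assumptions made for the example; only OMX_Init, OMX_GetHandle, OMX_SetupTunnel, OMX_FreeHandle and OMX_Deinit are standard core calls, and error handling is omitted.
```c
#include <OMX_Core.h>

/* Sketch of core API usage: load two components, tunnel them together,
 * then unload them. Component names and port indices are hypothetical. */
static OMX_CALLBACKTYPE callbacks;   /* event/buffer callbacks filled in by the IL client */

void run_tunnel_example(void)
{
    OMX_HANDLETYPE hDecoder = NULL, hSink = NULL;

    OMX_Init();

    /* The core dynamically loads the components by name. */
    OMX_GetHandle(&hDecoder, "OMX.vendor.video_decoder", NULL, &callbacks);
    OMX_GetHandle(&hSink,    "OMX.vendor.video_sink",    NULL, &callbacks);

    /* Tunnel decoder output port 1 to sink input port 0. From here on,
     * buffers flow directly between the components; the core API is no
     * longer involved in the data path. */
    OMX_SetupTunnel(hDecoder, 1, hSink, 0);

    /* ... state transitions and streaming would happen here ... */

    OMX_FreeHandle(hDecoder);
    OMX_FreeHandle(hSink);
    OMX_Deinit();
}
```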
  • components represent individual blocks of functionality.
  • Components can be sources, sinks, codecs, filters, splitters, mixers, or any other data operator.
  • a component could possibly represent a piece of hardware, a software codec, another processor, or a combination thereof.
  • Resource management in OpenMAX IL is based on behavioral rules, priorities and component states.
  • Each OpenMAX IL component can undergo a series of state transitions, for example UNLOADED, LOADED, INVALID, WAIT FOR RESOURCES, IDLE, PAUSED and EXECUTING. Every component is first considered to be in state UNLOADED. The component can transition to LOADED through a call to the OpenMAX IL core. All other state transitions may then be achieved by communicating directly with the component. It is also possible for a component to enter an invalid state when a state transition is made with invalid data. It is possible to enter the invalid state from any state, but the only way to exit the invalid state is to unload and reload the component again.
  • the component shall have all its operational resources when it is in the IDLE state. Transitioning into the IDLE state may fail since this state requires allocation of all operational static resources.
  • the IL client may try again or may choose to put the component into the WAIT FOR RESOURCES state.
  • Upon entering the WAIT FOR RESOURCES state, the component uses a sub-routine that alerts it when resources have become available, and the component can then perform a transition into the IDLE state.
  • the IDLE state indicates that the component has all of its needed static resources but is not processing data.
  • the EXECUTING state indicates that the component is pending reception of buffers to process data and will try to retrieve them later.
  • the PAUSED state maintains a context of buffer execution with the component without processing data or exchanging buffers. Transitioning from PAUSED to EXECUTING enables buffer processing to resume where the component left off. Transitioning from EXECUTING or PAUSED to IDLE will cause the context in which buffers were processed to be lost, which requires the start of a stream to be reintroduced. Transitioning from IDLE to LOADED will cause operational resources such as communication buffers to be lost.
  • the IDLE state is the state of a component when buffers are allocated to the component and its ports, but no processing of data happens yet. Whenever a component is asked to perform a task, it either waits for input from some other component or delivers data to its one or several output ports. In both cases the data processing is triggered by the state transition to the EXECUTING state, which indicates the component is ready to start processing data whenever the data is available to it in the buffers.
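  • A brief sketch of how an IL client might drive these transitions with OMX_SendCommand is given below; the component handle is assumed to come from OMX_GetHandle, and in practice the client waits for an OMX_EventCmdComplete callback after each command, which is omitted here.
```c
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Sketch: take a LOADED component to IDLE (static resources and buffers
 * allocated) and then to EXECUTING (buffer processing starts). Waiting for
 * the OMX_EventCmdComplete callbacks is omitted for brevity. */
void start_component(OMX_HANDLETYPE hComp)
{
    /* LOADED -> IDLE: the component acquires all of its static resources. */
    OMX_SendCommand(hComp, OMX_CommandStateSet, OMX_StateIdle, NULL);
    /* ... buffers are allocated or supplied for every enabled port here ... */

    /* IDLE -> EXECUTING: processing starts as soon as buffers are available. */
    OMX_SendCommand(hComp, OMX_CommandStateSet, OMX_StateExecuting, NULL);
}
```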
  • OpenMAX IL standard defines the meta data used to describe the buffers allocated and exchanged between the components.
  • a buffer header element holds all the necessary parameters describing the details of the buffer and a pointer to the exact location of data within the physical buffer.
  • sending a buffer refers simply to action where a buffer header with pointers to actual data is sent from one port to another.
  • pBuffer is a pointer to the actual buffer where data is stored but not necessarily the start of valid data.
  • nAllocLen is the total size of the allocated buffer in bytes, including valid and unused bytes.
  • nFilledLen is the total size of valid bytes currently in the buffer starting from the location specified by pBuffer and nOffset. This includes any padding, e.g. the unused bytes at the end of a line of video when stride in bytes is larger than width in bytes.
  • nOffset is the start offset of valid data in bytes from the start of the buffer.
  • a pointer to the valid data may be obtained by adding nOffset to pBuffer.
  • pAppPrivate is a pointer to an IL client private structure.
  • pPlatformPrivate is a pointer to a private platform-specific structure.
  • pOutputPortPrivate is a private pointer of the output port that uses the buffer.
  • pInputPortPrivate is a private pointer of the input port that uses the buffer.
  • nFlags field contains buffer specific flags.
  • nOutputPortIndex contains the port index of the output port that uses the buffer.
  • nInputPortIndex contains the port index of the input port that uses the buffer.
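  • For orientation, the sketch below collects the fields listed above into one abridged structure; the real OMX_BUFFERHEADERTYPE defined by the standard contains further members (size, version, timestamps, buffer marks) that are not discussed here, so this is not the normative layout.
```c
#include <OMX_Types.h>

/* Abridged view of an OpenMAX IL buffer header, restricted to the fields
 * discussed above. The normative OMX_BUFFERHEADERTYPE lives in OMX_Core.h. */
typedef struct {
    OMX_U8* pBuffer;             /* start of the allocated buffer                */
    OMX_U32 nAllocLen;           /* total allocated size in bytes                */
    OMX_U32 nFilledLen;          /* valid bytes, starting at pBuffer + nOffset   */
    OMX_U32 nOffset;             /* offset of the first valid byte               */
    OMX_PTR pAppPrivate;         /* IL client private structure                  */
    OMX_PTR pPlatformPrivate;    /* platform-specific private structure          */
    OMX_PTR pInputPortPrivate;   /* private pointer of the using input port      */
    OMX_PTR pOutputPortPrivate;  /* private pointer of the using output port     */
    OMX_U32 nFlags;              /* buffer-specific flags, e.g. extra data flag  */
    OMX_U32 nOutputPortIndex;    /* index of the output port using the buffer    */
    OMX_U32 nInputPortIndex;     /* index of the input port using the buffer     */
} BufferHeaderSketch;
```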
  • each data buffer has a header associated with it that contains meta-information about the buffer.
  • the IL client shares buffer headers with each port with which it is communicating.
  • each pair of tunneling ports share buffer headers; otherwise, the same buffer transferred over multiple ports will have distinct buffer headers associated with it for each port.
  • the port configuration is used to determine and define the format of the data to be transferred on a component port, but the configuration does not define how that data exists in the buffer.
  • the range and location of valid data in a buffer is defined by the pBuffer, nOffset, and nFilledLen parameters of the buffer header.
  • the pBuffer parameter points to the start of the buffer.
  • the nOffset parameter indicates the number of bytes between the start of the buffer and the start of valid data.
  • the nFilledLen parameter specifies the number of contiguous bytes of valid data in the buffer. The valid data in the buffer is therefore located in the range pBuffer + nOffset to pBuffer + nOffset + nFilledLen.
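  • In code, the valid data range described by a buffer header can therefore be located as in this short sketch:
```c
#include <OMX_Core.h>

/* Locate the valid payload described by a buffer header. */
void locate_valid_data(const OMX_BUFFERHEADERTYPE* pHdr)
{
    OMX_U8* pValidStart = pHdr->pBuffer + pHdr->nOffset;
    OMX_U8* pValidEnd   = pValidStart + pHdr->nFilledLen;  /* one past the last valid byte */
    (void)pValidStart;
    (void)pValidEnd;
}
```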
  • the following cases are representative of compressed data in a buffer that is transferred into or out of a component when decoding or encoding.
  • the buffer just provides a transport mechanism for the data with no particular requirement on the content.
  • the requirement for the content is defined by the port configuration parameters.
  • Case 1: Each buffer is filled in whole or in part. In the case of buffers containing compressed data frames, the frames are denoted by f1 to fn. Case 1 provides a benefit when decoding for playback.
  • the buffer can accommodate multiple frames and reduce the number of transactions required to buffer an amount of data for decoding. However, this case may require the decoder to parse the data when decoding the frames. It also may require the decoder component to have a frame-building buffer in which to put the parsed data or maintain partial frames that would be completed with the next buffer.
  • Case 2: Each buffer is filled with only complete frames of compressed data. Case 2 differs from case 1 because it requires the compressed data to be parsed first so that only complete frames are put in the buffers. Case 2 may also require the decoder component to parse the data for decoding. This case may not require the extra working buffer for parsing frames required in case 1.
  • Case 3: Each buffer is filled with only one frame of compressed data.
  • the benefit in case 3 is that a decoding component does not have to parse the data. Parsing would be required at the source component. However, this method creates a bottleneck in data transfer. Data transfer would be limited to one frame per transfer. Depending on the implementation, one transaction per frame could have a greater impact on performance than parsing frames from a buffer.
  • In FIG. 2A, a simplified format of a buffer pointer transmission unit is presented, containing also pointers to the additional payload of the buffer.
  • additional buffer payload information 208 is identified via the extra data buffer flag within the buffer header structure 206.
  • This additional buffer payload information applies to the first new logical unit in the buffer.
  • the extra data flag applies to the logical unit whose starting boundary occurs first in the buffer.
  • Subsequent logical units in a buffer don't have explicit extra data.
  • the data attributes like type and size are identified by a corresponding data structure 206, immediately following the buffer payload 208 and preceding the actual data.
  • Figure 2B represents how the data reservation is partitioned between the actual data consumed by components and the extra data within a buffer, whose size is defined with the nBufferSize parameter and whose starting point in memory space is expressed with the pBuffer pointer.
  • the parameter contains information of the minimum size in bytes for buffers that are allocated for a certain port. If there is no extra data 218 to be processed in the buffer, the whole buffer is reserved for actual data 214. As can be seen from Figure 2B, the amount of extra data can vary from zero to a predetermined value.
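  • A sketch of how a component might append such extra data after its payload is given below. It assumes the OMX_OTHER_EXTRADATATYPE record and the OMX_BUFFERFLAG_EXTRADATA flag that OpenMAX IL defines for this purpose (header placement may vary between IL versions), skips the 32-bit alignment and terminating empty record the standard requires, and presumes that nAllocLen leaves room for the record, which is exactly what the accumulated extra data reservation described later is meant to guarantee.
```c
#include <string.h>
#include <OMX_Core.h>

/* Sketch: append one extra data record after the valid payload of a buffer
 * and flag its presence in the header. Alignment padding and the terminating
 * empty record are omitted; nAllocLen is assumed to leave room for the record. */
void append_extra_data(OMX_BUFFERHEADERTYPE* pHdr,
                       const void* pExtra, OMX_U32 nExtraSize)
{
    OMX_U8* pDst = pHdr->pBuffer + pHdr->nOffset + pHdr->nFilledLen;
    OMX_OTHER_EXTRADATATYPE* pRecord = (OMX_OTHER_EXTRADATATYPE*)pDst;

    pRecord->nSize     = (OMX_U32)sizeof(*pRecord) + nExtraSize;
    pRecord->nDataSize = nExtraSize;
    /* eType, nVersion and nPortIndex would also be filled in here. */
    memcpy(pRecord->data, pExtra, nExtraSize);

    pHdr->nFlags |= OMX_BUFFERFLAG_EXTRADATA;   /* announce the extra data */
}
```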
  • Each port has a component-defined minimum number of buffers it can allocate or use.
  • a port associates a buffer header with each buffer.
  • a buffer header references data in the buffer and provides metadata associated with the contents of the buffer. Every component port is capable of allocating its own buffers or using pre-allocated buffers; one of these choices will usually be more efficient than the other.
  • a tunneling component may choose to re-use buffers from one port on another to avoid memory copies and optimize memory usage.
  • a buffer supplier port does not necessarily allocate its buffers; it may re-use buffer from another port on the same component.
  • In FIG. 3, a typical relationship between ports is illustrated.
  • Component A has a buffer 300, which is shared between other components.
  • Ports 308 and 312 are illustrated as supplier ports.
  • the port that receives the UseBuffer 302 calls from its neighbor is known as a non-supplier port.
  • Ports 310 and 314 illustrate non-supplier ports.
  • a port's tunneling port is the port neighboring it with which it shares a tunnel.
  • port 310 is the tunneling port to port 308.
  • port 308 is the tunneling port to port 310.
  • An allocator port is a supplier port that also allocates its own buffers.
  • Port 308 is the only allocator port in the Figure 3.
  • Another port type is a sharing port.
  • port 312 in Figure 3 is a sharing port as it reuses the buffer from port 310.
  • Sharing relation is marked with 304 in Figure 3.
  • a buffer sharing extends the dependency of the components so that all ports that share the same set of buffers form an implicit dependency chain.
  • One port in that dependency chain allocates the buffers shared by all of them.
  • that sharing port is port 308.
  • the port can have a set of requirements for a buffer. These requirements may be, for example the number of buffers required by the port and the size of each required buffer. The maximum of multiple sets of buffer requirements is defined as the largest number of buffers derived from any set combined with the largest size derived from any set.
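  • That "maximum of multiple sets" rule can be written as a small helper over a hypothetical requirements structure:
```c
/* Hypothetical per-port buffer requirements (not an OpenMAX structure). */
typedef struct {
    unsigned int nBufferCount;   /* number of buffers required  */
    unsigned int nBufferSize;    /* minimum size of each buffer */
} BufferRequirements;

/* The maximum of two requirement sets combines the largest buffer count
 * with the largest buffer size, as described above. */
BufferRequirements max_requirements(BufferRequirements a, BufferRequirements b)
{
    BufferRequirements r;
    r.nBufferCount = (a.nBufferCount > b.nBufferCount) ? a.nBufferCount : b.nBufferCount;
    r.nBufferSize  = (a.nBufferSize  > b.nBufferSize)  ? a.nBufferSize  : b.nBufferSize;
    return r;
}
```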
  • One embodiment relates to a situation where a component is attached to the dependency chain after the original component system has already been established for some time. Adding a new component to a set of chained components that is already running happens by disabling the specific ports of the neighboring components. Disabling the ports resets the memory buffers allocated at those ports to the default values the components had initially. If the neighboring ports were part of a larger buffer sharing chain, then the ports and components involved in this buffer sharing chain have to be disabled as well, and a new allocation of the buffers takes place. The new component brings a new source or a sink to the available buffer resource.
  • a buffer size calculation should be performed in this situation in such a way that each port having extra data requirements for a buffer informs others of its production of extra data, or of its capability to receive other components' extra data into its buffer, with a parameter dedicated to this purpose.
  • the parameter can be, for example, nExtraDataSizeShared or nExtraDataSizePropagated. The use of these parameters is described in more detail later on in this document.
  • a data reservation unit, extra data, is introduced. It may or may not be a real variable for a data reservation. Extra data is appended to the buffer that holds the component's processed data. The buffer header of this buffer is then propagated to the next component in the chain, which utilizes the extra data found in the pointed location in order to properly process the actual data in the buffer. A component will communicate this information to other components so that they make their own extra data allocation calculations based on the data they receive from other components taking part in the buffer allocation component chain. The allocator component's port will query sharing ports downstream and, in this process, update its extra data size by cumulating the size of extra data across all sharing components downstream.
  • the parameter that may contain the original buffer size information could be nBufferSize.
  • buffers may need to be copied.
  • the intermediate component which allocates a new set of buffers needs to copy across the extra data and forward it downstream all the way till the consumer component.
  • the intermediate component or its tunneled port needs to allocate a new set of buffers.
  • the allocator component preferably takes into account any extra data size coming from upstream.
  • protocol code examples are used in one possible embodiment where the components move from loaded to idle state and resource allocation happens. Another possible embodiment is an occurrence where the disabled ports get enabled in a state other than LOADED state.
  • the protocol codes define certain new structure types in OpenMAX that are used for storing data units in the variables nExtraDataSizeShared and nExtraDataSizePropagated respectively. Other information units that both structures contain are the size of the unit, stored in the nSize parameter, the OpenMAX standard version used, stored in the nVersion parameter, and the nPortIndex parameter representing the read-only value containing the index of the port.
  • the first set of code is used and the default values of the buffers in the components are initially set to zero.
  • the second set of code is used and the default values of the buffers in the components are initially set to zero.
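  • The exact layout of these new structure types is not reproduced in this text; based on the fields named above they might look like the sketch below, in which everything other than the named parameters (nSize, nVersion, nPortIndex, nExtraDataSizeShared, nExtraDataSizePropagated) is a placeholder.
```c
#include <OMX_Types.h>

/* Hypothetical layout of the proposed structure types; the structure names
 * are placeholders and the real definitions in the embodiment may differ. */
typedef struct {
    OMX_U32 nSize;                 /* size of this structure in bytes     */
    OMX_VERSIONTYPE nVersion;      /* OpenMAX specification version used  */
    OMX_U32 nPortIndex;            /* read-only index of the port         */
    OMX_U32 nExtraDataSizeShared;  /* extra data size shared by this port */
} EXTRADATA_SIZE_SHARED_SKETCH;

typedef struct {
    OMX_U32 nSize;
    OMX_VERSIONTYPE nVersion;
    OMX_U32 nPortIndex;
    OMX_U32 nExtraDataSizePropagated;  /* extra data size propagated from upstream */
} EXTRADATA_SIZE_PROPAGATED_SKETCH;
```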
  • Figure 4 shows as one embodiment how the buffer allocation with sharing can be established.
  • Figure 4 depicts the steps needed for component C to achieve an idle state. Alongside this, the whole chain of components moves to the idle state. The following concentrates on component C only, for clarity.
  • When the IL client commands component C to transition from the loaded to the idle state, the following prescribed steps are followed:
  • Component C knows that it can re-use port 414 buffers 406 since port 416 is a supplier port. Component C establishes a sharing relationship from port 414 to port 416.
  • port 414 shall be an allocator port.
  • Component C allocates and distributes port 414 buffers. Since port 416 will re-use the buffer of port 414, component C first determines the buffer requirements of port 416. After that, port 416 calls the OMX_GetParameter function on port 418 to determine its buffer requirements and reports the requirements as the maximum between its own and those of port 418. Next, port 414 calls the OMX_GetParameter function on port 412 to determine its buffer requirements via nBufferSize. Port 412 determines the buffer requirements of port 410. Port 410 returns the maximum of its own requirements and the requirement of port 408, retrieved via an OMX_GetParameter function call. Port 412 then returns the maximum of its own requirements and the requirements that port 410 returns.
  • Port 414 allocates buffers according to the maximum of its own requirements and the requirements that ports 412 and 416 return.
  • the resulting buffers are effectively allocated according to the maximum requirements of ports 408, 410, 412, 414, 416 and 418, all of which use the buffers of port 414. Since port 416 will re-use the buffers of port 414, component C shares these buffers with port 416.
  • port 416 calls the OMX_UseBuffer function on port 418 for every buffer that is shared.
  • port 414 calls the OMX_UseBuffer function on port 412.
  • the size of the buffers being shared in this embodiment is the maximum of the nBufferSize's from components A to D.
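  • A minimal sketch of that aggregation is given below, assuming each port's nBufferSize is obtained with OMX_GetParameter on OMX_IndexParamPortDefinition; the recursion over downstream ports is flattened into one final maximum here, and the handles and port indices are illustrative.
```c
#include <string.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Query the nBufferSize requirement of one port, as done repeatedly in the
 * Figure 4 walk-through. Setting nVersion is omitted for brevity. */
OMX_U32 query_buffer_size(OMX_HANDLETYPE hComp, OMX_U32 nPortIndex)
{
    OMX_PARAM_PORTDEFINITIONTYPE def;
    memset(&def, 0, sizeof(def));
    def.nSize = sizeof(def);
    def.nPortIndex = nPortIndex;
    OMX_GetParameter(hComp, OMX_IndexParamPortDefinition, &def);
    return def.nBufferSize;
}

/* Allocator port 414 sizes its buffers to the maximum of its own requirement
 * and those reported by ports 412 and 416, which in turn already report the
 * maxima of everything downstream of them. */
OMX_U32 allocator_buffer_size(OMX_U32 own, OMX_U32 fromPort412, OMX_U32 fromPort416)
{
    OMX_U32 size = own;
    if (fromPort412 > size) size = fromPort412;
    if (fromPort416 > size) size = fromPort416;
    return size;
}
```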
  • Figure 5 depicts another possible embodiment of the invention with the details of buffer sharing in case the sharing components also generate extra data.
  • the buffer size needed for the actual data reservation of the components is determined in two parts: first, the maximum shared buffer size, parameter nBufferSize, is determined in the same way as in the procedure depicted along with Figure 4. In addition to this, there is also a need to determine the accumulated extra data each component will provide to the component chain. This part is discussed next in more detail.
  • the operating set up is arranged such that ports 500, 508 and 514 are allocator ports. Allocator output port 500 on component A will query its tunneled input port 502 on B for extra data size on any of component B's sharing ports downstream.
  • Component can, for example, use an OpenMAX function call
  • a query will be passed on to output port 504 on component B, which will return any extra data size it might have after querying its tunneled input port 506 on component C. Since buffer sharing is defined here to be within the range 516 and this port does not share buffers with its output port 508 on C, the query stops here. At every stage, any extra data size queried will be gradually added and eventually be available to allocator output port 500 on component A. Allocator output port 500 on component A does not have any input port, so it does not query whether there is any extra data size being propagated through its input ports.
  • total extra data size will be sum of extra data size of port 500 added to extra data size other ports share with port 500.
  • the size of the buffers being shared is the summation of the maximum of the nBufferSize's from components A to C added to the total extra data size from components A to C.
  • allocator output port 514 on component E will query its tunneled input port 512 on component C for extra data size on any of its sharing port downstream. Since port 512 of component C does not share buffers with component C's output port 508, the query stops there, indicated with buffer sharing range 520 in Figure 5. Thereafter, total extra data size port 514 could use is its own extra data size only and no buffer sharing happens here.
  • Allocator output port 508 on component C will query any extra data size on its sharing port 510 downstream.
  • Figure 5 depicts that port 510 has no sharing port downstream. Port 510 has a tunneled port 508 upstream, but the query will not proceed further downstream since input port 510 on component D does not have any other port to communicate with. This way, ports 510 and 508 share a buffer the allocator port 508 provides.
  • output port 508 on component C has two input ports, 506 and 512, which it uses to process the buffers and forward them downstream. It will query, for example, both input port 506 and input port 512 on C via e.g. an OpenMAX function call
  • Allocator output port 508 on component C will compute total extra data size as a sum of its own extra data size plus the extra data size propagated from input port 506 on component C plus the extra data size propagated from input port 512 on component C.
  • the total size of the buffers being shared in this scenario is the summation of the maximum of the nBufferSize's from components A, B, C and E added to the total extra data size from components A to C and E.
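  • Put together, the total shared buffer size in this scenario could be computed as in the following sketch, assuming the maximum nBufferSize and the per-component extra data sizes have already been gathered through the queries described above:
```c
/* Sketch: total size of the shared buffers. maxBufferSize is the maximum
 * nBufferSize over the sharing components; extraSizes[] holds the extra data
 * size contributed by each sharing component in the chain. */
unsigned int total_shared_buffer_size(unsigned int maxBufferSize,
                                      const unsigned int extraSizes[],
                                      unsigned int count)
{
    unsigned int total = maxBufferSize;
    for (unsigned int i = 0; i < count; ++i)
        total += extraSizes[i];   /* accumulate the announced extra data */
    return total;
}
```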
  • a given chain of components is supposed to process a video frame of 320x240 resolution and for example 3 bytes per pixel.
  • the nBufferSize will point to 320x240x3 bytes plus any component specific memory needed for alignment of components etc.
  • components A, B and C require extra data of 100 bytes each.
  • nBufferSize also accommodates the extra data needs of a component, then nBufferSize from a given component will be a sum of 320x240x3 bytes plus 100 bytes plus any component specific memory needed for the alignment.
  • Simply picking the maximum of nBufferSize will not address the memory needs of the buffer sharing chain, since components A, B and C will each need 100 bytes of extra memory to append their extra data.
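  • A quick numeric check of that point, using the figures of the example and ignoring any alignment overhead:
```c
/* 320x240 frame at 3 bytes per pixel; components A, B and C each need
 * 100 bytes of extra data. Alignment overhead is ignored. */
enum {
    FRAME_BYTES     = 320 * 240 * 3,         /* 230 400 bytes                        */
    PER_COMPONENT   = FRAME_BYTES + 100,     /* nBufferSize reported per component   */
    MAX_ONLY        = PER_COMPONENT,         /* 230 500: maximum of the nBufferSizes */
    ACTUALLY_NEEDED = FRAME_BYTES + 3 * 100  /* 230 700: frame plus A+B+C extra data */
};
/* MAX_ONLY falls 200 bytes short of ACTUALLY_NEEDED, which is why the
 * accumulated extra data has to be added on top of the maximum nBufferSize. */
```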
  • FIG. 6 depicts the data flow chart of one possible embodiment of the invention.
  • The situation in Figure 6 starts in the idle state. That means that all components in the previous situation have the resources allocated to them.
  • a disturbance occurs in the chain which causes a change in the amount of data the system comprises.
  • the disturbance can be caused, e.g., by an additional component being added to the existing chain of components, by an already existing component being removed, or by an existing component altering the amount of data it produces.
  • This causes the system to disable all ports that were participating in the earlier data allocation chain.
  • the disabling also releases all data allocation reservations from the ports that were part of the allocation chain.
  • the new combination of components is established, and the ports are re-enabled to be operative again for new allocations in the new component chain.
  • an allocator port of the newly formed chain has to determine the new balance of allocation units each component is providing to the allocation chain.
  • the allocator port cumulates the information of extra data each port in the chain provides. Based on this information, the total amount of memory needed for the buffer on the total allocation chain is computed. Finally the allocator port provides a buffer for data allocation chain with the right size and the system with new component setup becomes operative again.
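  • In pseudo-C, the reconfiguration sequence of Figure 6 might be sketched as follows; every identifier below is a placeholder for the port disable/enable commands and the size computations described above, not part of the OpenMAX API.
```c
/* Placeholders: an opaque chain handle and helpers standing in for the
 * OMX_CommandPortDisable/PortEnable sequences and the size computations. */
struct chain;

void disable_all_chain_ports(struct chain* c);        /* releases all shared buffers      */
void apply_component_change(struct chain* c);         /* add, remove or alter a component */
void enable_all_chain_ports(struct chain* c);         /* ports ready for new allocations  */
unsigned int consumed_data_size(struct chain* c);     /* data consumed by the components  */
unsigned int accumulated_extra_data_size(struct chain* c);  /* announced extra data       */
void provide_allocator_buffer(struct chain* c, unsigned int nBytes);

void reconfigure_allocation_chain(struct chain* c)
{
    disable_all_chain_ports(c);
    apply_component_change(c);
    enable_all_chain_ports(c);

    /* The allocator port re-balances the chain: consumed data plus the
     * accumulated extra data announced by the other components. */
    provide_allocator_buffer(c, consumed_data_size(c)
                              + accumulated_extra_data_size(c));
}
```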
  • the buffer allocation happens in a portable device.
  • the portable device may be a terminal device, which may belong to a variety of telecommunications networks like GSM, UMTS, WCDMA or some other networks.
  • the device may communicate with other network peers using WLAN, Bluetooth or some other near field technology.
  • the device may have local area network connection, UPnP connectivity, or it can belong to other pervasive home connectivity network.
  • there may be circuitry and electronics providing means for handling, receiving and transmitting data.
  • the device may consist of a touch sensitive or non-touch sensitive display, an input arrangement for inputting user's commands, a speaker and a microphone arrangement for conveying voice information, a digital video camera arrangement capable of capturing visual data at least in still and video formats, a microphone arrangement capable of capturing live audio, and a microprocessor for executing program codes defining functionality of the device. Coupled to the microprocessor there may be a memory arrangement implemented with ROM, RAM, SRAM, DRAM, CMOS, FLASH, DDR, SDRAM or some other memory technology. Further the device may consist of a hardware accelerator coupled to another hardware accelerator or the microprocessor with a memory bus. In the memory there may be stored a computer program product which, when executed by the processor, causes the device to perform various steps.
  • the device may be arranged in such a way that it provides the means that are both essential and necessary for performing the steps.
  • the computer program product may be composed with various programming languages depending on the needs of the programmers.
  • the use of the portable device may require the program code produced by the program product to be adjusted to new circumstances.
  • the program code adjusts its functions to the new circumstances and continues to provide executable steps to the processor.
  • the computer program product may consist of means for implementing multimedia capabilities.
  • a network device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of the embodiments.

Abstract

There is a method for providing a component chain for carrying out functions, components comprising a buffer for storing control data for the functions, at least one component comprising an allocator functionality for controlling the allocation of control data among said buffers to form a buffer allocation chain, determining control data amount consumed by the components in buffer allocation chain, in response to receiving announcements from other components in the said allocation chain, cumulating information of not-consumed control data of the components, computing a total amount of memory needed for successfully carrying out allocation of control data among buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data available but not consumed by the components of said buffer allocation chain and providing a buffer with computed size to a buffer allocation chain for allocating buffers.

Description

System and method for allocating buffers
Field of the Invention The present invention relates to systems and methods for enabling portability of components and media applications in the mobile device landscape.
Background of the Invention
The standard for a portable media library is developed by the OpenMAX community. OpenMAX defines a royalty-free cross-platform application programming interface specification offering comprehensive streaming media codec and application portability by enabling development, integration and programming of accelerated multimedia components across multiple operating systems and hardware platforms. The goal of the standard is to reduce the cost and complexity of porting multimedia software to new processors and architectures. OpenMAX abstracts the C-language based functionality to a higher level of representation, which allows straightforward design and implementation of a variety of media use cases.
The structure of the standard comprises three main layers and a number of commonly defined media engines and media component sub-layers. The formed structure resembles a stack model. The main layers are Application Layer (AL), Integration Layer (IL) and Development Layer (DL). The Integration Layer provides an API that is used for enabling portability across operating system platforms. The interface allows the user to control the individual blocks of functionalities. These blocks are called components, and each component and relevant transform is encapsulated in a component interface. The OpenMAX IL API allows the user to load, control, connect, and unload the individual components. An OpenMAX IL component provides access to a standard set of component functions via its component handle. These functions allow a client to get and set component and port configuration parameters, get and set the state of the component, send commands to the component, receive event notifications, allocate buffers, establish communications with a single component port, and establish communication between two component ports. Each OpenMAX IL component has at least one port. OpenMAX defines four types of ports corresponding to the types of data a port may transfer: audio, video and image data ports and a port for other data types. Ports are defined as either input or output ports depending on whether they consume or produce buffers.
In an OpenMAX system multimedia processing functions are cascaded one component after another so that components are connected with their input and output ports respectively. In the system like this, each port has its own buffer, which can be shared with ports of the same component or ports of the other components in the system of components.
Currently buffer sharing is implemented on a tunnel between an input port of a component and an output port of neighboring component, and is transparent to other components. A tunnel between any two ports represents a dependency between those ports. Buffer sharing extends that dependency so that all ports that share the same set of buffers form an implicit dependency chain. Exactly one port in that dependency chain allocates the buffers shared by all of them.
Whenever buffer sharing happens across the component chain, the reservation of the total buffer is done in such a way that each component's buffer size is checked and the buffer size having maximum value is selected.
When a new component generating extra data is connected to a system or an existing component is removed from the system, a traditional buffer sharing causes severe problems as the reserved buffer size might not be big enough for the needs of the added component or the removal of a component causes the shared buffer not to match anymore to the buffer reservation. There is, therefore, a need for a solution that enables a successful connection and disconnection of components in a component chain and also a successful allocation of buffers in these circumstances.
Summary of the Invention
Now there has been invented an improved method and technical equipment implementing the method, by which the above problems are alleviated. Various aspects of the invention include a method, an apparatus, a server, a client and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments of the invention are disclosed in the dependent claims.
According to a first aspect, there is provided a method for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, determining control data amount consumed by the components involved in said buffer allocation chain, in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain, computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain and providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers. According to an embodiment, the method further comprises receiving of said announcements upon said component chain initiates its functionality first time. According to an embodiment, the method further comprises receiving of said announcements upon detecting a disturbance made to said component setup. According to an embodiment, the method further comprises introducing or removing at least one component to/from at least one chain of a plurality of components that share said data buffer. According to an embodiment, the method further comprises disabling all ports that are part of a buffer allocation chain established by said at least two components, releasing all buffers from all ports belonging to said buffer allocation chain and re-enabling all ports that were affected on said buffer allocation chain. According to an embodiment, the method further comprises receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component. According to an embodiment, the method further comprises delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information and receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream. According to an embodiment, method further comprises computing the total amount of data not consumed by the components of said data allocation chain and needed to be reserved for the buffer. According to an embodiment, method further comprises determining the actual data size comprises the selection from maximum of nBufferSize values from components in said buffer allocation chain. 
According to an embodiment, method further comprises releasing all buffers from all ports belonging to said buffer allocation chain sets the initial values of said buffers to zero.
According to second aspect, there is provided an apparatus comprising a processor and a memory including computer program product, the memory and the computer program product configured to, with the processor, cause the apparatus to provide a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, to determine control data amount consumed by the components involved in said buffer allocation chain, in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain, to compute a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain, delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information, receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream, and to provide a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers. According to an embodiment, apparatus further comprises the selection from maximum of nBufferSize values from components in said buffer allocation chain.
According to third aspect, there is provided an apparatus comprising means for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, means for determining control data amount consumed by the components involved in said buffer allocation chain, means for receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain, means for computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain, providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers. According to an embodiment, the apparatus further comprises means for receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component. According to an embodiment, the apparatus further comprises means for delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information and means for receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
According to fourth aspect, there is provided a computer program product stored on a computer readable medium and executable in a data processing device, the computer program product comprising a computer program code section for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain, a computer program code section for determining control data amount consumed by the components involved in said buffer allocation chain a computer program code section for in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain a computer program code section for computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain and a computer program code section for providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers. According to an embodiment, the computer program product further comprises a computer program code section for receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component. According to an embodiment, the computer program product further comprises a computer program code section for delivering the accumulation information of the amount of data not consumed by the components of said data allocation chain propagated from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information and a computer program code section for receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
Description of the Drawings
In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings, in which
Fig. 1 shows an example of the OpenMAX integration layer API landscape;
Fig. 2A shows a format of a buffer transmission unit incorporating an extra data sub-buffer;
Fig. 2B shows the relationship between components forming a buffer;
Fig. 3 shows a relationship of system components and ports in one possible embodiment;
Fig. 4 shows an example of buffer allocation and sharing relationship in one possible embodiment;
Fig. 5 shows an example of extra data buffer sharing in another possible embodiment; and
Fig. 6 shows a flowchart in the case where existing data allocation is disturbed due to changed component configuration in the system and a new allocation need has occurred.
Detailed Description
In the following, several embodiments of the invention will be described in the context of media library utilization in the OpenMAX standard and resource reservation thereof. It is to be noted, however, that the invention is not limited to the OpenMAX environment alone. In fact, the different embodiments have applications in any environment where optimization of data transmission and resource reservation is required.
Figure 1 presents an operating landscape for the OpenMAX integration layer (IL) application programming interface (API). The OpenMAX IL API is aimed at filling the gap of a missing multimedia middleware framework for some systems. Also, in some cases a native media framework can be replaced with the OpenMAX integration layer 104. As Figure 1 depicts, OpenMAX IL 104 fits seamlessly into an OpenMAX Application Layer 102 implementation. The OpenMAX standard also defines a set of Development Layer (DL) 112 primitives, shown in Figure 1 as 114, which can be used as building blocks of components.
The OpenMAX IL API is a component-based media API that consists of two main segments: the core API and the component API. The OpenMAX IL core is used for dynamically loading and unloading components and for facilitating component communication. Once loaded, the API allows the user to communicate directly with the component, which eliminates any overhead for high commands. Similarly, the core allows a user to establish a communication tunnel between two components. Once established, the core API is no longer used and communications flow directly between components.
In the OpenMAX Integration Layer, components represent individual blocks of functionality. Components can be sources, sinks, codecs, filters, splitters, mixers, or any other data operator. Depending on the implementation, a component could possibly represent a piece of hardware, a software codec, another processor, or a combination thereof.
Resource management in OpenMAX IL is based on behavioral rules, priorities and component states. Each OpenMAX IL component can undergo a series of state transitions, for example UNLOADED, LOADED, INVALID, WAIT FOR RESOURCES, IDLE, PAUSED and EXECUTING. Every component is first considered to be in the UNLOADED state. The component can transition to LOADED through a call to the OpenMAX IL core. All other state transitions may then be achieved by communicating directly with the component. It is also possible for a component to enter an invalid state when a state transition is made with invalid data. It is possible to enter the invalid state from any state, but the only way to exit the invalid state is to unload and reload the component. In general, the component shall have all its operational resources when it is in the IDLE state. Transitioning into the IDLE state may fail since this state requires allocation of all operational static resources. When the transition from LOADED to IDLE fails, the IL client may try again or may choose to put the component into the WAIT FOR RESOURCES state. Upon entering the WAIT FOR RESOURCES state, the component uses a sub-routine that alerts it when resources have become available, and the component can then perform a transition into the IDLE state.
The IDLE state indicates that the component has all of its needed static resources but is not processing data. The EXECUTING state indicates that the component is pending reception of buffers to process data and will try to retrieve them later. The PAUSED state maintains a context of buffer execution with the component without processing data or exchanging buffers. Transitioning from PAUSED to EXECUTING enables buffer processing to resume where the component left off. Transitioning from EXECUTING or PAUSED to IDLE will cause the context in which buffers were processed to be lost, which requires the start of a stream to be reintroduced. Transitioning from IDLE to LOADED will cause operational resources such as communication buffers to be lost.
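As a purely illustrative sketch, an IL client could request the LOADED-to-IDLE transition roughly as follows. The component handle hComp and the helper name request_idle are assumptions used only for illustration, and in practice the outcome of the transition is reported asynchronously through the component's event callback, which is omitted here.

#include <OMX_Core.h>

/* Sketch only: request the LOADED -> IDLE transition for an already obtained
 * component handle. If the request is not accepted, fall back to the
 * WAIT FOR RESOURCES state as described above. */
static void request_idle(OMX_HANDLETYPE hComp)
{
    OMX_ERRORTYPE err = OMX_SendCommand(hComp, OMX_CommandStateSet, OMX_StateIdle, NULL);
    if (err != OMX_ErrorNone) {
        OMX_SendCommand(hComp, OMX_CommandStateSet, OMX_StateWaitForResources, NULL);
    }
}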
When considering this from the buffer point of view, the IDLE state is the state of a component when buffers are allocated to the component and its ports, but no processing of data happens yet. Whenever a component is asked to perform a task, it either waits for input from some other component or delivers data to its one or several output ports. In both cases the data processing is triggered by the state transition to the EXECUTING state, which indicates the component is ready to start processing data whenever the data is available to it in the buffers.
Communication behavior between OpenMAX components follows simple rules. Configuration of a component may be accomplished once the handle to the component has been received from the OpenMAX IL core. Data communication calls are enabled once the number of ports has been configured. Also, each port needs to be configured for a specific data format, and the component must have been put in the appropriate state. Data communication is specific to a port of the component. Input ports are called from the IL client or its tunneled peer port with the function OMX_EmptyThisBuffer, whereas output ports are called from the IL client or its tunneled peer port with the function OMX_FillThisBuffer. Whenever these data transmissions are received by an input port of a component in its EXECUTING state, the component starts processing the data contained in the buffer before forwarding it to the next component in the chain.
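A minimal sketch of these two calls from the IL client's perspective is given below. The handle hDecoder, the buffer headers pInHdr and pOutHdr, and the helper name exchange_buffers are assumptions for illustration; the headers would have been obtained earlier, e.g. through OMX_AllocateBuffer or OMX_UseBuffer.

#include <OMX_Core.h>

/* Sketch: hand a filled buffer to an input port and an empty buffer to an
 * output port of the same component. */
static void exchange_buffers(OMX_HANDLETYPE hDecoder,
                             OMX_BUFFERHEADERTYPE *pInHdr,
                             OMX_BUFFERHEADERTYPE *pOutHdr)
{
    OMX_EmptyThisBuffer(hDecoder, pInHdr);   /* input port: consume this data        */
    OMX_FillThisBuffer(hDecoder, pOutHdr);   /* output port: fill this empty buffer  */
}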
The OpenMAX IL standard defines the metadata used to describe the buffers allocated and exchanged between the components. A buffer header element holds all the necessary parameters describing the details of the buffer and a pointer to the exact location of data within the physical buffer. In this text the concept "sending a buffer" refers simply to the action where a buffer header with pointers to the actual data is sent from one port to another.
In the following, some of the parameters used in the header are described briefly:
- pBuffer is a pointer to the actual buffer where data is stored, but not necessarily the start of valid data.
- nAllocLen is the total size of the allocated buffer in bytes, including valid and unused bytes.
- nFilledLen is the total size of valid bytes currently in the buffer starting from the location specified by pBuffer and nOffset. This includes any padding, e.g. the unused bytes at the end of a line of video when the stride in bytes is larger than the width in bytes.
- nOffset is the start offset of valid data in bytes from the start of the buffer. A pointer to the valid data may be obtained by adding nOffset to pBuffer.
- pAppPrivate is a pointer to an IL client private structure.
- pPlatformPrivate is a pointer to a private platform-specific structure.
- pOutputPortPrivate is a private pointer of the output port that uses the buffer.
- pInputPortPrivate is a private pointer of the input port that uses the buffer.
- The nFlags field contains buffer specific flags.
- nOutputPortIndex contains the port index of the output port that uses the buffer.
- nInputPortIndex contains the port index of the input port that uses the buffer.
In the context of a single port, each data buffer has a header associated with it that contains meta-information about the buffer. The IL client shares buffer headers with each port with which it is communicating. Likewise, each pair of tunneling ports share buffer headers; otherwise, the same buffer transferred over multiple ports will have distinct buffer headers associated with it for each port.
The port configuration is used to determine and define the format of the data to be transferred on a component port, but the configuration does not define how that data exists in the buffer.
There are generally three cases that describe how a buffer can be filled with data. Each case presents its own benefits. In all cases, the range and location of valid data in a buffer is defined by the pBuffer, nOffset, and nFilledLen parameters of the buffer header. The pBuffer parameter points to the start of the buffer. The nOffset parameter indicates the number of bytes between the start of the buffer and the start of valid data. The nFilledLen parameter specifies the number of contiguous bytes of valid data in the buffer. The valid data in the buffer is therefore located in the range pBuffer + nOffset to pBuffer + nOffset + nFilledLen.
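As a small illustrative sketch of the relation just described, the valid payload inside a received buffer could be located as shown below. The function name locate_valid_data and the assumption that pHdr is an already received buffer header are illustrative only.

#include <OMX_Core.h>

/* Sketch: compute the range of valid data from the header fields. */
static void locate_valid_data(const OMX_BUFFERHEADERTYPE *pHdr)
{
    OMX_U8 *pValidStart = pHdr->pBuffer + pHdr->nOffset;   /* first valid byte        */
    OMX_U8 *pValidEnd   = pValidStart + pHdr->nFilledLen;  /* one past the last byte  */
    (void)pValidStart; (void)pValidEnd;                    /* actual processing omitted */
}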
The following cases are representative of compressed data in a buffer that is transferred into or out of a component when decoding or encoding. In all cases, the buffer just provides a transport mechanism for the data with no particular requirement on the content. The requirement for the content is defined by the port configuration parameters.
Case 1: Each buffer is filled in whole or in part. In the case of buffers containing compressed data frames, the frames are denoted by f1 to fn. Case 1 provides a benefit when decoding for playback. The buffer can accommodate multiple frames and reduce the number of transactions required to buffer an amount of data for decoding. However, this case may require the decoder to parse the data when decoding the frames. It also may require the decoder component to have a frame-building buffer in which to put the parsed data or maintain partial frames that would be completed with the next buffer.
Case 2: Each buffer is filled with only complete frames of compressed data. Case 2 differs from case 1 because it requires the compressed data to be parsed first so that only complete frames are put in the buffers. Case 2 may also require the decoder component to parse the data for decoding. This case may not require the extra working buffer for parsing frames required in case 1.
Case 3: Each buffer is filled with only one frame of compressed data. The benefit in case 3 is that a decoding component does not have to parse the data. Parsing would be required at the source component. However, this method creates a bottleneck in data transfer. Data transfer would be limited to one frame per transfer. Depending on the implementation, one transaction per frame could have a greater impact on performance than parsing frames from a buffer.
Depending on component requirements and system configurations, a need may arise where additional supporting information will need to be appended to the end of the buffer to further process the buffer payload content within the next component.
In figure 2A a simplified format of a buffer pointer transmission unit is presented, containing also pointers to the additional payload of the buffer. The existence of additional buffer payload information 208 is identified via the extra data buffer flag within the buffer header structure 206. This additional buffer payload information applies to the first new logical unit in the buffer. Thus, in the case of multiple logical units in a buffer, the extra data flag applies to the logical unit whose starting boundary occurs first in the buffer. Subsequent logical units in a buffer do not have explicit extra data. When extra data is present, the data attributes like type and size are identified by a corresponding data structure 206, immediately following the buffer payload 208 and preceding the actual data. Multiple types of extra data may be appended to the end of the normal payload 202 as a series of block pairs. If the reserved buffer structure is bigger than the actual amount of data in the offset padding 200, the rest of the buffer structure is filled with unused padding 212. Figure 2B represents the relationship of how data reservation is partitioned between actual data consumed by components and extra data within a buffer, whose size is defined with the nBufferSize parameter and whose starting point in memory space is expressed with the pBuffer pointer. The parameter contains information of the minimum size in bytes for buffers that are allocated for a certain port. If there is no extra data 218 to be processed in the buffer, the whole buffer is reserved for actual data 214. As can be seen from Figure 2B, the amount of extra data can vary from zero to a predetermined value. Data communication with components is directed to a specific component port. This way each port has a component-defined minimum number of buffers it can allocate or use. A port associates a buffer header with each buffer. A buffer header references data in the buffer and provides metadata associated with the contents of the buffer. Every component port is capable of allocating its own buffers or using pre-allocated buffers; one of these choices will usually be more efficient than the other.
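Returning to the extra data structure described above, an extra data block of the kind appended after the payload could be laid out roughly as sketched below. This follows the general pattern of the OMX_OTHER_EXTRADATATYPE structure of the IL specification, but the name EXTRADATA_BLOCK and the exact field set shown here are assumptions that should be checked against the specification version in use.

#include <OMX_Core.h>

/* Sketch of an extra data block header appended after the normal payload. */
typedef struct EXTRADATA_BLOCK {
    OMX_U32 nSize;             /* size of this block, including the trailing data */
    OMX_VERSIONTYPE nVersion;  /* specification version                           */
    OMX_U32 nPortIndex;        /* port this extra data relates to                 */
    OMX_EXTRADATATYPE eType;   /* type of the extra data that follows             */
    OMX_U32 nDataSize;         /* number of valid bytes in data[]                 */
    OMX_U8  data[1];           /* start of the extra data itself                  */
} EXTRADATA_BLOCK;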
For a given tunnel, exactly one port supplies the buffers and passes those buffers to the non-supplier port. Normally the supplier port of a tunnel also allocates the buffers. Under the right circumstances, however, a tunneling component may choose to re-use buffers from one port on another to avoid memory copies and optimize memory usage.
Generally in the OpenMAX environment, among a pair of ports that are tunneling, the port that calls UseBuffer on its neighbor is known as a supplier port. A buffer supplier port does not necessarily allocate its buffers; it may re-use buffers from another port on the same component.
In Figure 3 a typical relationship between ports is illustrated. Component A has a buffer 300, which is shared between other components. Ports 308 and 312 are illustrated as supplier ports. The port that receives the UseBuffer 302 calls from its neighbor is known as a non-supplier port. Ports 310 and 314 illustrate non-supplier ports. A port's tunneling port is the port neighboring it with which it shares a tunnel. For example, port 310 is the tunneling port to port 308. Likewise, port 308 is the tunneling port to port 310. An allocator port is a supplier port that also allocates its own buffers. Port 308 is the only allocator port in Figure 3. Another port type is a sharing port, that is, a port that re-uses buffers from another port on the same component. This way, port 312 in Figure 3 is a sharing port as it re-uses the buffer from port 310. The sharing relation is marked with 304 in Figure 3. In one embodiment buffer sharing extends the dependency of the components so that all ports that share the same set of buffers form an implicit dependency chain. One port in that dependency chain allocates the buffers shared by all of them. In Figure 3 that allocating port is port 308. A port can have a set of requirements for a buffer. These requirements may be, for example, the number of buffers required by the port and the size of each required buffer. The maximum of multiple sets of buffer requirements is defined as the largest number of buffers derived from any set combined with the largest size derived from any set.
One embodiment relates to a situation where a component is attached to the dependency chain afterwards, when the original component system has already been established for some time. Adding a new component to the set of chained components already running happens by disabling the specific ports of the neighboring components. Disabling the ports resets the memory buffer allocated at those ports to the default values the components had initially. If the neighboring ports were part of a bigger buffer sharing chain, then the ports and components involved in this buffer sharing chain have to be disabled as well, and a new allocation of the buffers takes place. The new component brings a new source or a sink to the available buffer resource. A buffer size calculation should be performed in this situation in such a way that each port having extra data requirements for a buffer informs of its production of extra data, or of its capability to receive other components' extra data into its buffer, with a parameter dedicated to this purpose. The parameter can be, for example, nExtraDataSizeShared or nExtraDataSizePropagated. The use of these parameters is described in more detail later on in this document.
In one embodiment a data reservation unit, extra data, is introduced. It may or may not be a real variable for a data reservation. Extra data is appended to the buffer that holds the component's processed data. The buffer header of this buffer is then propagated to the next component in the chain, which utilizes the extra data found in the pointed location in order to properly process the actual data in the buffer. The component will communicate this information to other components so that they can make their own extra data allocation calculations based on the data they receive from other components taking part in the buffer allocation component chain. The allocator component's port will query sharing ports downstream and, in this process, update its extra data size by cumulating the size of extra data across all sharing components downstream. At the same time, it will query any buffer size being propagated through its input ports from upstream and use this information to correctly compute the extra data size. As an example, the parameter that may contain the original buffer size information could be nBufferSize. This parameter for any component will not contain the extra data size, only the buffer size of the original dependency chain. If, for example, the buffer is supposed to handle a video frame of 320x240 resolution and 3 bytes per pixel, then the memory needed would be 320x240x3 bytes = 230400 bytes. The given component may require more memory for certain alignments etc. All of these will be accumulated to form the nBufferSize. This will not include the extra data size, which will be computed separately. This way the total size of the buffer will be the original buffer size added to the extra data size.
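The size computation described above can be sketched as follows. The helper name total_buffer_size and its parameters are illustrative only; nBufferSize comes from the ordinary dependency-chain negotiation and the extra data sizes would be gathered via the shared/propagated queries described later.

#include <OMX_Types.h>

/* Sketch: total buffer size = negotiated nBufferSize + accumulated extra data. */
static OMX_U32 total_buffer_size(OMX_U32 nBufferSize,
                                 const OMX_U32 *extraDataSizes, OMX_U32 nEntries)
{
    OMX_U32 total = nBufferSize;
    OMX_U32 i;
    for (i = 0; i < nEntries; i++) {
        total += extraDataSizes[i];   /* extra data appended by each sharing component */
    }
    return total;
}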
In another embodiment buffers may need to be copied. When the producer of extra data does not share the same buffer with the consumer of extra data, the intermediate component which allocates a new set of buffers needs to copy across the extra data and forward it downstream all the way to the consumer component. In consequence of this, the intermediate component or its tunneled port needs to allocate a new set of buffers. In order to do this buffer allocation successfully, the allocator component preferably takes into account any extra data size coming from upstream.
One example of code that is suitable for defining extra data sharing could be expressed as follows:
OMX_IndexParamExtraDataSizeShared
typedef struct OMX_PARAM_EXTRADATASIZESHARED {
    OMX_U32 nSize;
    OMX_VERSIONTYPE nVersion;
    OMX_U32 nPortIndex;
    OMX_U32 nExtraDataSizeShared; //read-only
} OMX_PARAM_EXTRADATASIZESHARED;
Another example of code that is suitable for defining extra data propagation could be expressed as follows:
OMX_IndexParamExtraDataSizePropagated
typedef struct OMX_PARAM_EXTRADATASIZEPROPAGATED {
    OMX_U32 nSize;
    OMX_VERSIONTYPE nVersion;
    OMX_U32 nPortIndex;
    OMX_U32 nExtraDataSizePropagated; //read-only
} OMX_PARAM_EXTRADATASIZEPROPAGATED;
These code examples can be utilized, for example, in the OpenMAX environment by calling the macro GetParameter(), defined in the OpenMAX IL standard version 1.1.2 from the year 2008, in chapter 3.3.5.
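A hedged sketch of such a query from an allocator port's point of view is given below. The handle hTunneledComp, the port index nTunneledPort and the helper name query_shared_extradata are assumptions for illustration, and OMX_IndexParamExtraDataSizeShared is the index proposed in this document rather than a standard IL index.

#include <string.h>
#include <OMX_Core.h>

/* Sketch: query the extra data size shared by a tunneled downstream port. */
static OMX_U32 query_shared_extradata(OMX_HANDLETYPE hTunneledComp, OMX_U32 nTunneledPort)
{
    OMX_PARAM_EXTRADATASIZESHARED param;
    memset(&param, 0, sizeof(param));
    param.nSize = sizeof(param);
    param.nVersion.s.nVersionMajor = 1;   /* version fields assumed to match IL 1.1.x */
    param.nVersion.s.nVersionMinor = 1;
    param.nPortIndex = nTunneledPort;
    if (OMX_GetParameter(hTunneledComp, OMX_IndexParamExtraDataSizeShared, &param) == OMX_ErrorNone) {
        return param.nExtraDataSizeShared;
    }
    return 0;   /* nothing shared or query not supported */
}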
The abovementioned protocol code examples are used in one possible embodiment where the components move from the loaded to the idle state and resource allocation happens. Another possible embodiment is an occurrence where disabled ports get enabled in a state other than the LOADED state. The protocol codes define certain new structure types in OpenMAX that are used for storing data units in the variables nExtraDataSizeShared and nExtraDataSizePropagated respectively. The other information units both structures contain are the size of the unit stored in the nSize parameter, the used OpenMAX standard version as stored in the nVersion parameter, and the nPortIndex parameter representing the read-only value containing the index of the port.
In one embodiment the first set of code is used and the default values of the buffers in the components are initially set to zero.
In another embodiment the second set of code is used and the default values of the buffers in the components are initially set to zero.
Figure 4 shows as one embodiment how the buffer allocation with sharing can be established. Figure 4 depicts the steps needed for component C to achieve the idle state. Alongside this, the whole chain of components moves to the idle state. The following concentrates on component C only, for clarity reasons. When the IL client commands component C to transition from the loaded to the idle state, the following prescribed steps are taken:
- Component C knows that it can re-use port 414 buffers 406 since port 416 is a supplier port. Component C establishes a sharing relationship from port 414 to port 416.
- Component C decides that since port 414 is a supplier port that does not re-use buffers, port 414 shall be an allocator port.
- Component C allocates and distributes port 414 buffers. Since port 416 will re-use the buffers of port 414, component C determines the buffer requirements of port 416. After that, port 416 calls the OMX_GetParameter function on port 418 to determine its buffer requirements and reports the requirements as the maximum between its own and those of port 418. Next, port 414 calls the OMX_GetParameter function on port 412 to determine its buffer requirements via nBufferSize. Port 412 determines the buffer requirements of port 410. Port 410 returns the maximum of its own requirements and the requirements of port 408, retrieved via an OMX_GetParameter function call. Port 412 then returns the maximum of its own requirements and the requirements that port 410 returns. Port 414 allocates buffers according to the maximum of its own requirements and the requirements that ports 412 and 416 return. The resulting buffers are effectively allocated according to the maximum requirements of ports 408, 410, 412, 414, 416 and 418, all of which use the buffers of port 414. Since port 416 will re-use the buffers of port 414, component C shares these buffers with port 416. For utilizing this, port 416 calls the OMX_UseBuffer function on port 418 for every buffer that is shared. For each buffer allocated, port 414 calls the OMX_UseBuffer function on port 412. Now port 412 shares each buffer with port 410. Port 410, in turn, calls the OMX_UseBuffer function on port 408 with the buffer.
Since all ports of all components now have their buffers, all components in the system may transition to the idle state and the buffer allocation sharing is completed. Summarizing the outcome, the size of the buffers being shared in this embodiment is the maximum of the nBufferSize values from components A to D.
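A hedged sketch of one such step, an allocator port folding a tunneled port's requirements into its own maxima, is given below. The handle hTunneled, the index nTunneledPort, the in/out maxima pSize and pCount, and the helper name fold_in_requirements are assumptions for illustration.

#include <string.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Sketch: raise the allocator's buffer size/count maxima to cover a tunneled port. */
static void fold_in_requirements(OMX_HANDLETYPE hTunneled, OMX_U32 nTunneledPort,
                                 OMX_U32 *pSize, OMX_U32 *pCount)
{
    OMX_PARAM_PORTDEFINITIONTYPE def;
    memset(&def, 0, sizeof(def));
    def.nSize = sizeof(def);
    def.nVersion.s.nVersionMajor = 1;   /* version fields assumed to match IL 1.1.x */
    def.nVersion.s.nVersionMinor = 1;
    def.nPortIndex = nTunneledPort;
    if (OMX_GetParameter(hTunneled, OMX_IndexParamPortDefinition, &def) == OMX_ErrorNone) {
        if (def.nBufferSize > *pSize)         *pSize  = def.nBufferSize;
        if (def.nBufferCountActual > *pCount) *pCount = def.nBufferCountActual;
    }
}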
Figure 5 depicts another possible embodiment of the invention with the details of buffer sharing in the case where the sharing components also generate extra data. In this embodiment, the buffer size needed for the actual data reservation of the components is determined in two parts: first the maximum shared buffer size, parameter nBufferSize, is determined in the same way as in the procedure depicted along with Figure 4. In addition to this, there is also a need to determine the accumulated extra data each component will provide to the component chain. Next this part is discussed in more detail. In Figure 5 the operating setup is arranged such that ports 500, 508 and 514 are allocator ports. Allocator output port 500 on component A will query its tunneled input port 502 on B for the extra data size on any of component B's sharing ports downstream. The component can, for example, use the OpenMAX function call GetParameter(OMX_IndexParamExtraDataSizeShared, OMX_PARAM_EXTRADATASIZESHARED*) for querying this information.
As a result of this, a query will be passed on to output port 504 on component B, which will return any extra data size it might have after querying its tunneled input port 506 on component C. Since buffer sharing is defined here to be within the range 516 and this port does not share buffers with its output port 508 on C, the query stops here. At every stage, any extra data size queried will be gradually added and eventually be available to allocator output port 500 on component A. Allocator output port 500 on component A does not have any input port, so it does not query whether there is any extra data size being propagated through its input ports. So, the total extra data size will be the sum of the extra data size of port 500 added to the extra data size other ports share with port 500. The size of the buffers being shared is the summation of the maximum of the nBufferSize values from components A to C added to the total extra data size from components A to C.
Similarly, allocator output port 514 on component E will query its tunneled input port 512 on component C for the extra data size on any of its sharing ports downstream. Since port 512 of component C does not share buffers with component C's output port 508, the query stops there, as indicated with the buffer sharing range 520 in Figure 5. Thereafter, the total extra data size port 514 could use is its own extra data size only, and no buffer sharing happens here.
Allocator output port 508 on component C will query any extra data size on its sharing port 510 downstream. Figure 5 depicts that port 510 has no sharing port downstream. Port 510 has got tunneled port 508 upstream, but the query will not proceed further downstream since input port 510 on component D does not have any other port to communicate with. This way, ports 510 and 508 share a buffer the allocator port 508 provides. At the same time, output port 508 on component C has got two input ports 506 and 512 which it uses to process the buffers and forward them downstream. It will query both input port 506 and input port 512 on C via, e.g., the OpenMAX function call GetParameter(OMX_IndexParamExtraDataSizePropagated, OMX_PARAM_EXTRADATASIZEPROPAGATED*), and this query, performed in the upstream direction, will keep on passing until it reaches the relevant source component. As a result, these queries will finally provide the extra data size propagated from the source components. Allocator output port 508 on component C will compute the total extra data size as a sum of its own extra data size plus the extra data size propagated from input port 506 on component C plus the extra data size propagated from input port 512 on component C. The total size of the buffers being shared in this scenario is the summation of the maximum of the nBufferSize values from components A, B, C and E added to the total extra data size from components A to C and E. As an example embodiment, a given chain of components is supposed to process a video frame of 320x240 resolution with, for example, 3 bytes per pixel. The nBufferSize will point to 320x240x3 bytes plus any component specific memory needed for alignment of components etc. For example, components A, B and C require extra data of 100 bytes each. If the nBufferSize also accommodates the extra data needs of a component, then the nBufferSize from a given component will be a sum of 320x240x3 bytes plus 100 bytes plus any component specific memory needed for the alignment. Simply picking the maximum of the nBufferSize values will not address the memory needs of the buffer sharing chain, since components A, B and C will each need 100 bytes of extra memory to append their extra data.
Figure 6 depicts the data flow chart of one possible embodiment of the invention. In Figure 6 the situation starts in the idle state. That means that all components in the previous situation have had the resources allocated to them. Next, a disturbance occurs to the chain which causes a difference in the amount of data the system comprises. The disturbance can be caused, e.g., by an additional component being added to the existing chain of components, by some already existing component being removed, or by an existing component altering the amount of data it produces. This causes the system to disable all ports that were participating in the earlier data allocation chain. The disabling also releases all data allocation reservations from the ports that were part of the allocation chain. Right after the release of the ports the new combination of components is established, and the ports are re-enabled so as to be operative again for new allocations in the new component chain. Now an allocator port of the newly formed chain has to determine the new balance of allocation units each component is providing to the allocation chain. The allocator port cumulates the information of extra data each port in the chain provides. Based on this information, the total amount of memory needed for the buffer on the total allocation chain is computed. Finally the allocator port provides a buffer for the data allocation chain with the right size, and the system with the new component setup becomes operative again.
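As a hedged sketch of the disable/re-enable step of this flow, a client could drive one affected port roughly as follows. The handle hComp, the index nPortIndex and the helper name rebuild_port_allocation are assumptions for illustration, and the completion events that should be waited on between the two commands are omitted.

#include <OMX_Core.h>

/* Sketch: disable the affected port (releasing its buffers), reconfigure,
 * then re-enable it so that a new allocation with the recomputed size occurs. */
static void rebuild_port_allocation(OMX_HANDLETYPE hComp, OMX_U32 nPortIndex)
{
    OMX_SendCommand(hComp, OMX_CommandPortDisable, nPortIndex, NULL);
    /* ... wait for OMX_EventCmdComplete, attach/remove components, recompute sizes ... */
    OMX_SendCommand(hComp, OMX_CommandPortEnable, nPortIndex, NULL);
}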
In one embodiment the buffer allocation happens in a portable device. The portable device may be a terminal device, which may belong to a variety of telecommunications networks like GSM, UMTS, WCDMA or some other networks. The device may communicate over WLAN, Bluetooth or some other near field technology with other network peers. The device may have a local area network connection or UPnP connectivity, or it can belong to another pervasive home connectivity network. In the device there may be circuitry and electronics providing means for handling, receiving and transmitting data. The device may consist of a touch sensitive or non-touch sensitive display, an input arrangement for inputting the user's commands, a speaker and a microphone arrangement for conveying voice information, a digital video camera arrangement capable of capturing visual data at least in still and video formats, a microphone arrangement capable of capturing live audio, and a microprocessor for executing program codes defining the functionality of the device. Coupled to the microprocessor there may be a memory arrangement implemented with ROM, RAM, SRAM, DRAM, CMOS, FLASH, DDR, SDRAM or some other memory technology. Further, the device may consist of a hardware accelerator coupled to another hardware accelerator or to the microprocessor with a memory bus. In the memory there may be stored a computer program product which, when executed by the processor, causes the device to perform various steps. The device may be arranged in such a way that it provides the means that are both essential and necessary for performing the steps. The computer program product may be composed in various programming languages depending on the needs of the programmers. The use of the portable device may require the program code produced by the program product to be adjusted to new circumstances. The program code adjusts its functions to the new circumstances and continues to provide executable steps to the processor. The computer program product may consist of means for implementing multimedia capabilities. Yet further, a network device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of the embodiments.
It is obvious that the present invention is not limited solely to the above- presented embodiments, but it can be modified within the scope of the appended claims.

Claims

1. A method comprising:
- providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain;
- determining control data amount consumed by the components involved in said buffer allocation chain;
- in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain;
- computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain; and
- providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
2. A method according to claim 1, wherein said announcements are received upon said component chain initiating its functionality for the first time.
3. A method according to claim 1, wherein said announcements are received upon detecting a disturbance made to said component setup.
4. A method according to claim 3, wherein making a disturbance to said component setup comprises: - introducing or removing at least one component to/from at least one chain of a plurality of components that share said data buffer.
5. A method according to claims 3 or 4, wherein the disturbance in said component setup causes:
- disabling all ports that are part of a buffer allocation chain established by said at least two components;
- releasing all buffers from all ports belonging to said buffer allocation chain;
- re-enabling all ports that were affected on said buffer allocation chain.
6. A method according to claims 1 to 5, further comprising:
- receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
7. A method according to claims 1 to 6, further comprising:
- delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information; and
- receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
8. A method according to claim 7, where the received information is used for computing the total amount of data not consumed by the components of said data allocation chain and needed to be reserved for the buffer.
9. A method according to claims 1 to 8, where determining the actual data size comprises the selection from maximum of nBufferSize values from components in said buffer allocation chain.
10. A method according to claims 1 to 9, where releasing all buffers from all ports belonging to said buffer allocation chain sets the initial values of said buffers to zero.
11. A method according to claims 1 to 10, where the amount of available but not consumed control data is extra data.
12. An apparatus comprising a processor; and a memory including a computer program product, the memory and the computer program product configured to, with the processor, cause the apparatus to perform at least the following:
- providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain;
- determining control data amount consumed by the components involved in said buffer allocation chain;
- in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain;
- computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain; and - providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
13. An apparatus according to claim 12, wherein said announcements are received upon said component chain initiating its functionality for the first time.
14. An apparatus according to claim 12, wherein said announcements are received upon detecting a disturbance made to said component setup.
15. An apparatus according to claim 14, wherein making a disturbance to said component setup comprises:
- introducing or removing at least one component to/from at least one chain of a plurality of components that share said data buffer.
16. An apparatus according to claims 14 or 15, wherein the disturbance in said component setup causes:
- disabling all ports that are part of a buffer allocation chain established by said at least two components;
- releasing all buffers from all ports belonging to said buffer allocation chain;
- re-enabling all ports that were affected on said buffer allocation chain.
17. An apparatus according to claims 12 to 16, further comprising:
- receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
18. An apparatus according to claims 12 to 17, further comprising: - delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information; and
- receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
19. An apparatus according to claim 18, where the received information is used for computing the total amount of data not consumed by the components of said data allocation chain and needed to be reserved for the buffer.
20. An apparatus according to claims 12 to 19, where determining the actual data size comprises the selection from maximum of nBufferSize values from components in said buffer allocation chain.
21. An apparatus according to claims 12 to 20, where releasing all buffers from all ports belonging to said buffer allocation chain sets the initial values of said buffers to zero.
22. An apparatus according to claims 12 to 21, where the amount of available but not consumed control data is extra data.
23. An apparatus comprising:
- means for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain; - means for determining control data amount consumed by the components involved in said buffer allocation chain ;
- means for receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain;
- means for computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain;
- providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
24. An apparatus according to claim 23, further comprising: - means for receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
25. An apparatus according to claim 23, further comprising:
- means for delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information; and
- means for receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
26. An apparatus according to claim 23, further comprising:
- means for disabling all ports that are part of a buffer allocation chain established by said at least two components;
- means for releasing all buffers from all ports belonging to said buffer allocation chain; and
- means for re-enabling all ports that were affected on said buffer allocation chain.
27. A computer program product stored on a computer readable medium and executable in a data processing device, the computer program product comprising:
- a computer program code section for providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain;
- a computer program code section for determining control data amount consumed by the components involved in said buffer allocation chain;
- a computer program code section for in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain;
- a computer program code section for computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain; and a computer program code section for providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
28. A computer program product according to claim 27, further comprising:
a computer program code section for receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
29. A computer program product according to claim 27, further comprising:
a computer program code section for delivering the accumulation information of the amount of data not consumed by the components of said data allocation chain propagated from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information; and
a computer program code section for receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
30. A computer program product according to claim 27, where the received information is used for computing the total amount of data not consumed by the components of said data allocation chain and needed to be reserved for the buffer.
31. A computer program product according to claim 27, where determining the actual data size comprises the selection from maximum of nBufferSize values from components in said buffer allocation chain.
32. A computer program product according to claim 27, where releasing all buffers from all ports belonging to said buffer allocation chain sets the initial values of said buffers to zero.
33. A computer program product according to claim 27, where amount of available but not consumed control data is extra data.
34. A system for allocating buffers, comprising steps, when executed by a processor, for
- providing a chain of at least two components for carrying out a set of functions, at least one of said components comprising a buffer for storing control data for said set of functions, at least one component comprising an allocator functionality for controlling the allocation of said control data among said one or more buffers to form a buffer allocation chain;
- determining control data amount consumed by the components involved in said buffer allocation chain;
- in response to receiving, in the component comprising said allocator functionality, announcements from other components in the said allocation chain, cumulating information of available control data, which is not consumed by the components of said buffer allocation chain;
- computing a total amount of memory needed for successfully carrying out said allocation of said control data among said buffers by adding the amount of control data consumed by the components involved in said buffer allocation chain and the amount of accumulated control data which is available but not consumed by the components of said buffer allocation chain; and
- providing a buffer with computed size to be included in said buffer allocation chain for said allocation of said control data among said buffers.
35. A system according to claim 34, further comprising steps, when executed by a processor, for
- receiving announcements from other components in the said allocation chain for information of available control data which is not consumed by the components of said buffer allocation chain, wherein the said components reside downstream from the cumulating component.
36. A system according to claim 15, further comprising steps, when executed by a processor, for:
- delivering the accumulation information of available but not consumed control data by the components of said data allocation chain from at least one shared port of at least one component, wherein the at least one component providing this information resides upstream from the component cumulating the information; and
- receiving an announcement of the need of extra data buffer calculated by at least one component residing downstream from the component cumulating the information, the calculations basing on said cumulating information propagated from upstream.
PCT/FI2010/050257 2010-03-31 2010-03-31 System and method for allocating buffers WO2011121168A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/FI2010/050257 WO2011121168A1 (en) 2010-03-31 2010-03-31 System and method for allocating buffers

Publications (1)

Publication Number Publication Date
WO2011121168A1 true WO2011121168A1 (en) 2011-10-06

Family

ID=44711391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2010/050257 WO2011121168A1 (en) 2010-03-31 2010-03-31 System and method for allocating buffers

Country Status (1)

Country Link
WO (1) WO2011121168A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5675793A (en) * 1992-09-30 1997-10-07 Microsoft Corporation Dynamic allocation of a common buffer for use by a set of software routines
US6209041B1 (en) * 1997-04-04 2001-03-27 Microsoft Corporation Method and computer program product for reducing inter-buffer data transfers between separate processing components
US20020099758A1 (en) * 2000-12-06 2002-07-25 Miller Daniel J. System and related methods for reducing memory requirements of a media processing system
US20060126653A1 (en) * 2004-12-10 2006-06-15 Matthew Joseph Anglin Transferring data between system and storage in a shared buffer
US7710426B1 (en) * 2005-04-25 2010-05-04 Apple Inc. Buffer requirements reconciliation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
OH H. ET AL: "Data memory minimization by sharing large size buffers", PROCEEDINGS OF THE ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE 2000 (ASP-DAC'00), 25 January 2000 (2000-01-25) - 28 January 2000 (2000-01-28), YOKOHAMA, JAPAN, pages 491 - 496, XP010376394 *
RAITHEL, T.: "Software Synthesis with Evolutionary Algorithms", PROCEEDINGS OF THE IEEE INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE '99), 12 July 1999 (1999-07-12) - 16 July 1999 (1999-07-16), BLED, SLOVENIA, pages 1490 - 1495, XP010353915 *
THE KHRONOS GROUP INC.: "OpenMAX(TM) Integration Layer Application Programming Interface Specification, Version 1.1.2 [online]", 1 September 2008 (2008-09-01), Retrieved from the Internet <URL:http://www.khronos.org/registry/omxil/specs/OpenMAX_IL_1_1_2_Specification.pdf> [retrieved on 20110103] *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10848792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10848792

Country of ref document: EP

Kind code of ref document: A1