US20130174059A1 - Communicating between a virtual area and a physical space - Google Patents

Communicating between a virtual area and a physical space

Info

Publication number
US20130174059A1
Authority
US
United States
Prior art keywords
physical
network node
communicant
virtual
virtual area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/554,084
Inventor
David Van Wie
Paul J. Brody
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Social Communications Co
Sococo Inc
Original Assignee
Social Communications Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13/554,084
Application filed by Social Communications Co
Publication of US20130174059A1
Priority to US13/954,742 (published as US9319357B2)
Priority to US14/056,192 (published as US9288242B2)
Assigned to SOCIAL COMMUNICATIONS COMPANY. Assignment of assignors interest (see document for details). Assignors: VAN WIE, DAVID
Priority to US15/070,551 (published as US9602447B2)
Assigned to SOCIAL COMMUNICATIONS COMPANY. Assignment of assignors interest (see document for details). Assignors: BRODY, PAUL J.
Assigned to Sococo, Inc. Change of name (see document for details). Assignors: SOCIAL COMMUNICATIONS COMPANY
Priority to US15/460,125 (published as US9942181B2)
Priority to US15/950,067 (published as US10608969B2)
Priority to US16/814,702 (published as US20200213256A1)
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06Q 50/60
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1827 Network arrangements for conference optimisation or adaptation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 65/403 Arrangements for multi-party communication, e.g. for conferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/75 Media network packet handling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M 3/567 Multimedia conference systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H04N 7/157 Conference systems defining a virtual conference space and using avatars or agents
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/42 Graphical user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M 2203/10 Aspects of automatic or semi-automatic exchanges related to the purpose or context of the telephonic communication
    • H04M 2203/1016 Telecontrol
    • H04M 2203/1025 Telecontrol of avatars

Definitions

  • FIG. 1 is a block diagram of an example of a network communications environment that includes virtual presence apparatus in a physical space, a remote client network node, and a virtual environment creator.
  • FIG. 2 shows a flow diagram of a method of communicating between a virtual area and a physical space.
  • FIG. 3 is a flow diagram of an example of a method performed by an example of virtual presence apparatus.
  • FIG. 4 is a block diagram of an example of virtual presence apparatus.
  • FIG. 5A is a block diagram of an example of virtual presence apparatus connected to a server network node.
  • FIG. 5B is a block diagram of an example of virtual presence apparatus connected to a server network node.
  • FIG. 5C is a block diagram of an example of virtual presence apparatus connected to a server network node.
  • FIG. 6 is a flow diagram of an example of a method of administering communications between a virtual area and a physical space.
  • FIG. 7 is a flow diagram of an example of a method of communicating between a virtual area and a physical space.
  • FIG. 8 is a diagrammatic view of an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in a physical space.
  • FIG. 9 is a diagrammatic view of an example of a physical space and an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in the physical space.
  • FIG. 10 is a diagrammatic view of an example of a physical space and an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in the physical space.
  • FIG. 11 is a diagrammatic view of an example of a physical space and an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in the physical space.
  • FIG. 12 is a diagrammatic view of an example of a visualization of a virtual area.
  • FIG. 13 is a flow diagram of an example of a method by which an example of a server network node manages communications between virtual area zones and multiple physical apparatus in respective real-world locations.
  • FIG. 14 is a diagrammatic view of a network communications environment that includes network resources connected to an example of a client network node that generates an example of a graphical user interface that includes a spatial visualization of the network resources.
  • a “communicant” is a person who communicates or otherwise interacts with other persons over one or more network connections, where the communication or interaction may or may not occur in the context of a virtual area.
  • a “user” is a communicant who is operating a particular network node that defines a particular perspective for descriptive purposes.
  • a “computer” is any machine, device, or apparatus that processes data according to computer-readable instructions that are stored on a computer-readable medium either temporarily or permanently.
  • a “computer operating system” is a software component of a computer system that manages and coordinates the performance of tasks and the sharing of computing and hardware resources.
  • a “software application” (also referred to as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and execute to perform one or more specific tasks.
  • a “data file” is a block of information that durably stores data for use by a software application.
  • a “computer-readable medium” refers to any tangible, non-transitory medium capable of storing information (e.g., instructions and data) that is readable by a machine (e.g., a computer).
  • Storage devices suitable for tangibly embodying such information include, but are not limited to, all forms of physical, non-transitory computer-readable memory, including, for example, semiconductor memory devices, such as random access memory (RAM), EPROM, EEPROM, and Flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magneto-optical disks, DVD-ROM/RAM, and CD-ROM/RAM.
  • a “window” is a visual area of a display that typically includes a user interface.
  • a window typically displays the output of a software process and typically enables a user to input commands or data for the software process.
  • a window that has a parent is called a “child window.”
  • a window that has no parent, or whose parent is the desktop window, is called a “top-level window.”
  • a “desktop” is a system-defined window that paints the background of a graphical user interface (GUI) and serves as the base for all windows displayed by all software processes.
  • a “data sink” (referred to herein simply as a “sink”) is any of a device (e.g., a computer), part of a device, or software that receives data.
  • a “data source” (referred to herein simply as a “source”) is any of a device (e.g., a computer), part of a device, or software that originates data.
  • a “network node” (also referred to simply as a “node”) is a junction or connection point in a communications network. Examples of network nodes include, but are not limited to, a terminal, a computer, and a network switch.
  • a “server” network node is a host computer on a network that responds to requests for information or service.
  • a “client network node” is a computer on a network that requests information or service from a server.
  • a “network connection” is a link between two communicating network nodes.
  • a “connection handle” is a pointer or identifier (e.g., a uniform resource identifier (URI)) that can be used to establish a network connection with a network resource.
  • a “network communication” can include any type of information (e.g., text, voice, audio, video, electronic mail message, data file, motion data stream, and data packet) that is transmitted or otherwise conveyed from one network node to another network node over a network connection.
  • Synchronous conferencing refers to communications in which communicants participate at the same time. Synchronous conferencing encompasses all types of networked collaboration technologies, including instant messaging (e.g., text chat), audio conferencing, video conferencing, application sharing, and file sharing technologies.
  • a “communicant interaction” is any type of direct or indirect action or influence between a communicant and another network entity, which may include for example another communicant, a virtual area, or a network service.
  • Examples of types of communicant interactions include communicants communicating with each other in realtime, a communicant entering a virtual area, and a communicant requesting access to a resource from a network service.
  • Presence refers to the ability and willingness of a networked entity (e.g., a communicant, service, or device) to communicate, where such willingness affects the ability to detect and obtain information about the state of the entity on a network and the ability to connect to the entity.
  • a “realtime data stream” is data that is structured and processed in a continuous flow and designed to be received with no delay or only imperceptible delay.
  • Realtime data streams include digital representations of voice, video, user movements, facial expressions and other physical phenomena, as well as data within the computing environment that may benefit from rapid transmission, rapid execution, or both rapid transmission and rapid execution, including for example, avatar movement instructions, text chat, realtime data feeds (e.g., sensor data, machine control instructions, transaction streams and stock quote information feeds), screen shares, and file transfers.
  • a “physical space” is a three-dimensional real-world environment in which a communicant can be located physically.
  • a “virtual area” (also referred to as an “area” or a “place”) is a representation of a computer-managed space or scene.
  • Virtual areas typically are one-dimensional, two-dimensional, or three-dimensional representations, although in some examples a virtual area may correspond to a single point.
  • a virtual area is designed to simulate a physical, real-world space. For example, using a traditional computer monitor, a virtual area may be visualized as a two-dimensional graphic of a three-dimensional computer-generated space. However, virtual areas do not require an associated visualization.
  • a virtual area typically refers to an instance of a virtual area schema, where the schema defines the structure and contents of a virtual area in terms of variables and the instance defines the structure and contents of a virtual area in terms of values that have been resolved from a particular context.
  • a “persistent virtual area” is a virtual area that persists even after all communicants have disconnected from the virtual area. The state of a persistent virtual area is preserved so that it can be restored the next time a communicant connects to the virtual area.
  • a “persistent association” between a virtual area and virtual presence apparatus is an association that persists even after all communicants and the virtual presence apparatus have disconnected from the virtual area.
  • a “virtual area application” (also referred to as a “virtual area specification”) is a description of a virtual area that is used in creating a virtual environment.
  • a virtual area application typically includes definitions of geometry, physics, and realtime switching rules that are associated with one or more zones of the virtual area.
  • a “virtual area enabled communications application” is a client communications application that integrates realtime communications (e.g., synchronous conferencing functionalities, such as audio, video, chat, and realtime other data communications) with a virtual area.
  • a “virtual environment” is a representation of a computer-managed space that includes at least one virtual area and supports realtime communications between communicants.
  • a “position” in a virtual area refers to a location of a point or an area or a volume in the virtual area.
  • a point typically is represented by a single set of one-dimensional, two-dimensional, or three-dimensional coordinates (e.g., x, y, z) that define a spot in the virtual area.
  • An area typically is represented by the three-dimensional coordinates of three or more coplanar vertices that define a boundary of a closed two-dimensional shape in the virtual area.
  • a volume typically is represented by the three-dimensional coordinates of four or more non-coplanar vertices that define a closed boundary of a three-dimensional shape in the virtual area.
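  • For illustration only (not part of the patent disclosure), the three kinds of positions defined above could be modeled as follows; the class and field names are hypothetical:

        from dataclasses import dataclass
        from typing import List, Tuple

        Coord = Tuple[float, float, float]  # (x, y, z) coordinates

        @dataclass
        class Point:
            # a single coordinate set defining a spot in the virtual area
            position: Coord

        @dataclass
        class Area:
            # three or more coplanar vertices bounding a closed 2-D shape
            vertices: List[Coord]

        @dataclass
        class Volume:
            # four or more non-coplanar vertices bounding a closed 3-D shape
            vertices: List[Coord]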
  • VoIP: Voice over Internet Protocol
  • IP: Internet Protocol
  • the term “includes” means “includes but is not limited to”; the term “including” means “including but not limited to.”
  • the term “based on” means based at least in part on.
  • Examples that are described herein provide systems and methods of communicating between a virtual area and a physical space. These examples bridge the virtual area into the physical space and bridge the physical space into the virtual area through virtual presence apparatus (VPA) located in the physical space.
  • Examples of the virtual presence apparatus transduce human perceptible stimulus (e.g., audio, visual, mechanical, and other sensory stimulus) between the virtual area and the physical space such that communicant interactions in the virtual area are seamlessly integrated into the physical space and vice versa.
  • FIG. 1 shows an example of a network communications environment 10 that includes a virtual presence apparatus 12 in a physical space 14 , a remote client network node 16 , and a virtual environment creator 18 that are interconnected by a network (not shown) that supports the transmission of a wide variety of different media types (e.g., text, voice, audio, video, and other data) between network nodes.
  • the network connections between network nodes may be arranged in a variety of different stream handling topologies, including a peer-to-peer architecture, a server-mediated architecture, and hybrid architectures that combine aspects of peer-to-peer and server-mediated architectures. Exemplary topologies of these types are described in U.S. Pat. Nos. 7,769,806 and 7,844,724.
  • the client network node 16 includes input/output (I/O) hardware, a processor, and a computer-readable memory that stores an instance 20 of at least one virtual area enabled communications application that is executable by the processor.
  • the communications application 20 typically provides a graphical interface and communications functions for communicating with the virtual presence apparatus 12 , the virtual environment creator 18 , and other client network nodes in connection with one or more virtual areas. Examples of the communications applications are described in U.S. application Ser. No. 12/418,243, filed Apr. 3, 2009, U.S. application Ser. No. 12/630,973, filed Dec. 4, 2009, U.S. application Ser. No. 12/354,709, filed Jan. 15, 2009, and U.S. application Ser. No. 12/509,658, filed Jul. 27, 2009.
  • the client network node 16 has a respective set of one or more sources and a respective set of one or more sinks.
  • exemplary sources include an audio source (e.g., an audio capture device, such as a microphone), a video source (e.g., a video capture device, such as a video camera), a chat source (e.g., a text capture device, such as a keyboard), a motion data source (e.g., a pointing device, such as a computer mouse), and other sources (e.g., file sharing source or a source of a customized real-time data stream).
  • Exemplary sinks include an audio sink (e.g., an audio rendering device, such as a speaker or headphones), a video sink (e.g., a video rendering device, such as a display monitor), a chat sink (e.g., a text rendering device, such as a display monitor), a motion data sink (e.g., a movement rendering device, such as a display monitor), and other sinks (e.g., a printer for printing shared files, a device for rendering real-time data streams different from those already described, or software that processes real-time streams for analysis or customized display).
  • the client network node 16 also typically includes administrative policies, user preferences (including preferences regarding the exportation of the user's presence and the connection of the user to the virtual environment creator 18 and other communicants), and other settings that define a local configuration that influences the administration of realtime connections with the virtual presence apparatus 12 , the virtual environment creator 18 , and other network nodes.
  • the virtual presence apparatus 12 is located in the physical space 14 .
  • the virtual presence apparatus 12 is positioned on a table 22 in a real-world conference room containing five communicants 24 , 26 , 28 , 30 , 32 .
  • the four communicants 24 - 30 are seated around the table 22 , and the fifth communicant 32 is standing beside a real-world view screen 34 .
  • Three of the seated communicants 24 - 28 are operating respective virtual area enabled communications applications on their client network nodes 36 , 38 , 40 (e.g., mobile computers, such as laptop computers, tablet computers, and mobile phones) through which they are connected to the remote client network node 16 ; these three communicants 24 - 28 and the other two communicants 30 , 32 are connected to the remote client network node 16 through the virtual presence apparatus 12 .
  • the virtual presence apparatus 12 typically includes software and hardware resources that enable the virtual presence apparatus 12 to connect to the virtual environment creator 18 and the remote client network node 16 , either directly (e.g., peer-to-peer) or through a hosted network connection.
  • the virtual presence apparatus 12 or a network node hosting the virtual presence apparatus includes a complete or modified version of the communications application 20 , which provides functions for communicating with the virtual environment creator 18 and establishing network connections and communicating realtime data streams with the client network nodes.
  • the virtual presence apparatus 12 can be registered in association with and/or logged into the one or more virtual areas.
  • the virtual presence apparatus 12 When logged into a virtual area, the virtual presence apparatus 12 transduces human perceptible stimulus (e.g., audio, visual, mechanical, and other sensory stimulus) between the client network nodes of communicants who are present in the virtual area and the physical space 14 . In this way, the virtual presence apparatus 12 bridges a physical experience of the physical space 14 to communicants in the one or more virtual areas (i.e., communicants who are present in the virtual areas) and bridges communicant interactions in the one or more virtual areas to communicants in the physical space 14 .
  • the virtual environment creator 18 includes at least one server network node 42 that provides a network infrastructure service environment 44 that manages sessions of the remote client network node 16 and the virtual presence apparatus 12 in one or more virtual areas 46 in accordance with respective virtual area applications 48 .
  • Each of the virtual area applications 48 is hosted by a respective one of the virtual areas 46 and includes a description of the respective virtual area 46 .
  • Communicants operating respective client network nodes connect to the virtual area applications 48 through virtual area enabled communications applications.
  • a virtual area typically includes one or more zones.
  • a zone may be a rendered spatial extent, a set of rules applied to a spatial extent, or both. Zones may be arranged hierarchically in a virtual area, with an outermost zone (referred to herein as the “Global Governance Zone”) enclosing all other zones in the virtual area.
  • Within the Global Governance Zone, there can be location zones (e.g., rooms of a virtual area) or smaller governance zones that enclose a group of location zones and provide regions of governance on the map.
  • a zone definition typically also includes one or more channel definitions that describe how to create respective channels in the zone and specify the information about the channel that is published to a client network node that becomes present in the zone.
  • a channel is always uniquely defined point-to-point and is unique to a virtual area application and a session between a client network node and the virtual area platform.
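  • As a non-authoritative sketch of the zone and channel definitions described above (all names here are illustrative assumptions, not the platform's actual schema):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ChannelDefinition:
            """How to create a channel in a zone, and what information about
            the channel is published to a node that becomes present there."""
            content_type: str            # e.g., "audio", "video", "chat"
            published_info: dict = field(default_factory=dict)

        @dataclass
        class Zone:
            """A rendered spatial extent, a set of rules applied to a spatial
            extent, or both; zones nest under the Global Governance Zone."""
            name: str
            channels: List[ChannelDefinition] = field(default_factory=list)
            subzones: List["Zone"] = field(default_factory=list)

        # The outermost zone encloses all other zones in the virtual area.
        area = Zone(name="Global Governance Zone", subzones=[
            Zone(name="Room 1", channels=[ChannelDefinition("audio")]),
        ])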
  • Examples of the types of rules that may be associated with a zone include switching rules, governance rules, and permission rules.
  • Switching rules govern realtime stream connections between network nodes that are linked to the virtual area (e.g., network nodes that are associated with objects, such as avatars, in the virtual area).
  • the switching rules typically include a description of conditions for connecting sources and sinks of realtime data streams in terms of positions in the virtual area.
  • Each switching rule typically includes attributes that define the realtime data stream type to which the rule applies and the location or locations in the virtual area where the rule applies.
  • each of the rules optionally may include one or more attributes that specify a required role of the source, a required role of the sink, a priority level of the stream, and a requested data routing topology.
  • In the absence of explicit switching rules for a particular part of the virtual area, one or more implicit or default switching rules may apply to that part of the virtual area.
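  • The switching-rule attributes and the fallback to implicit or default rules described above might be modeled as in the following sketch; the field names and the matching function are hypothetical:

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class SwitchingRule:
            stream_type: str                   # realtime data stream type it governs
            zones: List[str]                   # where in the virtual area it applies
            source_role: Optional[str] = None  # required role of the source
            sink_role: Optional[str] = None    # required role of the sink
            priority: int = 0                  # priority level of the stream
            topology: Optional[str] = None     # requested data routing topology

        def rules_for(rules, stream_type, zone, default_rules):
            """Explicit rules matching the stream type and zone; otherwise the
            implicit/default rules for that part of the virtual area."""
            matched = [r for r in rules
                       if r.stream_type == stream_type and zone in r.zones]
            return matched or default_rules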
  • governance rules control who has access to resources (e.g., the virtual area itself, regions within the virtual area, and objects within the virtual area), who has access to data (e.g., data streams and other content) that is associated with the virtual area, what the scope of that access to the data associated with the virtual area is (e.g., what a user can do with the data), and what the follow-on consequences of accessing that data are (e.g., record keeping, such as audit logs, and payment requirements).
  • an entire virtual area or a zone of the virtual area is associated with a “governance mesh” that enables a software application developer to associate governance rules with a virtual area or a zone of a virtual area. This avoids the need for the creation of individual permissions for every file in a virtual area and avoids the need to deal with the complexity that potentially could arise when there is a need to treat the same document differently depending on the context.
  • a permission rule defines a respective capability requirement (e.g., for a respective action, behavior, or state) in terms of one or more capabilities, attributes, and settings, which may be persistent or transient. Examples of capabilities systems for administering permission rules are described in U.S. Provisional Patent Application No. 61/535,910, filed Sep. 16, 2011.
  • a virtual area is defined by a specification that includes a description of geometric elements of the virtual area and one or more rules, including switching rules and governance rules. Examples of virtual area specifications are described in U.S. application Ser. No. 12/418,243, filed Apr. 3, 2009, U.S. application Ser. No. 12/818,517, filed Jun. 18, 2010, U.S. patent application Ser. No. 12/855,210, filed Aug. 12, 2010, and U.S. Provisional Application No. 61/563,088, filed Nov. 23, 2011.
  • the network infrastructure service environment 44 typically includes one or more network infrastructure services that cooperate with the virtual area enabled communications application 20 to establish and administer network connections between the virtual presence apparatus 12 , the remote client network node 16 , and other network nodes.
  • network infrastructure services that are included in an exemplary embodiment of the network infrastructure service environment 44 are an account service, a security service, an area service, a rendezvous service, and an interaction service.
  • the structure, operation, and components of such an embodiment of the network infrastructure service environment 44 are described in U.S. patent application Ser. No. 12/825,512, filed Jun. 29, 2010.
  • the virtual area enabled communications applications 20 , the area applications 48 , and the network infrastructure service environment 44 together provide a platform (referred to herein as “the platform”) that administers the realtime connections with network nodes in a virtual area subject to a set of constraints that control access to the virtual area instance.
  • the platform tracks communicants' realtime availabilities and activities across the different communication contexts that are defined by the area applications 48 .
  • This information is presented by the virtual area enabled communications applications to the communicants in the form of realtime visualizations that enable the communicants to make more informed network interaction decisions (e.g., when to interact with a contact) and encourages the communicants to initiate interactions with other communicants and to join contexts (e.g., an ongoing conversation between communicants) of which the communicants otherwise would not have been aware.
  • the realtime visualization includes visual cues as to the presence and activities of the communicants in the contexts of the area applications 48 .
  • the presentation of these visual cues typically depends on one or more of governance rules associated with the virtual areas 46 , administrative policies, and user preferences (including preferences regarding the exportation of the user's presence and the connection of the user to areas and other communicants), which may define tiered relationship based predicates that control access to presence information and/or network resources on a zone-by-zone basis.
  • the server network node 42 remotely manages client communication sessions with each other and with the virtual presence apparatus 12 , and remotely configures audio and graphic rendering engines on the client network nodes, as well as switching of data streams by sending instructions (also referred to as definitions) from the remotely hosted area applications 48 to the client network nodes in accordance with the stream transport protocol described in U.S. application Ser. No. 12/825,512, filed Jun. 29, 2010. Data is shared between the client network node 16 and other network nodes as definition records over transport protocol sockets.
  • the communications application 20 on the client network node 16 receives configuration instructions from the server network node 42 through definition records that are received over a server session between the client network node 16 and the server network node 42 .
  • the server network node 42 sends to each of the client network nodes provisioning messages that configure the client network nodes to interconnect respective data streams between active ones of their complementary sources and sinks over respective peer-to-peer (P2P) sessions in accordance with switching rules specified in the area applications 48 and the locations where the communicants and the virtual presence apparatus are present in the virtual area 46 .
  • the client network node 16 sends content to and receives content from other network nodes through definition records that are transmitted on content-specific channels on respective sessions with the other network nodes. Data is shared in accordance with a publish/subscribe model. A stream transport service on the client network node 16 subscribes only to the data that are needed by the client network node.
  • the stream transport service negotiates a channel on a session that is established with another network node.
  • the channel is negotiated by well-known GUID for the particular area application 48 .
  • Definition records are transmitted only when a subscriber exists on the other end of a transport protocol socket.
  • Definition records that are received by the stream transport service are delivered to the subscribing ones of the client communications application processes on arrival.
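  • A minimal sketch of this publish/subscribe behavior (the class and method names are illustrative assumptions; the actual stream transport service is described in U.S. application Ser. No. 12/825,512):

        from collections import defaultdict

        class StreamTransportService:
            """Toy model: definition records are transmitted on a channel only
            when a subscriber exists at the other end of the socket, and are
            delivered to subscribing local processes on arrival."""

            def __init__(self):
                self.subscribers = defaultdict(list)  # channel GUID -> local callbacks

            def subscribe(self, channel_guid, on_record):
                # Negotiating a channel is modeled as registering interest in
                # the well-known GUID for the particular area application.
                self.subscribers[channel_guid].append(on_record)

            def publish(self, channel_guid, record, peers):
                for peer in peers:
                    if peer.subscribers.get(channel_guid):  # subscriber exists?
                        peer.deliver(channel_guid, record)

            def deliver(self, channel_guid, record):
                for on_record in self.subscribers[channel_guid]:
                    on_record(record)

        # Usage: node_b subscribes to a chat channel; records then flow to it.
        node_a, node_b = StreamTransportService(), StreamTransportService()
        node_b.subscribe("chat-channel-guid", lambda rec: print("received:", rec))
        node_a.publish("chat-channel-guid", {"text": "hello"}, peers=[node_b])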
  • the server network node 42 connects the virtual presence apparatus 12 to the virtual area 46 so that the virtual presence apparatus 12 can bridge a physical experience of the physical space 14 to communicants in the virtual area 46 and bridge a physical experience of communicant interactions in the virtual area 46 to communicants in the physical space 14 .
  • the communications application 20 operating on the remote client network node 16 presents a respective spatial visualization 50 of the virtual area 46 in accordance with data received from the network infrastructure service environment 44 .
  • the communications application 20 also provides a graphical interface for receiving user commands and providing a spatial interface that enhances the realtime communications between the communicants.
  • the spatial visualization 50 includes respective graphical representations 52 , 54 , 56 , 58 (referred to herein as “avatars” or “sprites”) of the communicants who are present in the virtual area 46 in spatial relation to a graphical representation 59 of the virtual area 46 .
  • the sprites 52 , 54 , 56 represent the three communicants 24 , 26 , 28 (Beth, Fran, Art) who are seated in the physical space 14 and are operating the local client network nodes 36 , 38 , 40 , and the sprite 58 represents the communicant (Ed) who is operating the remote client network node 16 .
  • the spatial visualization 50 may include other objects (also referred to as “props”). Examples of such objects include a view screen object 60 for interfacing with application sharing functions of the communications application 20 (as described in, e.g., U.S. application Ser. No. 12/418,270, filed Apr. 3, 2009).
  • the spatial visualization 50 typically is presented in a respective window 66 that is generated by the communications application 20 on a “desktop” or other system-defined base window on the display hardware 68 of the remote client network node 16 .
  • the activities of the communicants in the virtual area 46 may be inferred from the activities on the various communication channels over which the respective client network nodes are configured to communicate.
  • the activities on the communication channels are represented in the graphical interface by visual cues that are depicted in association with the graphical representations 52 - 58 of the communicants.
  • the “on” or “off” state of a communicant's local speaker channel is depicted by the presence or absence of a headphones graphic 61 on the communicant's sprite.
  • When the communicant's speakers are on, the headphones graphic 61 is present (see sprites Beth and Fran) and, when the communicant's speakers are off, the headphones graphic 61 is absent.
  • the “on” or “off” state of the communicant's microphone is depicted by the presence or absence of a microphone graphic 63 on the communicant's sprite. When the microphone is on, the microphone graphic 63 is present (see sprite Fran); and, when the microphone is off, the microphone graphic 63 is absent.
  • the headphones graphic 61 and the microphone graphic 63 provide visual cues of the activity states of the communicant's sound playback and microphone devices.
  • the “on” or “off” state of the communicant's microphone is depicted by the presence or absence of a microphone graphic on the communicant's graphic representation and a series of concentric circles 65 that dynamically radiate away from the communicant's graphic representation in a series of expanding waves.
  • When the microphone is on, the microphone graphic 63 and the radiating concentric circles 65 are present; when the microphone is off, the microphone graphic 63 and the radiating concentric circles 65 are absent.
  • the current activity on a communicant's microphone channel is indicated by a dynamic visualization that lightens and darkens the communicant's avatar in realtime to reflect the presence or absence of audio data on the microphone channel.
  • communicants can determine when another communicant is speaking by the “blinking” of the coloration of that communicant's avatar.
  • the activity on a communicant's text chat channel is depicted by the presence or absence of the hand graphic 67 adjacent the communicant's sprite (see sprite Ed).
  • When a communicant is transmitting text chat data, the hand graphic 67 is present, and when a communicant is not transmitting text chat data, the hand graphic 67 is not present.
  • text chat data is transmitted only when keyboard keys are depressed, in which case the visualization of the communicant's text channel appears as a flashing on and off of the hand graphic 67 .
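  • The mapping from channel states to the visual cues described above can be summarized in the following illustrative sketch (the function and argument names are assumptions):

        def visual_cues(speakers_on, microphone_on, audio_present, chat_active):
            """Return the cue graphics to draw for one communicant's sprite."""
            cues = []
            if speakers_on:
                cues.append("headphones graphic 61")
            if microphone_on:
                cues.append("microphone graphic 63")
                if audio_present:
                    # audio data on the microphone channel: radiating circles
                    # and realtime lightening/darkening of the avatar
                    cues.append("radiating concentric circles 65")
                    cues.append("avatar blink (lighten/darken)")
            if chat_active:
                cues.append("hand graphic 67")
            return cues

        # Example: Fran has speakers and microphone on and is speaking.
        print(visual_cues(True, True, True, False))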
  • the view screen prop 60 is associated with application sharing functionality of the platform that enables communicants to share applications operating on their respective client network nodes.
  • the application sharing functionality is invoked by activating a view screen (e.g., by single-clicking the view screen object with an input device).
  • the platform provides visual cues that indicate whether or not a communicant is sharing an application over an application sharing channel.
  • In some examples, when a communicant invokes the application sharing function, the communicant's sprite automatically is moved to a position in the graphical representation of the virtual area that is adjacent the view screen prop.
  • the position of a communicant's sprite adjacent the view screen prop indicates that the communicant currently is sharing or is about to share an application with the other communicants in the virtual area.
  • Other communicants in the virtual area subscribe to the shared application data by selecting the view screen prop in their respective views of the spatial visualization 50 .
  • the avatar of each communicant who is viewing a shared application is depicted with a pair of “eyes” to indicate that the represented communicants are viewing the content being shared in connection with the view screen props.
  • the graphical depiction of the view screen prop is changed depending on whether or not an active application sharing session is occurring. For example, the depicted color of the view screen may change from a brighter color during an active application sharing session to a darker color when there is no application sharing taking place.
  • FIG. 2 shows an example of a method by which the virtual environment creator 18 and the virtual presence apparatus 12 bridge the virtual area into the physical space and bridge the physical space into the virtual area.
  • the virtual presence apparatus 12 transforms human perceptible physical stimuli in a physical space into physical space data streams of different respective data stream types ( FIG. 2 , block 101 ).
  • the server network node 42 publishes respective ones of the physical space data streams in a zone of a virtual area in a virtual communications environment ( FIG. 2 , block 103 ).
  • the zone defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone.
  • the server network node 42 establishes a respective presence in the zone for each of multiple communicants associated with respective client network nodes.
  • Each of one or more of the client network nodes publishes one or more respective client data streams ( FIG. 2 , block 105 ).
  • the server network node 42 provisions data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the client network nodes, transmitting respective ones of the published client data streams to respective ones of the client network nodes, and transmitting respective ones of the published client data streams to the physical space ( FIG. 2 , block 107 ).
  • the virtual presence apparatus 12 transforms the published client data streams transmitted to the physical space into human perceptible physical stimuli in the physical space ( FIG. 2 , block 109 ).
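  • One way to picture the provisioning step of block 107, as a sketch under assumed data shapes (the record fields are hypothetical):

        def provision_connections(published_streams, subscriptions):
            """Pair each published stream (from the physical space or from a
            client node) with the network nodes subscribed to that content
            type in the same zone (FIG. 2, block 107)."""
            connections = []
            for stream in published_streams:
                key = (stream["zone"], stream["type"])
                for node in subscriptions.get(key, []):
                    if node != stream["publisher"]:  # no loopback connection
                        connections.append((stream["publisher"], node, stream["type"]))
            return connections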
  • FIG. 3 shows an example of a process that is implemented by an example of the virtual presence apparatus 12 .
  • the virtual presence apparatus 12 transmits a globally unique identifier of the virtual presence apparatus 12 for association with a virtual area by a network service administering the virtual area ( FIG. 3 , block 90 ).
  • the virtual presence apparatus 12 generates output data from human perceptible stimulus in a physical space ( FIG. 3 , block 92 ), and transmits the output data in connection with the virtual area ( FIG. 3 , block 94 ).
  • the virtual presence apparatus 12 receives input data associated with the virtual area ( FIG. 3 , block 96 ), and generates human perceptible stimulus in the physical space from the input data ( FIG. 3 , block 98 ).
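  • The FIG. 3 process might be sketched as the following loop; the transducer and interface objects and their methods are assumptions for illustration, not the patent's actual interfaces:

        class VirtualPresenceApparatusSketch:
            """Blocks 90-98 of FIG. 3 as a simple loop."""

            def __init__(self, guid, input_transducer, output_transducer, interface):
                self.guid = guid
                self.input_transducer = input_transducer    # e.g., microphone, camera
                self.output_transducer = output_transducer  # e.g., speaker, projector
                self.interface = interface                  # network communication interface

            def run(self):
                # Block 90: transmit the globally unique identifier for
                # association with a virtual area by the administering service.
                self.interface.send({"apparatus_id": self.guid})
                while True:  # bridge stimulus in both directions indefinitely
                    # Blocks 92-94: generate output data from physical stimulus
                    # and transmit it in connection with the virtual area.
                    output_data = self.input_transducer.capture()
                    if output_data is not None:
                        self.interface.send({"apparatus_id": self.guid,
                                             "data": output_data})
                    # Blocks 96-98: receive input data associated with the
                    # virtual area and render it as physical stimulus.
                    input_data = self.interface.receive(timeout=0.01)
                    if input_data is not None:
                        self.output_transducer.render(input_data)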
  • FIG. 4 shows an example 70 of the virtual presence apparatus 12 that includes an input transducer 72 , an output transducer 74 , a communication interface 76 , a computer-readable memory 78 that stores a globally unique identifier of the virtual presence apparatus 70 , and a processor 80 .
  • the communication interface 76 transmits an output signal 86 and receives an input signal 88 .
  • the virtual presence apparatus 70 may be implemented in a variety of different ways.
  • the virtual presence apparatus 70 is composed of multiple components (e.g., two or more of a speaker, a microphone, a light projector, and a camera) that are integrated into a unitary device.
  • the virtual presence apparatus 70 is composed of a central hub (e.g., a virtual area enabled network switch or router) that controls and configures one or more separate and distinct peripheral components (e.g., a speakerphone, a digital projector, a camera, and a remote-controlled laser pointer) that are connected to respective ports (e.g., Universal Serial Bus (USB) ports) of the hub.
  • USB Universal Serial Bus
  • Examples of the virtual presence apparatus 70 may have different industrial designs.
  • the virtual presence apparatus 70 has the form factor of a desktop appliance (e.g., a form factor similar to that of a computer, speakerphone, a digital projector, or a network hub), whereas other examples of the virtual presence apparatus 70 have robotic form factors (e.g., a remote-controlled electro-mechanical machine, which may or may not have a humanoid appearance).
  • a desktop appliance e.g., a form factor similar to that of a computer, speakerphone, a digital projector, or a network hub
  • robotic form factors e.g., a remote-controlled electro-mechanical machine, which may or may not have a humanoid appearance
  • the input transducer 72 generates output data from human perceptible stimulus 82 in the physical space 14 .
  • the input transducer 72 typically generates the output data from human perceptible stimulus that is broadcasted into the physical space.
  • the input transducer 72 may generate output data from one or more human perceptible stimuli, including for example audio, visual, mechanical, and other sensory stimuli.
  • the input transducer 72 includes one or more of an acoustic-to-electric transducer (e.g., a microphone, which may be a component of a telephony device, such as a mobile phone or a VoIP phone, or a headset), a light-to-electric transducer (e.g., a camera, such as a still image camera, a video camera, and a scanner that scans physical documents into scanned images), an electric-to-electric transducer (e.g., a touchscreen or other touch-sensitive sensor equipped with resistive, capacitive, surface acoustic wave, optical, or other touch-sensitive technologies), a mechanical-to-electric transducer (e.g., a tactile or other pressure- or force-sensitive transducer, a texture-sensitive transducer), and a chemical-to-electric transducer (e.g., an olfactory sensor that is capable of detecting one or more odorants).
  • the output transducer 74 generates human perceptible stimulus 84 in the physical space 14 .
  • the output transducer 74 typically broadcasts the human perceptible stimulus into the physical space.
  • the output transducer 74 may generate one or more human perceptible stimuli from input data, including for example audio, visual, mechanical, and other sensory stimuli.
  • the output transducer 74 includes one or more of an electric-to-acoustic transducer (e.g., a speaker, which may be a component of a telephony device, such as a mobile phone or a VoIP phone, or a headset), an electric-to-light transducer (e.g., an image projector such as a digital projector, a touchscreen display, a light beam projector such as a laser pointer, or a three-dimensional hologram generator), an electric-to-mechanical transducer (e.g., a haptic transducer, an electric motor that moves mechanical components, such as light sources and robot tools, and other components in the physical space, and a printer that outputs printed documents or three-dimensional objects), and an electric-to-chemical transducer (e.g., an electric odorant delivery system).
  • an electric-to-acoustic transducer e.g., a speaker, which may be a component of a telephony device, such as a mobile
  • the virtual presence apparatus 70 bridges communicant activity in the physical space 14 into the virtual area 46 and bridges communicant activity in the virtual area into the physical space 14 .
  • the virtual presence apparatus 70 typically encodes output data generated by the input transducer 72 from communicant activity in the physical space 14 into the output signal 86 that is sent to the remote network node 16 connected to the virtual area, and decodes the input signal 88 , which is received from the remote network node 16 and relates to communicant activity in the virtual area, into input data that is sent to the output transducer 74 .
  • the virtual presence apparatus 12 typically is registered with the server network node 42 before the virtual presence apparatus 12 can be logged into a server session with the server network node 42 .
  • the virtual presence apparatus 12 includes hardware and software resources that enable it to register directly with the server network node 42 .
  • FIG. 5A shows an example of a network connection between the server network node 42 and an example 85 of the virtual presence apparatus 12 that can register directly with the server network node 42 .
  • In other examples, a host computer registers the virtual presence apparatus 12 with the server network node 42 .
  • FIG. 5B shows an example of a network connection between the server network node 42 and an example 87 of the virtual presence apparatus 12 that is hosted by a client network node 89 in the physical space 14 .
  • the client network node 89 submits to the server network node 42 registration information and login requests on behalf of both the virtual presence apparatus 87 and the communicant who uses the client network node 89 to access the virtual area 46 .
  • FIG. 5C shows an example of a network connection between the server network node 42 and an example 93 of the virtual presence apparatus 12 that includes one or more integral components of a client network node 91 in the physical space 14 .
  • the virtual presence apparatus 93 typically includes one or more hardware and software resources of the client network node 91 .
  • the virtual presence apparatus 93 includes software that resides in the memory of the client network node 91 and is executed by the processor of the client network node 91 to leverage hardware resources of the client network node 91 in the process of integrating communicant interactions in the virtual area into the physical space.
  • hardware resources of the client network node 91 are partitioned (e.g., by a hypervisor or virtual machine monitor that reserves a respective set of client system resources for each partition or virtual machine) into a set of hardware resources that are used by the virtual area enabled communications application 20 and a separate set of hardware resources that constitute elements of the virtual presence apparatus 93 .
  • For example, a peripheral headset may be reserved for use by the virtual area enabled communications application 20 , while separate microphone and speaker hardware may be reserved for use by the virtual presence apparatus 93 .
  • In some examples, certain hardware resources of the client network node 91 (e.g., a camera, a hard drive memory, or an optical disk drive) are associated with respective objects in the virtual area 46 , allowing those resources to be shared by other communicants in the virtual area 46 .
  • the virtual presence apparatus 12 transmits (either directly, or indirectly through a network node hosting the virtual presence apparatus 12 ) registration data through its communication interface to the server network node 42 .
  • the registration data typically includes the globally unique identifier of the virtual presence apparatus 12 and configuration data.
  • the configuration data may include, for example, a device type identifier, an indication whether the virtual presence apparatus 12 should be associated with an existing virtual area or a new virtual area, one or more conditions on the availability of the associated virtual area (e.g., the associated virtual area is accessible to communicants conditioned on the virtual presence apparatus 12 being present in or logged into the virtual area), a specification of the source and sink capabilities of the virtual presence apparatus 12 , and a specification of a graphical representation of the virtual presence apparatus 12 .
  • the server network node 42 Based on the registration data, the server network node 42 generates one or more database records that store the registration information, including the identifier of the virtual presence apparatus 12 and an identifier of the new or existing virtual area.
  • the one or more database records create a persistent association between the virtual presence apparatus 12 and the virtual area.
  • the virtual presence apparatus identifier typically is registered with the server network node 42 independently of any communicant identifier.
  • the server network node 42 determines the source and sink capabilities of the virtual presence apparatus, either directly (e.g., from the configuration data) or indirectly (e.g., by using the device type identifier for the virtual presence apparatus 12 as an index into a device capabilities table).
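  • A server-side sketch of this registration flow, assuming illustrative field names for the registration data and a simple list-backed store (none of these names come from the patent):

        import uuid

        def register_apparatus(db, registration_data, device_capabilities):
            """Create a persistent association between the apparatus and a new
            or existing virtual area, independent of any communicant identifier."""
            if registration_data["virtual_area"] == "new":
                area_id = str(uuid.uuid4())  # record for a new virtual area
            else:
                area_id = registration_data["virtual_area"]
            # Capabilities come directly from the configuration data or, if
            # absent, indirectly via the device type identifier as an index
            # into a device capabilities table.
            capabilities = (registration_data.get("capabilities")
                            or device_capabilities[registration_data["device_type"]])
            db.append({"apparatus": registration_data["apparatus_guid"],
                       "area": area_id,
                       "capabilities": capabilities})
            return area_id

        # Illustrative payload (field names are assumptions, not the patent's format):
        example_registration = {
            "apparatus_guid": "vpa-12",
            "device_type": "conference-room-hub",
            "virtual_area": "new",
            "capabilities": {"sources": ["audio", "video"],
                             "sinks": ["audio", "image"]},
        }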
  • the virtual presence apparatus 12 is associated with a virtual area independently of any particular communicant such that it is available as a resource for any communicant who is present in the virtual area. In this way, the virtual presence apparatus functions as a prop or a fixture of the associated virtual area, which is tied to the physical location of the virtual presence apparatus. In some examples, the association between the virtual presence apparatus and the virtual area is such that the virtual area is inaccessible until after the virtual presence apparatus has been logged into the network infrastructure service environment 44 .
  • communicants cannot establish a presence in the associated virtual area (and, in some cases, may not even be presented with an option for entering the virtual area such that the virtual area does not appear to exist) until after the virtual presence apparatus has been connected to the virtual area by the network infrastructure service environment 44 .
  • These examples allow communicants to establish a persistent association between a virtual area and a particular physical space by leaving the virtual presence apparatus in the same physical space, thereby leveraging the persistent spatial association with the real-world location of the physical space to further strengthen the bridging between the virtual area and the physical space.
  • the virtual presence apparatus 12 can be logged into the network infrastructure service environment 44 .
  • the virtual presence apparatus 12 can either log itself into the network infrastructure service environment 44 automatically each time it is turned on or it can be logged into the network infrastructure service environment 44 by a host computer.
  • the server network node 42 sends provisioning instructions for establishing respective sessions between the virtual presence apparatus 12 and the client network nodes of the communicants who are present in the virtual area and for updating the appearance of the virtual area to include a graphical representation of the virtual presence apparatus 12 in the graphical interfaces displayed on the client network nodes. If the associated virtual area has not yet been instantiated, the server network node 42 instantiates the associated virtual area so that communicants operating respective client network nodes can access the virtual area.
  • the provisioning instructions sent by the server network node 42 are used to establish communication sessions between the client network nodes and the virtual presence apparatus.
  • data is shared between the client network nodes and the virtual presence apparatus 12 as definition records over transport protocol sockets.
  • the client network nodes and the virtual presence apparatus 12 receive content from each other through definition records that are received on content-specific channels on respective peer-to-peer sessions.
  • Data is shared in accordance with a publish/subscribe model.
  • a stream transport service on each of the client network nodes and the virtual presence apparatus 12 subscribes only to the data that are needed. To subscribe, the stream transport service negotiates a channel on a session that is established with another network node. The channel is negotiated by a well-known GUID for the particular area application 48 .
  • Definition records are transmitted only when a subscriber exists on the other end of a transport protocol socket. Definition records that are received by the stream transport service are delivered to the subscribing ones of the local communication processes on arrival. Examples of the structure and operation of the stream transport service and the data sharing communication sessions are described in U.S. patent application Ser. No. 12/825,512, filed Jun. 29, 2010.
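  • A minimal sketch of this publish/subscribe pattern, with assumed names and record shapes (the actual stream transport service is described in the application cited above):

      # Illustrative channel negotiation and definition record delivery.
      class Session:
          def __init__(self):
              self.subscribers = {}  # channel GUID -> handler

          def subscribe(self, channel_guid, on_record):
              # Negotiate a channel on the session by its well-known GUID.
              self.subscribers[channel_guid] = on_record

          def publish(self, channel_guid, definition_record):
              handler = self.subscribers.get(channel_guid)
              if handler is not None:         # transmit only when subscribed
                  handler(definition_record)  # delivered on arrival

      session = Session()
      session.subscribe("area-app-guid", lambda rec: print("received:", rec))
      session.publish("area-app-guid", {"channel": "audio", "payload": b"..."})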
  • the virtual presence apparatus 12 transmits the output data corresponding to the human perceptible stimulus in the physical space to the client network nodes in connection with the virtual area.
  • the virtual presence apparatus 12 typically processes the output data and configures its communication interface to incorporate the output data into the output signal that is sent to a client network node.
  • the output signal includes at least one of the globally unique identifier of the virtual presence apparatus, an identifier of the virtual area, and optionally an identifier of a zone of the virtual area.
  • the output signal typically is free of any communicant identifier (i.e., an identifier that identifies a particular communicant).
  • the virtual area serves as a termination point for one or more data streams that represent physical stimuli in a physical space occupied by the virtual presence apparatus 12 , where the data streams are published by the virtual presence apparatus in the virtual area/zone, communicants who are present in the virtual area/zone respectively are able to subscribe to one or more of the data streams, and the server network node 42 provisions data stream connections for the data streams that are subscribed to by respective ones of the communicants who are present in the virtual area/zone.
  • the virtual presence apparatus 12 typically determines the input data from the input signal that is received through its communication interface from a respective client network node that is connected to the virtual area.
  • the input signal typically includes at least one of a globally unique identifier of the respective client network node and an identifier of the virtual area.
  • the virtual presence apparatus 12 typically derives input data from the input signal and passes the input data to an output transducer, which generates human perceptible stimulus in the physical space.
  • FIG. 6 shows an example of a method that is implemented by an example of the server network node 42 for administering communications between a virtual area and a physical space.
  • the server network node 42 creates a persistent association between virtual presence apparatus in a physical space and a virtual area ( FIG. 6 , block 100 ).
  • the apparatus has an apparatus source of a respective data stream content type and an apparatus sink of a respective data stream content type.
  • the server network node 42 establishes a respective presence in the virtual area for a communicant operating a client network node connected to the virtual area ( FIG. 6 , block 102 ).
  • the client network node has a client sink that is complementary to the apparatus source and a client source that is complementary to the apparatus sink.
  • the server network node 42 administers a respective connection between each active pair of complementary sources and sinks of the client network node and the apparatus in association with the virtual area, where each connection supports transmission of the respective data stream content type between the apparatus and the client network node ( FIG. 6 , block 104 ).
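  • The following Python fragment is a schematic rendering of the three blocks of FIG. 6 ; the data structures stand in for server state and are assumptions for illustration only:

      associations = {}   # apparatus_id -> area_id (block 100)
      presences = {}      # area_id -> set of communicant ids (block 102)
      connections = []    # provisioned complementary connections (block 104)

      def administer(apparatus, client, communicant_id, area_id):
          # Block 100: persistent, communicant-independent association.
          associations[apparatus["id"]] = area_id
          # Block 102: establish the communicant's presence in the area.
          presences.setdefault(area_id, set()).add(communicant_id)
          # Block 104: one connection per active complementary source/sink
          # pair, each carrying its respective data stream content type.
          for kind in set(apparatus["sources"]) & set(client["sinks"]):
              connections.append((apparatus["id"], client["id"], kind))
          for kind in set(client["sources"]) & set(apparatus["sinks"]):
              connections.append((client["id"], apparatus["id"], kind))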
  • the association between the virtual presence apparatus 12 and the virtual area is independent of any particular communicant.
  • the server network node 42 typically receives a globally unique identifier of the virtual presence apparatus 12 , and associates the identifier with the virtual area.
  • the virtual area includes multiple zones each of which supports establishment of a respective presence for one or more communicants and defines a respective persistent context for realtime communications between the client network nodes of communicants who are present in the zone.
  • the server network node 42 creates a persistent association between the physical presence apparatus and a respective one of the zones of the virtual area.
  • the server network node 42 establishes a respective presence in the zone for the virtual presence apparatus. In some examples, the server network node 42 establishes the presence for the virtual presence apparatus in response to receipt of a login request identifying the virtual presence apparatus.
  • the login request for the virtual presence apparatus may be generated by the virtual presence apparatus itself or by another network node (e.g., a central hub or a client network node) acting on its behalf.
  • the server network node 42 establishes a presence for both the virtual presence apparatus and a communicant in response to respective login requests that are sent by the same client network node.
  • in response to receipt of a login request that includes the identifier of the virtual presence apparatus, the server network node 42 instantiates the virtual area to enable the virtual area to be communicant accessible.
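  • A sketch of how this login handling might look, assuming a registry that maps apparatus identifiers to area identifiers as in the earlier sketch (all names are illustrative):

      instantiated_areas = set()
      apparatus_presences = {}   # apparatus_id -> area_id

      def handle_apparatus_login(login_request, registry):
          # The request identifies the apparatus, not any communicant.
          apparatus_id = login_request["apparatus_id"]
          area_id = registry[apparatus_id]
          if area_id not in instantiated_areas:
              instantiated_areas.add(area_id)  # area becomes accessible
          apparatus_presences[apparatus_id] = area_id
          return area_id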
  • the server network node 42 typically associates the virtual presence apparatus 12 with an object in the virtual area.
  • the server network node 42 typically creates an object that represents the virtual presence apparatus 12 in the virtual area.
  • the object typically is associated with an interface for interacting with the virtual presence apparatus 12 .
  • the server network node 42 associates the object with a graphical representation of the virtual presence apparatus 12 .
  • the graphical representation of the virtual presence apparatus 12 includes a brand that is associated with the virtual presence apparatus.
  • the brand may include a name, term, design, symbol, or any other feature that identifies a source (e.g., manufacturer or seller) of the virtual presence apparatus.
  • the server network node 42 transmits to each of the communicants who are present in the zone a respective specification of a visualization of graphical representations of the object and the avatars in the virtual area.
  • the client network nodes use the specifications to display respective graphical representations of the virtual presence apparatus 12 and the communicants in spatial relation to a graphical representation of the virtual area.
  • the object representing the virtual presence apparatus is associated with a particular communicant and the visualization of the virtual area shows an association between a graphical representation of the object and the particular communicant.
  • the visualization of the virtual area shows the graphical representation of the object associated with a graphical representation of the avatar representing the particular communicant.
  • the virtual presence apparatus may be personal gear (e.g., a human interface device, such as a headset, or other personal device) that is carried or worn by the particular communicant, and the visualization may show a graphical representation of the gear as a decoration or embellishment on the graphical representation of the particular communicant's avatar (e.g., showing a graphical representation of a headset on the communicant's avatar).
  • the visualization of the virtual area shows the graphical representation of the object representing the virtual presence apparatus associated with a location in the virtual area that is assigned to the particular communicant.
  • the virtual presence apparatus may be personal gear (e.g., a personal printer, scanner, telephony device, or a memory resource of a personal computer) that is assigned or belongs to the particular communicant, and the visualization may show a graphical representation of the gear in a room (e.g., an office or personal room) of the virtual area that is assigned to the particular communicant.
  • the server network node 42 may determine the style used to represent the personal gear in the visualization based on configuration information received from the particular communicant (e.g., an indication that the graphical representation of the gear should be associated with the communicant's avatar or the communicant's designated default zone, such as the communicant's home zone or office) or automatically based on a predefined mapping between personal gear category types and presentation styles (e.g., headsets are represented as graphical decorations on the respective communicants' avatars, whereas hard drives of personal computers are represented as icons in the respective communicants' designated default zones), as sketched below.
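  • A compact sketch of that style selection, with an assumed category-to-style table (entries and names are illustrative):

      GEAR_STYLE = {
          # personal gear category -> presentation style
          "headset": "avatar-decoration",
          "hard-drive": "default-zone-icon",
      }

      def presentation_style(gear_category, communicant_choice=None):
          # Explicit configuration from the communicant takes precedence;
          # otherwise fall back to the predefined mapping.
          return communicant_choice or GEAR_STYLE.get(gear_category,
                                                      "default-zone-icon")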
  • the server network node 42 transmits to the client network node a specification of visual cues for displaying indications of respective states of a source of the virtual presence apparatus 12 . Based on a determination that the source of the virtual presence apparatus is in an active state, the server network node 42 transmits to the client network node a specification of a first visual cue, and based on a determination that the source of the virtual presence apparatus is in an inactive state, the server network node 42 transmits to the client network node a specification of a second visual cue that is different from the first visual cue.
  • the specifications of the first and second visual cues are provided in respective definition records.
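  • For illustration, the two cue specifications could be emitted as definition records along these lines; the record shape and cue names are assumptions, the radiating-circles cue echoing the visualization described below in connection with FIG. 8 :

      def source_state_cue(source_is_active):
          # Each cue specification travels in its own definition record.
          cue = "radiating-circles" if source_is_active else "none"
          return {"record": "visual-cue", "cue": cue}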
  • the server network node 42 administers realtime communications between the respective network nodes of the communicants who are present in the zone and provisions at least one data stream connection between the virtual presence apparatus 12 and one or more of the network nodes of the communicants who are present in the zone.
  • the server network node 42 administers respective connections between each active pair of complementary sources and sinks of the client network node and the apparatus. These connections bridge communicant activity in the physical space into the virtual area and bridge communicant activity in the virtual area into the physical space.
  • the server network node 42 administers connections that relay data corresponding to communicant activity in the physical space from the source of the virtual presence apparatus 12 to the client network node.
  • the server network node 42 administers connections that relay data corresponding to communicant activity in the virtual area from the client network node to the sink of the virtual presence apparatus 12 .
  • the virtual presence apparatus 12 publishes data streams of different data stream types, and the server network node 42 provisions the client network nodes to receive data streams of different data stream types that are published by the particular physical apparatus.
  • the server network node 42 provisions a data stream connection between a client network node and the virtual presence apparatus in response to a request from the client network node to subscribe to data published by the particular physical apparatus.
  • the server network node 42 provisions a data stream connection between a client network node of a particular communicant and the virtual presence apparatus automatically upon entry of the particular communicant into the zone.
  • the source of the virtual presence apparatus 12 corresponds to a transducer that transforms human perceptible stimulus that is broadcasted in the physical space into output data of the respective data stream content type. In some examples, the sink of the virtual presence apparatus 12 corresponds to a transducer that transforms input data of the respective data stream content type into human perceptible stimulus that is broadcasted into the physical space. In some examples, the source of the virtual presence apparatus 12 includes a microphone and the sink of the virtual presence apparatus 12 includes a speaker. The microphone generates output voice data from human voice sound projected into the physical space. The speaker projects human voice sound into the physical space based on input voice data associated with the virtual area.
  • the server network node 42 administers connections that relay the output voice data from the apparatus to the client network node and that relay the input voice data from the client network node to the apparatus.
  • the source of the virtual presence apparatus 12 includes a camera that captures images of a scene in the physical space 14 and generates output image data from the captured images.
  • the server network node 42 administers a connection that relays the output image data from the virtual presence apparatus 12 to the client network node.
  • the sink of the virtual presence apparatus 12 includes a projector that projects images into the physical space.
  • the server network node 42 administers a connection that relays input control data for controlling the projecting from the client network node to the virtual presence apparatus 12 .
  • the sink of the virtual presence apparatus 12 includes a laser pointer that projects a laser beam into the physical space.
  • the server network node 42 administers a connection that relays input control data for controlling the projecting of the laser beam from the client network node to the virtual presence apparatus 12 .
  • the server network node 42 administers a connection between an audio source of the client network node and an audio sink of the virtual presence apparatus 12 .
  • the server network node 42 administers a connection between an application sharing source of the client network node and an image projection sink of the virtual presence apparatus 12 .
  • the server network node 42 administers a connection between a laser pointer control source of the client network node and a laser pointer control sink of the apparatus.
  • the virtual presence apparatus 12 is located in a particular physical space, and the server network node 42 locates the object representing the virtual presence apparatus 12 in a particular one of the zones of the virtual area according to a mapping between the particular physical space and the particular zone.
  • the mapping associates an identifier of the physical space with an identifier of the particular zone, creating a persistent association between the particular physical space and the particular zone of the virtual area.
  • the mapping additionally associates an identifier of the virtual presence apparatus 12 with the identifier of the physical space.
  • the visualization of the virtual area shows the particular zone with a label that connotes a name associated with the physical space.
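  • A sketch of such a mapping, using assumed identifiers: the physical space maps to a zone, and the apparatus maps to the physical space, which together yield the persistent association:

      SPACE_TO_ZONE = {"main-conf-room": "zone-main"}          # space -> zone
      APPARATUS_TO_SPACE = {"apparatus-12": "main-conf-room"}  # apparatus -> space

      def zone_for_apparatus(apparatus_id):
          space_id = APPARATUS_TO_SPACE[apparatus_id]
          return SPACE_TO_ZONE[space_id]   # zone holding the apparatus object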
  • the server network node 42 establishes a respective presence in the virtual area for a particular communicant based on a determination that the particular communicant is in the physical space 14 .
  • the server network node 42 receives location data (e.g., Global Positioning System (GPS) data) that is associated with the particular communicant (e.g., by a GPS component of a mobile device, such as a mobile phone or other mobile communication device), and determines that the particular communicant is in the physical space based on comparison of the received location data with location data associated with the physical space.
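  • One plausible implementation of that comparison is a great-circle distance test between the communicant's GPS fix and the location on record for the physical space; the 25-meter tolerance below is an assumed tuning value:

      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          # Great-circle distance in meters between two GPS fixes.
          r = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp = math.radians(lat2 - lat1)
          dl = math.radians(lon2 - lon1)
          a = (math.sin(dp / 2) ** 2
               + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
          return 2 * r * math.asin(math.sqrt(a))

      def communicant_in_space(communicant_fix, space_fix, radius_m=25.0):
          return haversine_m(*communicant_fix, *space_fix) <= radius_m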
  • the server network node 42 receives audio data from the source of virtual presence apparatus 12 , and associates the audio data with a communicant in the physical space based on comparison of the audio data with one or more voice data records associated with respective communicants.
  • the voice records typically correspond to voiceprints (also referred to as voice templates or voice models) that are created from features that are extracted from the recorded speech of known communicants in accordance with a speaker recognition enrollment process. Each voiceprint is associated with the identity of a particular communicant.
  • the server network node 42 typically associates the audio data with the communicant in response to a determination that features extracted from the audio data correspond to the voiceprint previously associated with the communicant.
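  • As an illustrative sketch, this matching step might reduce to a nearest-match search over feature vectors; the cosine similarity measure and threshold below are assumptions, since the text only says extracted features are compared against voiceprints:

      import math

      VOICEPRINTS = {}  # communicant_id -> enrolled feature vector

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          return dot / (math.sqrt(sum(a * a for a in u))
                        * math.sqrt(sum(b * b for b in v)))

      def identify_speaker(features, threshold=0.75):
          best_id, best = None, threshold
          for communicant_id, voiceprint in VOICEPRINTS.items():
              score = cosine(features, voiceprint)
              if score > best:
                  best_id, best = communicant_id, score
          return best_id  # None: no enrolled communicant matched well enough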
  • the server network node 42 can automatically identify communicants who are in the physical space without requiring them to log into the network infrastructure service environment 44 through respective client network nodes. Once a particular communicant in the physical space 14 has been identified, the server network node 42 can automatically establish a presence for that communicant in the virtual area associated with the virtual presence apparatus 12 and track utterances from that communicant in the audio data captured by the virtual presence apparatus such that visual cues indicative of the state of that communicant's voice (e.g., speaking or silent) can be presented in the spatial visualization of the virtual area that is displayed to the remote communicant on the remote client network node 16 .
  • FIG. 7 shows an example of a method that is performed by an example of the communications application 20 for communicating between a virtual area and a physical space.
  • the communications application 20 displays a graphical interface that includes a graphical representation of the virtual area that supports establishment of respective presences of communicants operating respective client network nodes, a graphical representation of each of the communicants who is present in the virtual area, and a graphical representation of an object associated with an apparatus (e.g., the virtual presence apparatus 12 ) in the physical space ( FIG. 7 , block 110 ).
  • the apparatus has an apparatus sink that is complementary to the client source and an apparatus source that is complementary to the client sink.
  • the communications application 20 establishes a respective connection between each active pair of complementary sources and sinks of the client network node and the apparatus in association with the virtual area, where each connection supports transmission of the respective data stream content type between the apparatus and the client network node ( FIG. 7 , block 112 ).
  • the communications application 20 presents interaction controls associated with the object for interacting with communicants who are present in the physical space ( FIG. 7 , block 114 ).
  • the graphical representation of the virtual area corresponds to a virtualized representation of the physical space.
  • the virtualized representation connotes the real-world physical space.
  • the virtualized representation may have a virtual presentation that resembles one or more distinctive visual features of the real-world physical space or the virtualized representation may include a descriptive name or other label that is associated with the real-world physical space.
  • the communications application 20 receives from a network service administering the virtual area a specification for displaying the graphical representation of the object in spatial relation to the graphical representation of the virtual area.
  • the communications application 20 shows in the graphical interface indications of respective states of the apparatus source of the virtual presence apparatus 12 in connection with the graphical representation of the object.
  • the process of showing the state indications involves displaying a first visual cue when the virtual presence apparatus source is in an active state, and displaying a second visual cue that is different from the first visual cue when the virtual presence apparatus source is in an inactive state.
  • based on communicant input in connection with the object, the communications application 20 establishes a connection between an audio source of the client network node and an audio sink of the virtual presence apparatus 12 .
  • based on communicant input in connection with the object, the communications application 20 establishes a connection between an application sharing source of the client network node and an image projection sink of the virtual presence apparatus 12 .
  • based on communicant input in connection with the object, the communications application 20 establishes a connection between a laser pointer control source of the client network node and a laser pointer control sink of the virtual presence apparatus 12 .
  • the source of the virtual presence apparatus 12 includes a microphone and the sink of the virtual presence apparatus 12 includes a speaker.
  • the microphone generates output voice data from human voice sound projected into the physical space
  • the speaker projects human voice sound into the physical space from input voice data associated with the virtual area
  • the communications application 20 establishes connections that relay the output voice data from the virtual presence apparatus 12 to the client network node and that relay the input voice data from the client network node to the virtual presence apparatus 12 .
  • the source of the virtual presence apparatus 12 includes a camera that captures images of a scene in the physical space and generates output image data from the captured images, and the communications application 20 establishes a connection that relays the output image data from the virtual presence apparatus 12 to the client network node.
  • the sink of the virtual presence apparatus 12 includes a projector that projects images into the physical space, and the communications application 20 establishes a connection that relays input control data for controlling the projecting from the client network node to the virtual presence apparatus 12 .
  • the sink of the virtual presence apparatus 12 includes a laser pointer that projects a laser beam into the physical space, and the communications application 20 establishes a connection that relays input control data for controlling the projecting of the laser beam from the client network node to the virtual presence apparatus 12 .
  • FIG. 8 shows an example of a graphical interface 120 that is generated by the communications application 20 on a client network node (e.g., client node 16 ) for interfacing a user with an example 122 of the virtual presence apparatus 12 in the physical space 14 .
  • the graphical interface 120 includes a toolbar 124 and a viewer panel 126 .
  • the toolbar 124 includes a headphone control 128 for toggling on and off the local speakers of the client network node, a microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60 .
  • the user also may select the view screen object 60 to initiate an application sharing session in the virtual area 46 .
  • the viewer panel 126 typically shows communicant selectable content being rendered by the client network node.
  • Examples of such content include a spatial visualization of the virtual area 46 (currently shown) and application content (e.g., web service content rendered by a web browser application such as Microsoft® Internet Explorer®, or document content being rendered by a document processing application such as Microsoft® Word® or Power Point® software applications).
  • the virtual presence apparatus 122 is a virtual area enabled speakerphone, which is represented by a speakerphone object 138 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126 .
  • the virtual presence apparatus 122 includes a microphone that generates output voice data from human voice sounds projected into the physical space 14 and a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area.
  • the “on” or “off” state of the speakerphone microphone is depicted in the spatial visualization of the virtual area by the presence or absence of a series of concentric circles 140 that dynamically radiate away from the speakerphone object 138 in a series of expanding waves.
  • when the microphone is on, the radiating concentric circles 140 are present and, when the microphone is off, the radiating concentric circles 140 are absent.
  • the current activity state of the speakerphone microphone channel is indicated by a dynamic visualization that lightens and darkens the speakerphone object 138 in realtime to reflect the presence or absence of audio data on the speakerphone microphone channel.
  • the user can determine when a communicant in the physical space 14 is speaking by the “blinking” of the coloration of the speakerphone object 138 .
  • FIG. 9 shows an example of a graphical interface 150 that is generated by the communications application 20 on a client network node (e.g., client node 16 ) for interfacing a user with an example 152 of the virtual presence apparatus 12 in the physical space 14 .
  • the graphical interface 150 includes the toolbar 124 and the viewer panel 126 of the graphical interface 120 shown in FIG. 8 .
  • the toolbar 124 includes the headphone control 128 for toggling on and off the local speakers of the client network node, the microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60 .
  • the user may select the view screen object 60 to initiate an application sharing session in the virtual area 46 .
  • the viewer panel 126 typically shows communicant selectable content that is rendered by the client network node.
  • the virtual presence apparatus 152 is a virtual area enabled device that integrates speakerphone and digital projector functionalities.
  • the virtual presence apparatus 152 includes a microphone that generates output voice data from human voice sounds projected into the physical space 14 , a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area, and a projector that projects light into the physical space 14 responsive to input data transmitted by the client network node in connection with the virtual area 46 .
  • the virtual presence apparatus 152 is represented by a projector object 154 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126 .
  • the communications application 20 modifies the graphical interface 150 to include a Share button 134 and a Stop button 136 in the tool bar 124 , and sets the viewer panel 126 to display the contents of an application being shared.
  • the user initiates an application sharing session in the physical space 14 by selecting the Share button 134 .
  • the communications application 20 provides an interface that enables the user to select an application to share (e.g., Microsoft® PowerPoint®), sets the viewer panel 126 to display the contents being rendered by the selected application, and streams screen share data to the virtual presence apparatus 152 , which projects the screen share data onto the real-world view screen 34 in the physical space 14 .
  • Examples of systems and methods of generating and streaming screen share data are described in U.S. patent application Ser. No. 12/418,270, filed Apr. 3, 2009. The user can terminate the application sharing session in the physical space 14 by selecting the Stop button 136 .
  • FIG. 10 shows an example of a graphical interface 160 that is generated by the communications application 20 on a client network node (e.g., client node 16 ) for interfacing a user with an example 162 of the virtual presence apparatus 12 in the physical space 14 .
  • the communicant 34 is giving a presentation on a white board 158 in the physical space 14 .
  • the graphical interface 160 includes the toolbar 124 and the viewer panel 126 of the graphical interface 120 shown in FIG. 8 .
  • the toolbar 124 includes the headphone control 128 for toggling on and off the local speakers of the client network node, the microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60 .
  • the user may select the view screen object 60 to initiate an application sharing session in the virtual area 46 .
  • the viewer panel 126 typically shows communicant selectable content being rendered by the client network node.
  • the virtual presence apparatus 162 is a virtual area enabled device that integrates a speakerphone, a digital projector, and a camera.
  • the speakerphone includes a microphone that generates output voice data from human voice sounds projected into the physical space 14 , and a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area.
  • the projector projects light (e.g., images, shapes, lines, and spots) into the physical space 14 responsive to input data transmitted by the client network node in connection with the virtual area 46 .
  • the projector is a digital image projector.
  • the projector is a remote-controlled laser pointer.
  • the camera captures images of a scene in the physical space 14 (e.g., images of the whiteboard 158 ) and generates output image data from the captured images.
  • the camera may be implemented by any type of imaging device that is capable of capturing one-dimensional or two-dimensional images of a scene.
  • the camera typically is a digital video camera.
  • the virtual presence apparatus 162 is represented by a projector-camera object 164 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126 .
  • the communications application 20 modifies the graphical interface 160 to include a Share button 134 and a Stop button 136 in the tool bar 124 , and sets the viewer panel 126 to display the images captured by the virtual presence apparatus 162 in the physical space 14 .
  • the user initiates a sharing session in the physical space 14 by selecting the Share button 134 .
  • the communications application 20 sets the viewer panel 126 to display the images captured by the virtual presence apparatus 162 , provides an interface that enables the user to provide inputs in relation to the images displayed in the viewer panel (e.g., superimpose graphical content, such as predesigned or hand drawn images or comments, onto the images), and streams data describing the inputs to the virtual presence apparatus 162 , which projects the streamed data onto the whiteboard 158 in the physical space 14 .
  • the user can terminate the sharing session in the physical space 14 by selecting the Stop button 136 .
  • the communications application 20 provides drawing tools (e.g., the pencil tool 166 ) that allow the user to superimpose lines, shapes (e.g., the ellipse 168 ), and other graphical content onto the image of the view screen 34 captured by the camera component of the virtual presence apparatus 162 .
  • the communications application 20 may stream data describing the user inputs to the virtual presence apparatus 162 .
  • the communications application 20 may convert the user inputs into control data for controlling the movement of the remote-controlled laser pointer in the physical space 14 .
  • the user can interact with the communicants in the physical space 14 in a substantive way.
  • the user can provide comments or other visual indications that highlight or direct a viewer's attention to specific parts of the presentation being given by the communicant 32 in connection with the white board 158 .
  • the graphical interface 160 includes additional controls for streaming application sharing data from the client network node to the virtual presence apparatus 162 for projection onto the whiteboard 158 or other surface in the physical space 14 , as described above in connection with the example shown in FIG. 9 .
  • FIG. 11 shows an example of a graphical interface 170 that is generated by the communications application 20 on a client network node for interfacing a user with an example 172 of the virtual presence apparatus 12 in the physical space 14 .
  • the communicant 34 is giving a presentation on a white board 158 in the physical space 14 .
  • the graphical interface 170 includes the toolbar 124 and the viewer panel 126 of the graphical interface 120 shown in FIG. 8 .
  • the toolbar 124 includes the headphone control 128 for toggling on and off the local speakers of the client network node, the microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60 .
  • the user may select the view screen object 60 to initiate an application sharing session in the virtual area 46 .
  • the viewer panel 126 typically shows communicant selectable content being rendered by the client network node.
  • the virtual presence apparatus 172 is a virtual area enabled device that integrates a speakerphone and a camera.
  • the speakerphone includes a microphone that generates output voice data from human voice sounds projected into the physical space 14 , and a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area.
  • the camera captures images of a scene in the physical space 14 and generates output image data from the captured images.
  • the camera may be implemented by any type of imaging device that is capable of capturing one-dimensional or two-dimensional images of a scene.
  • the camera typically is a digital video camera.
  • the virtual presence apparatus 172 is represented by a camera object 174 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126 .
  • the communications application 20 modifies the graphical interface 170 to include a View button 176 and a Stop button 178 in the tool bar 124 , and sets the viewer panel 126 to display the images captured by the virtual presence apparatus 172 in the physical space 14 .
  • the user initiates a viewing session in the physical space 14 by selecting the View button 176 .
  • the communications application 20 sets the viewer panel 126 to display the images captured by the virtual presence apparatus 172 , provides an interface 180 that enables the user to control the view of the physical space that is captured by the camera component of the virtual presence apparatus, and streams data describing the control inputs to the virtual presence apparatus 172 , which moves the camera based on the streamed data.
  • the communications application 20 provides a navigation control tool 180 that allows the user to control the pan and tilt of the camera component of the virtual presence apparatus 172 .
  • the remote communicant can interact with the physical space 14 in a substantive way (e.g., see different views of the persons and activities in the physical space 14 ).
  • the user can terminate the viewing session in the physical space 14 by selecting the Stop button 178 .
  • different elements of the graphical interfaces described above in connection with the examples shown in FIGS. 8-11 are incorporated into a single graphical interface that may be used to interact with the virtual presence apparatus 162 , which integrates a speakerphone, a digital projector, and a camera.
  • the graphical interface provides independent control over the respective functionalities of the speakerphone, the digital projector, and camera to enable application sharing, image projection of comments and other annotations, and camera viewing modes of operation.
  • the virtual environment creator 18 enhances the immersive connections between virtual area locations (e.g., zones) and physical spaces by creating persistent associations between the virtual area locations and the respective physical spaces. These persistent associations typically are stored in a table or other data structure that maps each real-world location to a respective zone. In some of these examples, the virtual environment creator 18 reinforces these associations with visual cues in the visualizations of the virtual area locations that connote the real-world physical spaces (e.g., by having a virtual presentation that resembles one or more distinctive visual features of the real-world physical space or by including a descriptive name or other label that is associated with the real-world physical space).
  • FIG. 12 shows an example of a two-dimensional visualization of a virtual area 200 (“Sococo HQ”).
  • the Sococo HQ virtual area includes a lobby 202 , a Main conference room 204 , a West Conference room 206 , an East Conference room 208 , a West Nook zone 210 , an East Nook zone 212 , a Courtyard zone 214 , and sixteen offices.
  • each of the conference rooms 204 - 208 includes respective viewscreen objects 216 - 230 , table objects 232 , 234 , and 236 , and objects representing respective virtual presence apparatus 238 , 240 , 242 , and supports realtime audio, chat, and application and network resource sharing communications between the network nodes in the same conference room.
  • Each of the offices includes respective viewscreen objects (not shown) and a respective telephony object (not shown) and supports realtime audio, chat, and application and network resource sharing communications between the network nodes in the same office.
  • Each of the telephony objects supports shared dial-in and dial-out telephony communications as described in U.S. patent application Ser. No. 13/165,729, filed Jun. 21, 2011, and communicants interacting with the telephony objects are represented by avatars decorated with a graphical representation of a telephone (see, e.g., the avatar 215 in Carl's Office).
  • Each of the West Nook 210 , East Nook 212 , and Lobby 202 zones respectively supports realtime audio and chat communications between the network nodes in the same zone.
  • the conference rooms 204 - 208 are associated with different real-world physical spaces.
  • the different real-world physical spaces may be physically connected to or proximate one another (e.g., rooms connected by a common structure, such as rooms in an office building, or disconnected rooms of related co-located structures, such as rooms in a distributed office building complex) or they may be physically remote from one another (e.g., rooms in separate and distinct real-world office buildings, which may be in the same or different geographic regions).
  • the virtual environment creator 18 reinforces these associations with visual cues in the visualizations of the virtual area locations that connote the corresponding real-world physical spaces. In the example shown in FIG. 12 , each of the virtual conference rooms 204 - 208 is labeled with a respective name (e.g., Main, West Conference, and East Conference) that corresponds to the name that is used to identify the corresponding real-world physical space.
  • virtual presentations of the virtual conference rooms 204 - 208 include respective features (e.g., the number and placement of virtual view screens 216 - 230 , virtual plants 240 , 242 and virtual artwork 244 ) that correspond to distinctive visual features of the associated real-world physical spaces.
  • the resulting visualization of the virtual area 200 allows a user to see multiple concurrent independent conversations and other interactions that are occurring in different physical spaces in a single view in which the interactions are organized according to a spatial metaphor that allows the user to quickly learn who is meeting with whom and the contexts of those meetings (as defined by the zones in which the meetings are occurring).
  • the objects 238 - 242 in the virtual conference rooms 204 - 208 provide interfaces for communicants in the virtual area 200 to interact with the associated virtual presence apparatus and thereby be bridged into the corresponding physical spaces.
  • FIG. 13 shows an example of a method by which the server network node 42 manages communications between virtual area zones and multiple respective real-world locations via respective physical apparatus.
  • the server network node 42 administers zones of one or more virtual areas in a virtual communications environment ( FIG. 13 , block 190 ).
  • Each of respective ones of the zones defines a respective persistent context for realtime communications between client network nodes of respective communicants who are present in the zone.
  • the server network node 42 administers realtime communications between the respective network nodes of co-present communicants in respective ones of the zones.
  • for each of multiple physical apparatus in respective real-world locations, the server network node 42 establishes a respective presence for the physical apparatus in a respective one of the zones based on mappings between the respective real-world location and the respective zone, and creates a respective object that represents the physical apparatus in the respective zone and is associated with a respective interface for communicant interaction with the physical apparatus ( FIG. 13 , block 192 ).
  • the server network node 42 transmits to each of one or more of the respective client network nodes a respective specification of a visualization of a spatial layout of the zones, graphical representations of the objects in their respective zones of presence, and graphical representations of avatars representing communicants in their respective zones of presence ( FIG. 13 , block 194 ).
  • the server network node 42 provisions respective data stream connections between respective ones of the physical apparatus and respective ones of the client network nodes ( FIG. 13 , block 196 ).
  • the server network node 42 establishes the respective presence of each of respective ones of the physical apparatus in response to receipt of a respective login request that is generated by and identifies the respective physical apparatus.
  • for each of one or more of the physical apparatus, the server network node 42 publishes in the respective zone of presence of the physical apparatus one or more physical space data streams that include respective representations of human perceptible physical stimuli in the respective real-world location. In the process of provisioning the respective data stream connections, the server network node 42 provisions data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the client network nodes for transforming into human perceptible stimuli. Each of one or more of the client network nodes typically publishes one or more respective client data streams.
  • the server network node 42 provisions data stream connections for transmitting respective ones of the published client data streams to respective ones of the client network nodes for transforming into human perceptible stimuli, and transmitting respective ones of the published client data streams to respective ones of the physical apparatus for transforming into human perceptible stimuli in the respective real-world locations.
  • a particular one of the physical apparatus is operable to perform a respective function in its respective real-world location in response to data transmitted by a particular one of the network nodes on a respective one of the data stream connections that is provisioned based on a request from the particular network node referencing the object representing the particular physical apparatus.
  • the particular physical apparatus sends notifications of events relating to the respective function that is performable by the particular physical apparatus. Based on a notification of an event relating to the respective function that is performable by the particular physical apparatus, the server network node 42 sends a notification of the event to the particular one of the client network nodes of a respective one of the communicants.
  • the particular physical apparatus may include a printer, in which case the server network node 42 may send to the particular network node a notification that a document has been printed.
  • the particular physical apparatus may include a facsimile machine, in which case the server network node 42 may send to the particular network node a notification of an incoming facsimile.
  • the particular physical apparatus may include a telephony device, in which case the server network node 42 may send to the particular network node a notification of an incoming telephone call.
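  • A sketch of that notification bridging, with assumed identifiers and an assumed transport callback:

      ZONE_OF_APPARATUS = {"fax-308": "work-office-zone"}              # illustrative
      PRESENT_IN_ZONE = {"work-office-zone": ["ed-node", "paul-node"]}

      def relay_apparatus_event(apparatus_id, event, send):
          # send(node, message) stands in for the server's delivery path.
          zone = ZONE_OF_APPARATUS[apparatus_id]
          for node in PRESENT_IN_ZONE.get(zone, []):
              send(node, {"apparatus": apparatus_id, "event": event})

      # e.g., relay a fax receipt so each present client can raise a notification:
      relay_apparatus_event("fax-308", "fax-received",
                            lambda node, msg: print(node, msg))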
  • the server network node 42 may provision a variety of different data stream connections between respective ones of the client network nodes and the physical apparatus.
  • the particular physical apparatus may include a printer and, based on a request from a particular client network node referencing the object representing the particular physical apparatus, the server network node 42 may provision at least one data stream connection for the particular network node to print a document.
  • the particular physical apparatus may include a facsimile machine and, based on a request from a particular client network node referencing the object representing the particular physical apparatus, the server network node 42 may provision at least one data stream connection for the particular network node to send a facsimile or receive a facsimile.
  • the particular physical apparatus may include a telephony device and, based on a request from a particular client network node referencing the object representing the particular physical apparatus, the server network node 42 may provision at least one data stream connection for the particular network node to place an outgoing telephone call or receive an incoming telephone call.
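  • The provisioning requests that reference an apparatus object might be dispatched along these lines; the object-to-apparatus table and stream kinds are illustrative assumptions:

      OBJECT_TO_APPARATUS = {
          "printer-object": ("printer-316", {"print"}),
          "fax-object": ("fax-308", {"fax-send", "fax-receive"}),
          "phone-object": ("phone-310", {"call-out", "call-in"}),
      }

      def provision_stream(client_node_id, object_id, kind):
          apparatus_id, kinds = OBJECT_TO_APPARATUS[object_id]
          if kind not in kinds:
              raise ValueError(f"{object_id} does not support {kind}")
          # Describes one provisioned data stream connection.
          return {"from": client_node_id, "to": apparatus_id, "kind": kind}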
  • FIG. 14 shows an example of a network communications environment 300 that includes network resources that may be connected over a network 302 to an example of a client network node 304 by the virtual environment creator 18 according to mappings 305 between the network resources and zones of one or more virtual areas.
  • the virtual environment creator 18 sends a specification to the client network node 304 for generating an example of a graphical user interface 306 that includes a spatial visualization that partitions the network resources into zones according to the mappings 305 .
  • the visualization shows a first zone 344 (i.e., a Work Office zone) that is mapped to a real-world work office 314 of a user (Ed in this example) and a second zone 346 (i.e., a Home Office zone) that is mapped to the user's real-world home office 322 .
  • the Work Office zone 344 and the Home Office zone 346 are zones of a common virtual area (e.g., the user's Business virtual area).
  • the Work Office zone 344 and the Home Office zone 346 are zones of different virtual areas (e.g., a Work virtual area and a Home virtual area).
  • the network resources include: a facsimile machine 308 , a telephony device 310 (e.g., a SIP phone), and a network video camera 312 that are located in the user's work office 314 ; and a printer 316 , a telephony device 318 , and a scanner 320 that are located in the user's home office 322 .
  • the graphical user interface 306 includes a toolbar 326 , a viewer panel 328 of the type described above in connection with FIGS. 8-11 , a people panel 330 , and a chat panel 331 .
  • the viewer panel 328 includes a canvas area for displaying visual content.
  • the viewer panel 328 displays virtual area visualizations that are rendered by the virtual area enabled communications application 20 , network resource content that is rendered by a web browser application such as Microsoft® Internet Explorer®, application sharing content that is being shared by the user or another communicant in the virtual area, and visual content received from virtual presence apparatus in the respective real-world locations that are linked to virtual area zones that are associated with the user.
  • the viewer panel 328 is operating in a map view mode that shows respective visualizations of the Work Office zone 344 and the Home Office zone 346 .
  • the visualization of the Work Office zone 344 includes graphical representations of viewscreen objects 338 , 340 , a table object 354 , a telephony object 356 , (on the left-hand side from top to bottom) objects 357 that respectively represent the facsimile machine 308 , the phone 310 , and the camera 312 in the real-world work office 314 , and graphical representations of the communicants who are present in the virtual Work Office zone 344 .
  • the visualization of the Home Office zone 346 includes graphical representations of the viewscreen object 342 , a table object 358 , a telephony object 360 , (on the left-hand side from top to bottom) objects 361 that respectively represent the printer 316 , the phone 318 , and the scanner 320 in the real-world home office 322 , and graphical representations of the communicants who are present in the virtual Home Office zone 346 .
  • Each of the telephony objects 356 - 360 supports shared dial-in and dial-out telephony communications with one or more public switched telephone network (PSTN) devices 303 over a PSTN 305 , as described in U.S. patent application Ser. No. 13/165,729, filed Jun. 21, 2011.
  • the toolbar 326 includes a headphone control 332 for toggling on and off the local speakers of the client network node, a microphone control 334 for toggling on and off the local microphone of the client network node, and one or more view screen buttons 336 for setting the viewer panel 328 to show content being shared in connection with respective view screen objects 338 , 340 , 342 in the user's current zone of presence (e.g., the Work Office zone 344 or the Home Office zone 346 ).
  • the people panel 330 depicts the realtime availabilities and activities of some of or all the user's contacts across the different communication contexts defined by the Work Office zone 344 and the Home Office zone 346 .
  • the people panel 330 shows Ed's contacts segmented into a Work Office section 348 , a Home Office section 350 , and a Contacts section 352 .
  • the Work Office section 348 shows graphical representations, respective states, and realtime activities of the communicants who are present in the Work Office zone 344 (i.e., Ed, Paul, and David); the Home Office section 350 shows graphical representations, respective states, and realtime activities of the communicants who are present in the Home Office zone 346 (i.e., Josh and Matt); and the Contacts section 352 shows all or a selected portion of Ed's contacts who are not represented in any of the other sections 348 , 350 . Examples of the people panel 330 are described in U.S. patent application Ser. No. 13/209,812, filed Aug. 15, 2011, and U.S. patent application Ser. No. 12/354,709, filed Jan. 15, 2009.
  • the chat panel 331 shows a chat interface for a persistent virtual chat area for interactions occurring in connection with the user's current zone of presence.
  • the user (Ed) is present in the Work Office zone 344 ; therefore, the chat panel 331 shows the persistent virtual chat area for text chat and other interactions occurring in the Work Office zone 344 .
  • Examples of the chat panel 331 are described in U.S. patent application Ser. No. 13/209,812, filed Aug. 15, 2011, and U.S. patent application Ser. No. 12/354,709, filed Jan. 15, 2009.
  • the virtual environment creator 18 manages communications between communicants who are present in the virtual Work Office and Home Office zones 344 , 346 and the physical apparatus 308 - 312 , 316 - 320 in their respective real-world locations in accordance with the mappings 305 between the zones 344 , 346 and the physical apparatus 308 - 312 , 316 - 320 .
  • communicants in the virtual Work Office zone 344 are able to interact with any of the facsimile machine 308 , the phone 310 , and the camera 312 in the real-world Work Office 314 via the interfaces provided in connection with the graphical representations 357 of those apparatus in the visualization of the virtual Work Office zone 344 ; similarly, communicants in the virtual Home Office zone 346 are able to interact with any of the printer 316 , the phone 318 , and the scanner 320 in the real-world Home Office 322 via the interfaces provided in connection with the graphical representations 361 of those apparatus in the visualization of the virtual Home Office zone 346 .
  • the virtual environment creator 18 bridges real-world notifications (e.g., notifications of events, alerts, and the like) that are generated by the physical apparatus 308 - 312 , 316 - 320 in their respective real-world locations 314 , 322 into the virtual zones 344 , 346 and bridges responses to those notifications received in connection with the virtual zones 344 , 346 into the physical real-world locations 314 , 322 of the physical apparatus 308 - 312 , 316 - 320 .
  • the facsimile machine 308 generates in the real-world Work Office 314 a notification that a facsimile was received from a particular fax number.
  • the facsimile machine 308 also sends a fax receipt notification to the virtual environment creator 18 .
  • the virtual environment creator 18 relays the fax receipt notification 370 to the user and to other communicants who are present in the virtual Work Office zone 344 .
  • the user may click the View Fax button 372 in the notification window to cause the received facsimile to be displayed in the viewer panel 328 .
  • the phone 318 generates in the real-world Home Office 322 a notification of an incoming call from a particular phone number.
  • the phone 318 also sends an incoming call notification to the virtual environment creator 18 .
  • the virtual environment creator 18 relays the incoming call notification 374 to the user and to other communicants who are present in the virtual Home Office zone 346.
  • the user may click the Answer Call button 376 in the notification window to cause the user's headset to be connected to receive the incoming call.
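  • The following minimal sketch illustrates the notification-bridging flow just described, assuming an in-memory lookup table standing in for the mappings 305 and a simple presence table; the identifier strings and function names are hypothetical stand-ins, not the implementation described here:
```python
ZONE_FOR_APPARATUS = {            # stand-in for the mappings 305
    "fax-308": "work-office-344",
    "phone-318": "home-office-346",
}

PRESENCE = {                      # zone id -> communicants currently present
    "work-office-344": ["Ed", "Paul", "David"],
    "home-office-346": ["Josh", "Matt"],
}

def send_notification_window(communicant, zone, event, actions):
    # stand-in for rendering a notification window (e.g., 370, 374)
    print(f"[{zone}] -> {communicant}: {event} {actions}")

def bridge_notification(apparatus_id, event, actions):
    """Relay a real-world device event into its mapped virtual zone."""
    zone = ZONE_FOR_APPARATUS[apparatus_id]
    for communicant in PRESENCE[zone]:
        send_notification_window(communicant, zone, event, actions)

# e.g., the facsimile machine 308 reports a received fax
bridge_notification("fax-308", "Fax received from 555-0100", ["View Fax"])
```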
  • the virtual environment creator 18 also provides visual cues indicating the states and realtime activities of the real-world physical apparatus 308-312, 316-320.
  • the “on” or “off” states of the real-world physical apparatus 308-312, 316-320 are indicated by two different presentations of their respective graphical representations 357, 361 (e.g., a brighter coloration when a physical apparatus is turned on, and a darker or dimmed coloration when the physical apparatus is turned off).
  • the current activity states of the real-world physical apparatus 308-312, 316-320 are indicated by two different presentations of their respective graphical representations 357, 361 (e.g., using a static graphical representation of a physical apparatus when the physical apparatus currently is inactive, and using a dynamic graphical representation, such as a blinking of the coloration of the graphical representation, when the physical apparatus currently is active).
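  • A small sketch of the state-to-cue mapping just described, assuming the two state dimensions (power and activity) reduce to a coloration and an animation choice; the names and dictionary representation are illustrative assumptions:
```python
def presentation_for(powered_on: bool, active: bool) -> dict:
    """Map a device's power and activity states to its visual cue."""
    return {
        "coloration": "bright" if powered_on else "dimmed",
        "animation": "blinking" if powered_on and active else "static",
    }

assert presentation_for(True, False) == {"coloration": "bright", "animation": "static"}
assert presentation_for(False, False) == {"coloration": "dimmed", "animation": "static"}
```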
  • the network resources in the real-world Work Office and Home Office are available both physically (via their respective physical interfaces in their respective real-world locations) and virtually (via their respective virtual interfaces in their respective virtual locations).
  • the presentation of the virtual representations of the physical apparatus according to the spatial metaphor shown in FIG. 14 provides a way for users to organize their network resources that is more intuitive and effective than traditional non-spatial, directory-based visualizations of network resources.
  • the virtual environment creator 18 administers virtual areas based on signals received from intelligent personal gear that is associated with communicants.
  • the personal gear is able to infer information about a communicant's state (e.g., a headset is on the communicant's head, the communicant is proximate his client network node, and the communicant is located in a particular physical space), determine state event changes based on those inferences, and report those state event changes to the virtual environment creator 18 , which reflects them in the virtual area representation.
  • the server network node 42 administers a virtual area in a virtual communications environment.
  • the virtual area includes one or more zones, where each zone defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone.
  • the server network node administers realtime communications between the respective network nodes of communicants who are co-present in the zone.
  • the server network node 42 transmits a respective specification of a visualization of the virtual area that includes graphical representations of the one or more zones and avatars that respectively represent the communicants in the one or more zones in which they respectively have presence.
  • From sensing apparatus co-located with a particular one of the communicants in a physical space, the server network node 42 receives state information relating to a physical state of the particular communicant. Based on the state information, the server network node 42 updates the specification of the visualization of the virtual area and the avatars and transmits the updated specification to respective ones of the communicants who are present in the virtual area.
  • the state information describes the current real-world location of the particular communicant.
  • the server network node 42 provides an indication of the current real-world location of the particular communicant in the visualization.
  • the process of updating the visualization specification includes locating the avatar representing the particular communicant in a zone of the virtual area associated with the current real-world location of the particular communicant.
  • the process of updating the visualization specification includes providing a descriptive label indicative of the current real-world location of the particular communicant in association with the graphical representation of the avatar representing the particular communicant.
  • the state information describes the state of the particular communicant in relation to a physical device that is associated with the particular communicant.
  • the server network node 42 updates the graphical representation of the avatar representing the particular communicant based on the state of the particular communicant in relation to the physical device.
  • the physical device is a headset. Based on a determination that the state information indicates that the particular communicant is wearing the headset, the server network node 42 includes a graphical representation of a headset with the graphical representation of the avatar of the particular communicant.
  • the state information describes a physical relationship between the particular communicant and another one of the communicants who is present in the virtual area. This information may be obtained from a variety of different detection apparatus that are able to detect when the communicants are in the same physical space or when one communicant is within a certain distance of the other communicant.
  • the server network node 42 updates the visualization specification by updating the graphical representations of the avatars of the particular communicant and the other communicant to reflect the physical relationship between the particular communicant and the other communicant.
  • the server network node 42 may include, with the graphical representations of the avatars of the particular communicant and the other communicant, respective indications that the two communicants are physically co-located.
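  • The following sketch summarizes how the sensed state events described above might be folded into the visualization specification, under the assumption that the specification is a plain dictionary the server retransmits after each change; all field and event names are assumptions for illustration:
```python
def apply_state_event(spec, event):
    """Fold one sensed state event into the visualization specification."""
    avatar = spec["avatars"][event["communicant"]]
    if event["kind"] == "location":
        avatar["zone"] = event["zone"]                 # move to mapped zone
        avatar["label"] = event.get("place_name", "")  # descriptive location label
    elif event["kind"] == "headset":
        avatar["headset"] = event["worn"]              # headset graphic on/off
    elif event["kind"] == "co_located":
        other = spec["avatars"][event["other"]]
        avatar["co_located_with"] = event["other"]     # mark both avatars
        other["co_located_with"] = event["communicant"]
    return spec  # retransmitted to every communicant present in the area

spec = {"avatars": {"Ed": {}, "Paul": {}}}
apply_state_event(spec, {"kind": "headset", "communicant": "Ed", "worn": True})
apply_state_event(spec, {"kind": "co_located", "communicant": "Ed", "other": "Paul"})
```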

Abstract

Examples of systems and methods of communicating between a virtual area and a physical space bridge the virtual area into the physical space and bridge the physical space into the virtual area through physical apparatus located in the physical space. A virtual area may include a zone that defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone. A respective presence in the zone typically is established for each of a physical apparatus and a communicant associated with a network node. An object represents the physical apparatus in the virtual area and an avatar represents the communicant in the virtual area. The object typically is associated with an interface for interacting with the physical apparatus. At least one data stream connection typically is provisioned between the physical apparatus and the network node.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • Under 35 U.S.C. §119(e), this application claims the benefit of U.S. Provisional Application No. 61/510,698, filed Jul. 22, 2011, and U.S. Provisional Application No. 61/637,190, filed Apr. 23, 2012, the entirety of each of which is incorporated herein by reference.
  • This application also relates to the following co-pending patent applications, the entirety of each of which is incorporated herein by reference: U.S. patent application Ser. No. 13/409,344, filed Mar. 1, 2012; U.S. application Ser. No. 13/229,349, filed Sep. 9, 2011; U.S. application Ser. No. 13/229,395, filed Sep. 9, 2011; U.S. application Ser. No. 13/209,812, filed Aug. 15, 2011; U.S. application Ser. No. 12/825,512, filed Jun. 29, 2010; U.S. application Ser. No. 12/694,126, filed Jan. 26, 2010; U.S. application Ser. No. 12/509,658, filed Jul. 27, 2009; U.S. application Ser. No. 12/418,243, filed Apr. 3, 2009; U.S. application Ser. No. 12/418,270, filed Apr. 3, 2009; U.S. application Ser. No. 12/354,709, filed Jan. 15, 2009; U.S. application Ser. No. 12/630,973, filed on Dec. 4, 2009; U.S. application Ser. No. 12/818,517, filed Jun. 18, 2010; U.S. patent application Ser. No. 12/855,210, filed Aug. 12, 2010; U.S. Provisional Patent Application No. 61/563,088, filed Nov. 23, 2011; and U.S. Provisional Patent Application No. 61/535,910, filed Sep. 16, 2011.
  • BACKGROUND
  • When face-to-face communications are not practical, people often rely on one or more technological solutions to meet their communications needs. These solutions typically are designed to simulate one or more aspects of face-to-face communications. Traditional telephony systems enable voice communications between callers. Instant messaging (also referred to as “chat”) communications systems enable users to communicate text messages in real time through instant message computer clients. Some instant messaging systems additionally allow users to be represented in a virtual environment by user-controllable graphical objects (referred to as “avatars”). Interactive virtual reality communication systems enable users in remote locations to communicate and interact with each other by manipulating their respective avatars in virtual spaces.
  • DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram of an example of a network communications environment that includes virtual presence apparatus in a physical space, a remote client network node, and a virtual environment creator.
  • FIG. 2 shows a flow diagram of a method of communicating between a virtual area and a physical space.
  • FIG. 3 is a flow diagram of an example of a method performed by an example of virtual presence apparatus.
  • FIG. 4 is a block diagram of an example of virtual presence apparatus.
  • FIG. 5A is a block diagram of an example of virtual presence apparatus connected to a server network node.
  • FIG. 5B is a block diagram of an example of virtual presence apparatus connected to a server network node.
  • FIG. 5C is a block diagram of an example of virtual presence apparatus connected to a server network node.
  • FIG. 6 is a flow diagram of an example of a method of administering communications between a virtual area and a physical space.
  • FIG. 7 is a flow diagram of an example of a method of communicating between a virtual area and a physical space.
  • FIG. 8 is a diagrammatic view of an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in a physical space.
  • FIG. 9 is a diagrammatic view of an example of a physical space and an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in the physical space.
  • FIG. 10 is a diagrammatic view of an example of a physical space and an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in the physical space.
  • FIG. 11 is a diagrammatic view of an example of a physical space and an example of a graphical interface for interfacing a user with an example of virtual presence apparatus in the physical space.
  • FIG. 12 is a diagrammatic view of an example of a visualization of a virtual area.
  • FIG. 13 is a flow diagram of an example of a method by which an example of a server network node manages communications between virtual area zones and multiple physical apparatus in respective real-world locations.
  • FIG. 14 is a diagrammatic view of a network communications environment that includes network resources connected to an example of a client network node that generates an example of a graphical user interface that includes a spatial visualization of the network resources.
  • DETAILED DESCRIPTION
  • In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
  • I. Definition of Terms
  • A “communicant” is a person who communicates or otherwise interacts with other persons over one or more network connections, where the communication or interaction may or may not occur in the context of a virtual area. A “user” is a communicant who is operating a particular network node that defines a particular perspective for descriptive purposes.
  • A “computer” is any machine, device, or apparatus that processes data according to computer-readable instructions that are stored on a computer-readable medium either temporarily or permanently. A “computer operating system” is a software component of a computer system that manages and coordinates the performance of tasks and the sharing of computing and hardware resources. A “software application” (also referred to as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and execute to perform one or more specific tasks. A “data file” is a block of information that durably stores data for use by a software application.
  • The term “computer-readable medium” (also referred to as “memory”) refers to any tangible, non-transitory medium capable of storing information (e.g., instructions and data) that is readable by a machine (e.g., a computer). Storage devices suitable for tangibly embodying such information include, but are not limited to, all forms of physical, non-transitory computer-readable memory, including, for example, semiconductor memory devices, such as random access memory (RAM), EPROM, EEPROM, and Flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magneto-optical disks, DVD-ROM/RAM, and CD-ROM/RAM.
  • A “window” is a visual area of a display that typically includes a user interface. A window typically displays the output of a software process and typically enables a user to input commands or data for the software process. A window that has a parent is called a “child window.” A window that has no parent, or whose parent is the desktop window, is called a “top-level window.” A “desktop” is a system-defined window that paints the background of a graphical user interface (GUI) and serves as the base for all windows displayed by all software processes.
  • A “data sink” (referred to herein simply as a “sink”) is any of a device (e.g., a computer), part of a device, or software that receives data.
  • A “data source” (referred to herein simply as a “source”) is any of a device (e.g., a computer), part of a device, or software that originates data.
  • A “network node” (also referred to simply as a “node”) is a junction or connection point in a communications network. Examples of network nodes include, but are not limited to, a terminal, a computer, and a network switch. A “server” network node is a host computer on a network that responds to requests for information or service. A “client network node” is a computer on a network that requests information or service from a server.
  • A “network connection” is a link between two communicating network nodes. A “connection handle” is a pointer or identifier (e.g., a uniform resource identifier (URI)) that can be used to establish a network connection with a network resource. A “network communication” can include any type of information (e.g., text, voice, audio, video, electronic mail message, data file, motion data stream, and data packet) that is transmitted or otherwise conveyed from one network node to another network node over a network connection.
  • Synchronous conferencing refers to communications in which communicants participate at the same time. Synchronous conferencing encompasses all types of networked collaboration technologies, including instant messaging (e.g., text chat), audio conferencing, video conferencing, application sharing, and file sharing technologies.
  • A “communicant interaction” is any type of direct or indirect action or influence between a communicant and another network entity, which may include for example another communicant, a virtual area, or a network service. Examples of types of communicant interactions include communicants communicating with each other in realtime, a communicant entering a virtual area, and a communicant requesting access to a resource from a network service.
  • “Presence” refers to the ability and willingness of a networked entity (e.g., a communicant, service, or device) to communicate, where such willingness affects the ability to detect and obtain information about the state of the entity on a network and the ability to connect to the entity.
  • A “realtime data stream” is data that is structured and processed in a continuous flow and designed to be received with no delay or only imperceptible delay. Realtime data streams include digital representations of voice, video, user movements, facial expressions and other physical phenomena, as well as data within the computing environment that may benefit from rapid transmission, rapid execution, or both rapid transmission and rapid execution, including for example, avatar movement instructions, text chat, realtime data feeds (e.g., sensor data, machine control instructions, transaction streams and stock quote information feeds), screen shares, and file transfers.
  • A “physical space” is a three-dimensional real-world environment in which a communicant can be located physically.
  • A “virtual area” (also referred to as an “area” or a “place”) is a representation of a computer-managed space or scene. Virtual areas typically are one-dimensional, two-dimensional, or three-dimensional representations; although in some examples a virtual area may correspond to a single point. Oftentimes, a virtual area is designed to simulate a physical, real-world space. For example, using a traditional computer monitor, a virtual area may be visualized as a two-dimensional graphic of a three-dimensional computer-generated space. However, virtual areas do not require an associated visualization. A virtual area typically refers to an instance of a virtual area schema, where the schema defines the structure and contents of a virtual area in terms of variables and the instance defines the structure and contents of a virtual area in terms of values that have been resolved from a particular context.
  • A “persistent virtual area” is a virtual area that persists even after all communicants have disconnected from the virtual area. The state of a persistent virtual area is preserved so that it can be restored the next time a communicant connects to the virtual area. A “persistent association” between a virtual area and virtual presence apparatus is an association that persists even after all communicants and the virtual presence apparatus have disconnected from the virtual area.
  • A “virtual area application” (also referred to as a “virtual area specification”) is a description of a virtual area that is used in creating a virtual environment. A virtual area application typically includes definitions of geometry, physics, and realtime switching rules that are associated with one or more zones of the virtual area.
  • A “virtual area enabled communications application” is a client communications application that integrates realtime communications (e.g., synchronous conferencing functionalities, such as audio, video, chat, and other realtime data communications) with a virtual area.
  • A “virtual environment” is a representation of a computer-managed space that includes at least one virtual area and supports realtime communications between communicants.
  • A “position” in a virtual area refers to a location of a point or an area or a volume in the virtual area. A point typically is represented by a single set of one-dimensional, two-dimensional, or three-dimensional coordinates (e.g., x, y, z) that define a spot in the virtual area. An area typically is represented by the three-dimensional coordinates of three or more coplanar vertices that define a boundary of a closed two-dimensional shape in the virtual area. A volume typically is represented by the three-dimensional coordinates of four or more non-coplanar vertices that define a closed boundary of a three-dimensional shape in the virtual area.
  • VoIP (Voice over Internet Protocol) refers to systems and methods of delivering voice and other communications over Internet Protocol (IP) networks.
  • As used herein, the term “includes” means includes but not limited to; the term “including” means including but not limited to. The term “based on” means based at least in part on.
  • II. Communicating Between a Virtual Area and a Physical Space
  • Examples that are described herein provide systems and methods of communicating between a virtual area and a physical space. These examples bridge the virtual area into the physical space and bridge the physical space into the virtual area through virtual presence apparatus (VPA) located in the physical space. Examples of the virtual presence apparatus transduce human perceptible stimulus (e.g., audio, visual, mechanical, and other sensory stimulus) between the virtual area and the physical space such that communicant interactions in the virtual area are seamlessly integrated into the physical space and vice versa.
  • FIG. 1 shows an embodiment of an exemplary network communications environment 10 that includes a virtual presence apparatus 12 in a physical space 14, a remote client network node 16, and a virtual environment creator 18 that are interconnected by a network (not shown) that supports the transmission of a wide variety of different media types (e.g., text, voice, audio, video, and other data) between network nodes. The network connections between network nodes may be arranged in a variety of different stream handling topologies, including a peer-to-peer architecture, a server-mediated architecture, and hybrid architectures that combine aspects of peer-to-peer and server-mediated architectures. Exemplary topologies of these types are described in U.S. Pat. Nos. 7,769,806 and 7,844,724.
  • The client network node 16 includes input/output (I/O) hardware, a processor, and a computer-readable memory that stores an instance 20 of at least one virtual area enabled communications application that is executable by the processor. The communications application 20 typically provides graphical interface and communications functions for communicating with the virtual presence apparatus 12, the virtual environment creator 18, and other client network nodes in connection with one or more virtual areas. Examples of the communications applications are described in U.S. application Ser. No. 12/418,243, filed Apr. 3, 2009, U.S. application Ser. No. 12/630,973, filed Dec. 4, 2009, U.S. application Ser. No. 12/354,709, filed Jan. 15, 2009, U.S. application Ser. No. 12/509,658, filed Jul. 27, 2009, U.S. application Ser. No. 13/209,812, filed Aug. 15, 2011, and U.S. application Ser. No. 13/229,349, filed Sep. 9, 2011. The client network node 16 has a respective set of one or more sources and a respective set of one or more sinks. Exemplary sources include an audio source (e.g., an audio capture device, such as a microphone), a video source (e.g., a video capture device, such as a video camera), a chat source (e.g., a text capture device, such as a keyboard), a motion data source (e.g., a pointing device, such as a computer mouse), and other sources (e.g., file sharing source or a source of a customized real-time data stream). Exemplary sinks include an audio sink (e.g., an audio rendering device, such as a speaker or headphones), a video sink (e.g., a video rendering device, such as a display monitor), a chat sink (e.g., a text rendering device, such as a display monitor), a motion data sink (e.g., a movement rendering device, such as a display monitor), and other sinks (e.g., a printer for printing shared files, a device for rendering real-time data streams different from those already described, or software that processes real-time streams for analysis or customized display). The client network node 16 also typically includes administrative policies, user preferences (including preferences regarding the exportation of the user's presence and the connection of the user to the virtual environment creator 18 and other communicants), and other settings that define a local configuration that influences the administration of realtime connections with the virtual presence apparatus 12, the virtual environment creator 18, and other network nodes.
  • The virtual presence apparatus 12 is located in the physical space 14. In the illustrated example, the virtual presence apparatus 12 is positioned on a table 22 in a real-world conference room containing five communicants 24, 26, 28, 30, 32. The four communicants 24-30 are seated around the table 22, and the fifth communicant 32 is standing beside a real-world view screen 34. Three of the seated communicants 24-28 are operating respective virtual area enabled communications applications on their client network nodes 36, 38, 40 (e.g., mobile computers, such as laptop computers, tablet computers, and mobile phones) through which they are connected to the remote client network node 16; these three communicants 24-28 and the other two communicants 30, 32 also are connected to the remote client network node 16 through the virtual presence apparatus 12.
  • The virtual presence apparatus 12 typically includes software and hardware resources that enable the virtual presence apparatus 12 to connect to the virtual environment creator 18 and the remote client network node 16, either directly (e.g., peer-to-peer) or through a hosted network connection. In some examples, the virtual presence apparatus 12 or a network node hosting the virtual presence apparatus includes a complete or modified version of the communications application 20, which provides functions for communicating with the virtual environment creator 18 and establishing network connections and communicating realtime data streams with the client network nodes. When connected to the virtual environment creator 18, the virtual presence apparatus 12 can be registered in association with and/or logged into the one or more virtual areas. When logged into a virtual area, the virtual presence apparatus 12 transduces human perceptible stimulus (e.g., audio, visual, mechanical, and other sensory stimulus) between the client network nodes of communicants who are present in the virtual area and the physical space 14. In this way, the virtual presence apparatus 12 bridges a physical experience of the physical space 14 to communicants in the one or more virtual areas (i.e., communicants who are present in the virtual areas) and bridges communicant interactions in the one or more virtual areas to communicants in the physical space 14.
  • In the illustrated example, the virtual environment creator 18 includes at least one server network node 42 that provides a network infrastructure service environment 44 that manages sessions of the remote client network node 16 and the virtual presence apparatus 12 in one or more virtual areas 46 in accordance with respective virtual area applications 48. Each of the virtual area applications 48 is hosted by a respective one of the virtual areas 46 and includes a description of the respective virtual area 46. Communicants operating respective client network nodes connect to the virtual area applications 48 through virtual area enabled communications applications.
  • A virtual area typically includes one or more zones. A zone may be a rendered spatial extent, a set of rules applied to a spatial extent, or both. Zones may be arranged hierarchically in a virtual area, with an outermost zone (referred to herein as the “Global Governance Zone”) enclosing all other zones in the virtual area. Within the Global Governance Zone, there can be location zones (e.g., rooms of a virtual area) or smaller governance zones that enclose a group of location zones and provide regions of governance on the map. A zone definition typically also includes one or more channel definitions that describe how to create respective channels in the zone and specify the information about the channel that is published to a client network node that becomes present in the zone. A channel is always uniquely defined point-to-point and is unique to a virtual area application and a session between a client network node and the virtual area platform.
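  • As a rough illustration of the zone structure just described, the following sketch models a hierarchical virtual area whose zones carry channel definitions; the nested-dataclass representation is an assumption, since the text does not specify a schema format:
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChannelDefinition:
    name: str  # e.g., "audio"; describes how to create the channel and what
               # channel information is published to nodes arriving in the zone

@dataclass
class Zone:
    name: str
    channels: List[ChannelDefinition] = field(default_factory=list)
    children: List["Zone"] = field(default_factory=list)

# an outermost Global Governance Zone enclosing two location zones
area = Zone("Global Governance Zone", children=[
    Zone("Work Office", channels=[ChannelDefinition("audio"), ChannelDefinition("chat")]),
    Zone("Home Office", channels=[ChannelDefinition("audio")]),
])
```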
  • Examples of the types of rules that may be associated with a zone include switching rules, governance rules, and permission rules.
  • Switching rules govern realtime stream connections between network nodes that are linked to the virtual area (e.g., network nodes that are associated with objects, such as avatars, in the virtual area). The switching rules typically include a description of conditions for connecting sources and sinks of realtime data streams in terms of positions in the virtual area. Each switching rule typically includes attributes that define the realtime data stream type to which the rule applies and the location or locations in the virtual area where the rule applies. In some examples, each of the rules optionally may include one or more attributes that specify a required role of the source, a required role of the sink, a priority level of the stream, and a requested data routing topology. In some examples, if there are no explicit switching rules defined for a particular part of the virtual area, one or more implicit or default switching rules may apply to that part of the virtual area.
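  • A hedged sketch of how a switching rule might be evaluated, using the attributes enumerated above (stream type, location, optional roles, priority, and topology); the matching logic itself is an assumption, not the platform's actual rule engine:
```python
def rule_applies(rule, stream_type, zone, source_role=None, sink_role=None):
    """Test whether a switching rule governs a candidate connection."""
    if rule["stream_type"] != stream_type or rule["zone"] != zone:
        return False
    if rule.get("source_role") and rule["source_role"] != source_role:
        return False  # rule requires a source role the source lacks
    if rule.get("sink_role") and rule["sink_role"] != sink_role:
        return False  # rule requires a sink role the sink lacks
    return True

rule = {"stream_type": "audio", "zone": "Work Office",
        "priority": 1, "topology": "peer-to-peer"}  # optional attributes
assert rule_applies(rule, "audio", "Work Office")
assert not rule_applies(rule, "chat", "Work Office")
```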
  • Governance rules control who has access to resources (e.g., the virtual area itself, regions within the virtual area, and objects within the virtual area), who has access to data (e.g., data streams and other content) that is associated with the virtual area, what is the scope of that access to the data associated with the virtual area (e.g., what can a user do with the data), and what are the follow-on consequences of accessing that data (e.g., record keeping, such as audit logs, and payment requirements). In some examples, an entire virtual area or a zone of the virtual area is associated with a “governance mesh” that enables a software application developer to associate governance rules with a virtual area or a zone of a virtual area. This avoids the need to create individual permissions for every file in a virtual area and avoids the complexity that potentially could arise when the same document needs to be treated differently depending on the context.
  • A permission rule defines a respective capability requirement (e.g., for a respective action, behavior, or state) in terms of one or more capabilities, attributes, and settings, which may be persistent or transient. Examples of capabilities systems for administering permission rules are described in U.S. Provisional Patent Application No. 61/535,910, filed Sep. 16, 2011.
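  • A minimal sketch of a capability-based permission check consistent with the description above; the capability names and the set-containment test are illustrative assumptions, not the design of the cited application:
```python
def permitted(held_capabilities: set, required_capabilities: set) -> bool:
    """A capability requirement is met when every required capability is held."""
    return required_capabilities <= held_capabilities

enter_zone = {"required": {"area:member", "zone:enter"}}
assert permitted({"area:member", "zone:enter", "chat"}, enter_zone["required"])
assert not permitted({"chat"}, enter_zone["required"])
```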
  • In some examples, a virtual area is defined by a specification that includes a description of geometric elements of the virtual area and one or more rules, including switching rules and governance rules. Examples of virtual area specifications are described in U.S. application Ser. No. 12/418,243, filed Apr. 3, 2009, U.S. application Ser. No. 12/818,517, filed Jun. 18, 2010, U.S. patent application Ser. No. 12/855,210, filed Aug. 12, 2010, and U.S. Provisional Application No. 61/563,088, filed Nov. 23, 2011.
  • The network infrastructure service environment 44 typically includes one or more network infrastructure services that cooperate with the virtual area enabled communications application 20 to establish and administer network connections between the virtual presence apparatus 12, the remote client network node 16, and other network nodes. Among the network infrastructure services that are included in an exemplary embodiment of the network infrastructure service environment 44 are an account service, a security service, an area service, a rendezvous service, and an interaction service. The structure, operation, and components of such an embodiment of the network infrastructure service environment 44 are described in U.S. patent application Ser. No. 12/825,512, filed Jun. 29, 2010.
  • The virtual area enabled communications applications 20, the area applications 48, and the network infrastructure service environment 44 together provide a platform (referred to herein as “the platform”) that administers the realtime connections with network nodes in a virtual area subject to a set of constraints that control access to the virtual area instance.
  • The platform tracks communicants' realtime availabilities and activities across the different communication contexts that are defined by the area applications 48. This information is presented by the virtual area enabled communications applications to the communicants in the form of realtime visualizations that enable the communicants to make more informed network interaction decisions (e.g., when to interact with a contact) and encourage the communicants to initiate interactions with other communicants and to join contexts (e.g., an ongoing conversation between communicants) of which the communicants otherwise would not have been aware. In some embodiments, the realtime visualization includes visual cues as to the presence and activities of the communicants in the contexts of the area applications 48. The presentation of these visual cues typically depends on one or more of governance rules associated with the virtual areas 46, administrative policies, and user preferences (including preferences regarding the exportation of the user's presence and the connection of the user to areas and other communicants), which may define tiered relationship-based predicates that control access to presence information and/or network resources on a zone-by-zone basis.
  • In some embodiments, the server network node 42 remotely manages the communication sessions of the client network nodes with each other and with the virtual presence apparatus 12, and remotely configures audio and graphic rendering engines on the client network nodes, as well as switching of data streams, by sending instructions (also referred to as definitions) from the remotely hosted area applications 48 to the client network nodes in accordance with the stream transport protocol described in U.S. application Ser. No. 12/825,512, filed Jun. 29, 2010. Data is shared between the client network node 16 and other network nodes as definition records over transport protocol sockets. The communications application 20 on the client network node 16 receives configuration instructions from the server network node 42 through definition records that are received over a server session between the client network node 16 and the server network node 42. In some of these examples, the server network node 42 sends to each of the client network nodes provisioning messages that configure the client network nodes to interconnect respective data streams between active ones of their complementary sources and sinks over respective peer-to-peer (P2P) sessions in accordance with the switching rules specified in the area applications 48 and the locations where the communicants and the virtual presence apparatus are present in the virtual area 46. The client network node 16 sends content to and receives content from other network nodes through definition records that are transmitted on content-specific channels on respective sessions with the other network nodes. Data is shared in accordance with a publish/subscribe model. A stream transport service on the client network node 16 subscribes only to the data that are needed by the client network node. To subscribe, the stream transport service negotiates a channel on a session that is established with another network node. The channel is negotiated by a well-known GUID for the particular area application 48. Definition records are transmitted only when a subscriber exists on the other end of a transport protocol socket. Definition records that are received by the stream transport service are delivered to the subscribing ones of the client communications application processes on arrival. In this way, the server network node 42 connects the virtual presence apparatus 12 to the virtual area 46 so that the virtual presence apparatus 12 can bridge a physical experience of the physical space 14 to communicants in the virtual area 46 and bridge a physical experience of communicant interactions in the virtual area 46 to communicants in the physical space 14.
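  • The following sketch illustrates the publish/subscribe, definition-record flow just described, in which a channel is negotiated by well-known GUID and records are delivered only while a subscriber exists; the class and method names are hypothetical stand-ins for the stream transport service, not its actual interface:
```python
class Session:
    """One transport session; channels are keyed by well-known GUID."""
    def __init__(self):
        self.subscribers = {}  # channel GUID -> delivery callback

    def subscribe(self, channel_guid, on_record):
        self.subscribers[channel_guid] = on_record  # negotiate the channel

    def publish(self, channel_guid, definition_record):
        handler = self.subscribers.get(channel_guid)
        if handler is not None:          # records flow only to subscribers
            handler(definition_record)   # delivered on arrival

session = Session()
session.subscribe("chat-channel-guid", lambda rec: print("delivered:", rec))
session.publish("chat-channel-guid", {"type": "chat", "text": "hello"})
session.publish("audio-channel-guid", {"type": "audio"})  # dropped: no subscriber
```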
  • In the illustrated embodiment, the communications application 20 operating on the remote client network node 16 presents a respective spatial visualization 50 of the virtual area 46 in accordance with data received from the network infrastructure service environment 44. The communications application 20 also provides a graphical interface for receiving user commands and providing a spatial interface that enhances the realtime communications between the communicants. The spatial visualization 50 includes respective graphical representations 52, 54, 56, 58 (referred to herein as “avatars” or “sprites”) of the communicants who are present in the virtual area 46 in spatial relation to a graphical representation 59 of the virtual area 46. In the illustrated example, the sprites 52, 54, 56 represent the three communicants 24, 26, 28 (Beth, Fran, Art) who are seated in the physical space 14 and are operating the local client network nodes 36, 38, 40, and the sprite 58 represents the communicant (Ed) who is operating the remote client network node 16. The spatial visualization 50 may include other objects (also referred to as “props”). Examples of such objects include a view screen object 60 for interfacing with application sharing functions of the communications application 20 (as described in, e.g., U.S. application Ser. No. 12/418,270, filed Apr. 3, 2009), a table object 62 for interfacing with file sharing functions of the communications application 20, and a VPA object 64 for interfacing with the virtual presence apparatus 12 in the physical space 14. The spatial visualization 50 typically is presented in a respective window 66 that is generated by the communications application 20 on a “desktop” or other system-defined base window on the display hardware 68 of the remote client network node 16.
  • The activities of the communicants in the virtual area 46 may be inferred from the activities on the various communication channels over which the respective client network nodes are configured to communicate. The activities on the communication channels are represented in the graphical interface by visual cues that are depicted in association with the graphical representations 52-58 of the communicants. For example, the “on” or “off” state of a communicant's local speaker channel is depicted by the presence or absence of a headphones graphic 61 on the communicant's sprite: when the communicant's speakers are on, the headphones graphic 61 is present (see sprites Beth and Fran); when the speakers are off, the headphones graphic 61 is absent. Similarly, the “on” or “off” state of the communicant's microphone is depicted by the presence or absence of a microphone graphic 63 on the communicant's sprite, together with a series of concentric circles 65 that dynamically radiate away from the sprite in expanding waves: when the microphone is on, the microphone graphic 63 and the radiating concentric circles 65 are present (see sprite Fran); when the microphone is off, both are absent. The headphones graphic 61 and the microphone graphic 63 thus provide visual cues of the activity states of the communicant's sound playback and microphone devices. In addition or alternatively, the current activity on a communicant's microphone channel is indicated by a dynamic visualization that lightens and darkens the communicant's avatar in realtime to reflect the presence or absence of audio data on the microphone channel. Thus, whether or not their local speakers are turned on, communicants can determine when another communicant is speaking by the “blinking” of the coloration of that communicant's avatar.
  • The activity on a communicant's text chat channel is depicted by the presence or absence of the hand graphic 67 adjacent the communicant's sprite (see sprite Ed). Thus, when a communicant is transmitting text chat data to another network node the hand graphic 67 is present, and when a communicant is not transmitting text chat data the hand graphic 67 is not present. In some embodiments, text chat data is transmitted only when keyboard keys are depressed, in which case the visualization of the communicant's text channel appears as a flashing on and off of the hand graphic 67.
  • The view screen prop 60 is associated with application sharing functionality of the platform that enables communicants to share applications operating on their respective client network nodes. The application sharing functionality is invoked by activating a view screen (e.g., by single-clicking the view screen object with an input device). In some embodiments, the platform provides visual cues that indicate whether or not a communicant is sharing an application over an application sharing channel. In response to a communicant's selection of the view screen prop, the communicant's sprite automatically is moved to a position in the graphical representation of the virtual area that is adjacent the view screen prop. The position of a communicant's sprite adjacent the view screen prop indicates that the communicant currently is sharing or is about to share an application with the other communicants in the virtual area. Other communicants in the virtual area subscribe to the shared application data by selecting the view screen prop in their respective views of the spatial visualization 50. The avatar of each communicant who is viewing a shared application is depicted with a pair of “eyes” to indicate that the represented communicant is viewing the content being shared in connection with the view screen props. The graphical depiction of the view screen prop is changed depending on whether or not an active application sharing session is occurring. For example, the depicted color of the view screen may change from a brighter color during an active application sharing session to a darker color when there is no application sharing taking place. Examples of the application sharing process are described in connection with FIGS. 26-28 of U.S. patent application Ser. No. 12/354,709, filed Jan. 15, 2009, and in U.S. patent application Ser. No. 12/418,270, filed Apr. 3, 2009.
  • FIG. 2 shows an example of a method by which the virtual environment creator 18 and the virtual presence apparatus 12 bridge the virtual area into the physical space and bridge the physical space into the virtual area. In accordance with this method, the virtual presence apparatus 12 transforms human perceptible physical stimuli in a physical space into physical space data streams of different respective data stream types (FIG. 2, block 101). The server network node 42 publishes respective ones of the physical space data streams in a zone of a virtual area in a virtual communications environment (FIG. 2, block 103). The zone defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone. The server network node 42 establishes a respective presence in the zone for each of multiple communicants associated with respective client network nodes. Each of one or more of the client network nodes publishes one or more respective client data streams (FIG. 2, block 105). The server network node 42 provisions data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the client network nodes, transmitting respective ones of the published client data streams to respective ones of the client network nodes, and transmitting respective ones of the published client data streams to the physical space (FIG. 2, block 107). The virtual presence apparatus 12 transforms the published client data streams transmitted to the physical space into human perceptible physical stimuli in the physical space (FIG. 2, block 109).
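  • A compressed, runnable sketch of the FIG. 2 method, with the server-side steps tagged by their block numbers; every name and data shape here is a stand-in for the machinery described above:
```python
def transform_stimuli(stimuli):                      # block 101 (apparatus side)
    """Turn sensed physical stimuli into typed physical space data streams."""
    return [{"type": s["type"], "payload": s["data"]} for s in stimuli]

def provision(published_streams, subscriptions):     # blocks 103 and 107 (server side)
    """Connect each published stream to every node subscribed to its type."""
    return [(stream, node)
            for stream in published_streams
            for node, wanted_types in subscriptions.items()
            if stream["type"] in wanted_types]

streams = transform_stimuli([{"type": "audio", "data": b"\x00\x01"}])
connections = provision(streams, {"ed-node": {"audio"}, "paul-node": {"chat"}})
print(connections)  # only ed-node receives the audio stream
```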
  • FIG. 3 shows an example of a process that is implemented by an example of the virtual presence apparatus 12. In accordance with this method, the virtual presence apparatus 12 transmits a globally unique identifier of the virtual presence apparatus 12 for association with a virtual area by a network service administering the virtual area (FIG. 3, block 90). The virtual presence apparatus 12 generates output data from human perceptible stimulus in a physical space (FIG. 3, block 92), and transmits the output data in connection with the virtual area (FIG. 3, block 94). The virtual presence apparatus 12 receives input data associated with the virtual area (FIG. 3, block 96), and generates human perceptible stimulus in the physical space from the input data (FIG. 3, block 98).
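  • And a corresponding sketch of the FIG. 3 process from the apparatus's point of view, with each step tagged by its block number; the transport and transducer objects are assumed, duck-typed stand-ins rather than a specified interface:
```python
def vpa_session(guid, transport, input_transducer, output_transducer):
    """One session of the virtual presence apparatus loop (FIG. 3)."""
    transport.send({"register": guid})                     # block 90: associate with area
    while transport.connected():
        sensed = input_transducer.read()                   # block 92: sense physical space
        if sensed:
            transport.send({"guid": guid, "data": sensed}) # block 94: transmit output data
        incoming = transport.receive()                     # block 96: data from the area
        if incoming:
            output_transducer.render(incoming["data"])     # block 98: render in the space
```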
  • FIG. 4 shows an example 70 of the virtual presence apparatus 12 that includes an input transducer 72, an output transducer 74, a communication interface 76, a computer-readable memory 78 that stores a globally unique identifier of the virtual presence apparatus 70, and a processor 80. The communication interface 76 transmits an output signal 86 and receives an input signal 88.
  • The virtual presence apparatus 70 may be implemented in a variety of different ways. In some examples, the virtual presence apparatus 70 is composed of multiple components (e.g., two or more of a speaker, a microphone, a light projector, and a camera) that are integrated into a unitary device. In other examples, the virtual presence apparatus 70 is composed of a central hub (e.g., a virtual area enabled network switch or router) that controls and configures one or more separate and distinct peripheral components (e.g., a speakerphone, a digital projector, a camera, and a remote-controlled laser pointer) that are connected to respective ports (e.g., Universal Serial Bus (USB) ports) of the hub. Examples of the virtual presence apparatus 70 may have different industrial designs. In some examples, the virtual presence apparatus 70 has the form factor of a desktop appliance (e.g., a form factor similar to that of a computer, speakerphone, a digital projector, or a network hub), whereas other examples of the virtual presence apparatus 70 have robotic form factors (e.g., a remote-controlled electro-mechanical machine, which may or may not have a humanoid appearance).
  • The input transducer 72 generates output data from human perceptible stimulus 82 in the physical space 14. The input transducer 72 typically generates the output data from human perceptible stimulus that is broadcast into the physical space. Depending on the desired communication application, the input transducer 72 may generate output data from one or more human perceptible stimuli, including for example audio, visual, mechanical, and other sensory stimuli. In some examples, the input transducer 72 includes one or more of an acoustic-to-electric transducer (e.g., a microphone, which may be a component of a telephony device, such as a mobile phone or a VoIP phone, or a headset), a light-to-electric transducer (e.g., a camera, such as a still image camera, a video camera, and a scanner that scans physical documents into scanned images), an electric-to-electric transducer (e.g., a touchscreen or other touch-sensitive sensor equipped with resistive, capacitive, surface acoustic wave, optical, or other touch-sensitive technologies), a mechanical-to-electric transducer (e.g., a tactile or other pressure- or force-sensitive transducer, or a texture-sensitive transducer), and a chemical-to-electric transducer (e.g., an olfactory sensor that is capable of detecting one or more odorants).
  • The output transducer 74 generates human perceptible stimulus 84 in the physical space 14. The output transducer 74 typically broadcasts the human perceptible stimulus into the physical space. Depending on the desired communications application, the output transducer 74 may generate one or more human perceptible stimuli from input data, including for example audio, visual, mechanical, and other sensory stimuli. In some examples, the output transducer 74 includes one or more of an electric-to-acoustic transducer (e.g., a speaker, which may be a component of a telephony device, such as a mobile phone or a VoIP phone, or a headset), an electric-to-light transducer (e.g., an image projector such as a digital projector, a touchscreen display, a light beam projector such as a laser pointer, or a three-dimensional hologram generator), an electric-to-mechanical transducer (e.g., a haptic transducer, an electric motor that moves mechanical components, such as light sources and robot tools, and other components in the physical space, and a printer that outputs printed documents or three-dimensional objects), and an electric-to-chemical transducer (e.g., an electric odorant delivery system).
  • The virtual presence apparatus 70 bridges communicant activity in the physical space 14 into the virtual area 46 and bridges communicant activity in the virtual area into the physical space 14. In this process, the virtual presence apparatus 70 typically encodes output data generated by the input transducer 72 from communicant activity in the physical space 14 into the output signal 86 that is sent to the remote network node 16 connected to the virtual area, and decodes the input signal 88, which is received from the remote network node 16 and relates to communicant activity in the virtual area, into input data that is sent to the output transducer 74.
  • The virtual presence apparatus 12 typically is registered with the server network node 42 before the virtual presence apparatus 12 can be logged into a server session with the server network node 42. In some examples, the virtual presence apparatus 12 includes hardware and software resources that enable it to register directly with the server network node 42.
  • For example, FIG. 5A shows an example of a network connection between the server network node 42 and an example 85 of the virtual presence apparatus 12 that can register directly with the server network node 42.
  • In other examples, a host computer (e.g., one of the client network nodes 36-40 in the physical space) registers the virtual presence apparatus 12 with the server network node 42. FIG. 5B shows an example of a network connection between the server network node 42 and an example 87 of the virtual presence apparatus 12 that is hosted by a client network node 89 in the physical space 14. In this example, the client network node 89 submits to the server network node 42 registration information and login requests on behalf of both the virtual presence apparatus 87 and the communicant who uses the client network node 89 to access the virtual area 46.
  • FIG. 5C shows an example of a network connection between the server network node 42 and an example 93 of the virtual presence apparatus 12 that includes one or more integral components of a client network node 91 in the physical space 14. The virtual presence apparatus 93 typically includes one or more hardware and software resources of the client network node 91. In some examples, the virtual presence apparatus 93 includes software that resides in the memory of the client network node 91 and is executed by the processor of the client network node 91 to leverage hardware resources of the client network node 91 in the process of integrating communicant interactions in the virtual area into the physical space. In some of these examples, hardware resources of the client network node 91 are partitioned (e.g., by a hypervisor or virtual machine monitor that reserves a respective set of client system resources for each partition or virtual machine) into a set of hardware resources that are used by the virtual area enabled communications application 20 and a separate set of hardware resources that constitute elements of the virtual presence apparatus 93. For example, a peripheral headset may be reserved for use by the virtual area enabled communications application 20, whereas separate microphone and speaker hardware may be reserved for use by the virtual presence apparatus 93. In some examples, certain hardware resources of the client network node 91 (e.g., a camera, a hard drive memory, or an optical disk drive) that are allocated to the virtual presence apparatus 93 are associated with respective objects in the virtual area 46, allowing those resources to be shared by other communicants in the virtual area 46.
  • During registration, the virtual presence apparatus 12 transmits (either directly, or indirectly through a network node hosting the virtual presence apparatus 12) registration data through its communication interface to the server network node 42. The registration data typically includes the globally unique identifier of the virtual presence apparatus 12 and configuration data. The configuration data may include, for example, a device type identifier, an indication whether the virtual presence apparatus 12 should be associated with an existing virtual area or a new virtual area, one or more conditions on the availability of the associated virtual area (e.g., the associated virtual area is accessible to communicants conditioned on the virtual presence apparatus 12 being present in or logged into the virtual area), a specification of the source and sink capabilities of the virtual presence apparatus 12, and a specification of a graphical representation of the virtual presence apparatus 12. Based on the registration data, the server network node 42 generates one or more database records that store the registration information, including the identifier of the virtual presence apparatus 12 and an identifier of the new or existing virtual area. The one or more database records create a persistent association between the virtual presence apparatus 12 and the virtual area. The virtual presence apparatus identifier typically is registered with the server network node 42 independently of any communicant identifier. The server network node 42 determines the source and sink capabilities of the virtual presence apparatus, either directly (e.g., from the configuration data) or indirectly (e.g., by using the device type identifier of the virtual presence apparatus 12 as an index into a device capabilities table).
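  • An illustrative registration payload and handler following the fields listed above; the structure, field names, and example values are assumptions, and the database is a plain dictionary for brevity:
```python
registration = {
    "vpa_guid": "a1b2c3-example",           # globally unique identifier
    "device_type": "conference-hub",        # device type identifier
    "area": {"use_existing": True, "area_id": "area-46"},
    "availability": {"area_requires_vpa_login": True},
    "sources": ["audio", "video"],          # source/sink capabilities
    "sinks": ["audio", "projector"],
    "representation": "vpa-icon.svg",       # graphical representation
}

def register(database, reg):
    """Create the persistent apparatus-to-area association; note that no
    communicant identifier appears anywhere in the stored record."""
    database[reg["vpa_guid"]] = {
        "area_id": reg["area"]["area_id"],
        "sources": reg["sources"],
        "sinks": reg["sinks"],
    }

database = {}
register(database, registration)
```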
  • In some examples, the virtual presence apparatus 12 is associated with a virtual area independently of any particular communicant such that it is available as a resource for any communicant who is present in the virtual area. In this way, the virtual presence apparatus functions as a prop or a fixture of the associated virtual area, which is tied to the physical location of the virtual presence apparatus. In some examples, the association between the virtual presence apparatus and the virtual area is such that the virtual area is inaccessible until after the virtual presence apparatus has been logged into the network infrastructure service environment 44. In some of these examples, communicants cannot establish a presence in the associated virtual area (and, in some cases, may not even be presented with an option for entering the virtual area such that the virtual area does not appear to exist) until after the virtual presence apparatus has been connected to the virtual area by the network infrastructure service environment 44. These examples allow communicants to establish a persistent association between a virtual area and a particular physical space by leaving the virtual presence apparatus in the same physical space, thereby leveraging the persistent spatial association with the real-world location of the physical space to further strengthen the bridging between the virtual area and the physical space.
  • After the virtual presence apparatus 12 has been registered with the server network node 42, the virtual presence apparatus 12 can be logged into the network infrastructure service environment 44. The virtual presence apparatus 12 can either log itself into the network infrastructure service environment 44 automatically each time it is turned on or it can be logged into the network infrastructure service environment 44 by a host computer. If the associated virtual area already has been instantiated, the server network node 42 sends provisioning instructions for establishing respective sessions between the virtual presence apparatus 12 and the client network nodes of the communicants who are present in the virtual area and for updating the appearance of the virtual area to include a graphical representation of the virtual presence apparatus 12 in the graphical interfaces displayed on the client network nodes. If the associated virtual area has not yet been instantiated, the server network node 42 instantiates the associated virtual area so that communicants operating respective client network nodes can access the virtual area.
  • The provisioning instructions sent by the server network node 42 are used to establish communication sessions between the client network nodes and the virtual presence apparatus. In some examples, data is shared between the client network nodes and the virtual presence apparatus 12 as definition records over transport protocol sockets. The client network nodes and the virtual presence apparatus 12 receive content from each other through definition records that are received on content-specific channels on respective peer-to-peer sessions. Data is shared in accordance with a publish/subscribe model. A stream transport service on each of the client network nodes and the virtual presence apparatus 12 subscribes only to the data that are needed. To subscribe, the stream transport service negotiates a channel on a session that is established with another network node. The channel is negotiated by a well-known GUID for the particular area application 48. Definition records are transmitted only when a subscriber exists on the other end of a transport protocol socket. Definition records that are received by the stream transport service are delivered to the subscribing ones of the local communication processes on arrival. Examples of the structure and operation of the stream transport service and the data sharing communication sessions are described in U.S. patent application Ser. No. 12/825,512, filed Jun. 29, 2010.
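  • As a rough illustration of this publish/subscribe pattern, the following Python sketch models a stream transport service that delivers definition records only to existing subscribers on a GUID-keyed channel. The class and method names are invented for this example and are not taken from the referenced application:

```python
class StreamTransportService:
    """Toy model of the publish/subscribe data sharing described above:
    definition records flow on a GUID-keyed channel only while a
    subscriber exists on the other end."""

    def __init__(self):
        self.subscribers = {}  # channel GUID -> list of delivery callbacks

    def subscribe(self, channel_guid, deliver):
        # Negotiate a channel on an established session by its well-known GUID.
        self.subscribers.setdefault(channel_guid, []).append(deliver)

    def publish(self, channel_guid, definition_record):
        # Transmit only when at least one subscriber exists.
        for deliver in self.subscribers.get(channel_guid, []):
            deliver(definition_record)  # delivered to subscribers on arrival

transport = StreamTransportService()
transport.subscribe("area-app-guid", lambda rec: print("received:", rec))
transport.publish("area-app-guid", {"channel": "audio", "state": "active"})
```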
  • In some examples, the virtual presence apparatus 12 transmits the output data corresponding to the human perceptible stimulus in the physical space to the client network nodes in connection with the virtual area. In this process, the virtual presence apparatus 12 typically processes the output data and configures its communication interface to incorporate the output data into the output signal that is sent to a client network node. In some examples, the output signal includes at least one of the globally unique identifier of the virtual presence apparatus, an identifier of the virtual area, and optionally an identifier of a zone of the virtual area. The output signal typically is free of any communicant identifier (i.e., an identifier that identifies a particular communicant). In these examples, the virtual area (or the specified zone of the virtual area) serves as a termination point for one or more data streams that represent physical stimuli in a physical space occupied by the virtual presence apparatus 12, where the data streams are published by the virtual presence apparatus in the virtual area/zone, communicants who are present in the virtual area/zone respectively are able to subscribe to one or more of the data streams, and the server network node 42 provisions data stream connections for the data streams that are subscribed to by respective ones of the communicants who are present in the virtual area/zone.
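  • A minimal sketch of the output signal structure described above, assuming a dictionary-style envelope; the field names are illustrative, and the notable property is that the envelope carries apparatus, area, and optional zone identifiers but no communicant identifier:

```python
def make_output_signal(apparatus_guid, area_id, payload, zone_id=None):
    """Wrap stimulus data captured in the physical space for transmission
    to client network nodes. Note that the envelope carries no
    communicant identifier."""
    signal = {
        "apparatus_id": apparatus_guid,  # globally unique apparatus identifier
        "area_id": area_id,              # identifier of the virtual area
        "payload": payload,              # e.g., encoded audio frames
    }
    if zone_id is not None:
        signal["zone_id"] = zone_id      # optional zone of the virtual area
    return signal

signal = make_output_signal("apparatus-guid", "area-46", b"...audio...", zone_id="zone-1")
```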
  • The virtual presence apparatus 12 typically determines the input data from the input signal that is received through its communication interface from a respective client network node that is connected to the virtual area. The input signal typically includes at least one of a globally unique identifier of the respective client network node and an identifier of the virtual area. The virtual presence apparatus 12 typically derives input data from the input signal and passes the input data to an output transducer, which generates human perceptible stimulus in the physical space.
  • FIG. 6 shows an example of a method that is implemented by an example of the server network node 42 for administering communications between a virtual area and a physical space. In accordance with this method, the server network node 42 creates a persistent association between virtual presence apparatus in a physical space and a virtual area (FIG. 6, block 100). The apparatus has an apparatus source of a respective data stream content type and an apparatus sink of a respective data stream content type. The server network node 42 establishes a respective presence in the virtual area for a communicant operating a client network node connected to the virtual area (FIG. 6, block 102). The client network node has a client sink that is complementary to the apparatus source and a client source that is complementary to the apparatus sink. The server network node 42 administers a respective connection between each active pair of complementary sources and sinks of the client network node and the apparatus in association with the virtual area, where each connection supports transmission of the respective data stream content type between the apparatus and the client network node (FIG. 6, block 104).
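  • The pairing of complementary sources and sinks in block 104 can be sketched as a simple set intersection, assuming each node advertises its sources and sinks as sets of data stream content types (this data model is hypothetical, not specified by the method):

```python
def active_pairs(client, apparatus):
    """Yield one connection per active pair of complementary sources and
    sinks, keyed by data stream content type (e.g., a client audio source
    feeding an apparatus audio sink, and vice versa)."""
    for content_type in client["sources"] & apparatus["sinks"]:
        yield (content_type, "client->apparatus")
    for content_type in apparatus["sources"] & client["sinks"]:
        yield (content_type, "apparatus->client")

client = {"sources": {"audio", "chat"}, "sinks": {"audio", "video"}}
apparatus = {"sources": {"audio", "video"}, "sinks": {"audio"}}
for connection in sorted(active_pairs(client, apparatus)):
    print("provision:", connection)  # audio both ways, video apparatus->client
```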
  • In some examples of the method of FIG. 6, the association between the virtual presence apparatus 12 and the virtual area is independent of any particular communicant. The server network node 42 typically receives a globally unique identifier of the virtual presence apparatus 12, and associates the identifier with the virtual area. In some examples, the virtual area includes multiple zones each of which supports establishment of a respective presence for one or more communicants and defines a respective persistent context for realtime communications between the client network nodes of communicants who are present in the zone. In some of these examples, the server network node 42 creates a persistent association between the virtual presence apparatus and a respective one of the zones of the virtual area.
  • The server network node 42 establishes a respective presence in the zone for the virtual presence apparatus. In some examples, the server network node 42 establishes the presence for the virtual presence apparatus in response to receipt of a login request identifying the virtual presence apparatus. The virtual presence apparatus or a network node (e.g., a central hub or a client network node) that is hosting the virtual presence apparatus may generate the login request for the virtual presence apparatus. In some examples, the server network node 42 establishes a presence for both the virtual presence apparatus and a communicant in response to respective login requests that are sent by the same client network node. In some examples, in response to receipt of a login request that includes the identifier of the virtual presence apparatus, the server network node 42 instantiates the virtual area so that it is accessible to communicants.
  • The server network node 42 typically associates the virtual presence apparatus 12 with an object in the virtual area. The server network node 42 typically creates an object that represents the virtual presence apparatus 12 in the virtual area. The object typically is associated with an interface for interacting with the virtual presence apparatus 12. In some examples, the server network node 42 associates the object with a graphical representation of the virtual presence apparatus 12. In some examples, the graphical representation of the virtual presence apparatus 12 includes a brand that is associated with the virtual presence apparatus. The brand may include a name, term, design, symbol, or any other feature that identifies a source (e.g., manufacturer or seller) of the virtual presence apparatus. The server network node 42 transmits to each of the communicants who are present in the zone a respective specification of a visualization of graphical representations of the object and the avatars in the virtual area. The client network nodes use the specifications to display respective graphical representations of the virtual presence apparatus 12 and the communicants in spatial relation to a graphical representation of the virtual area.
  • In some examples, the object representing the virtual presence apparatus is associated with a particular communicant and the visualization of the virtual area shows an association between a graphical representation of the object and the particular communicant. In some of these examples, the visualization of the virtual area shows the graphical representation of the object associated with a graphical representation of the avatar representing the particular communicant. For example, the virtual presence apparatus may be personal gear (e.g., a human interface device, such as a headset, or other personal device) that is carried or worn by the particular communicant, and the visualization may show a graphical representation of the gear as a decoration or embellishment on the graphical representation of the particular communicant's avatar (e.g., showing a graphical representation of a headset on the communicant's avatar). In some examples, the visualization of the virtual area shows the graphical representation of the object representing the virtual presence apparatus associated with a location in the virtual area that is assigned to the particular communicant. For example, the virtual presence apparatus may be personal gear (e.g., a personal printer, scanner, telephony device, or a memory resource of a personal computer) that is assigned or belongs to the particular communicant, and the visualization may show a graphical representation of the gear in a room (e.g., an office or personal room) of the virtual area that is assigned to the particular communicant. In some of these examples, the server network node 42 may determine the style used to represent the personal gear in the visualization based on configuration information received from the particular communicant (e.g., an indication that the graphical representation of the gear should be associated with the communicant's avatar or the communicant's designated default zone, such as the communicant's home zone or office) or automatically based on a predefined mapping between personal gear category types and presentation styles (e.g., headsets are represented as graphical decorations on the respective communicants' avatars, whereas hard drives of personal computers are represented as icons in the respective communicants' designated default zones).
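  • The predefined mapping between personal gear category types and presentation styles mentioned above might be modeled as a simple lookup that the communicant's own configuration can override; the category names and style labels below are invented for illustration:

```python
# Hypothetical predefined mapping between personal gear category types and
# presentation styles; category names and style labels are invented.
GEAR_PRESENTATION_STYLES = {
    "headset": "avatar_decoration",   # drawn on the communicant's avatar
    "printer": "default_zone_icon",   # icon in the communicant's default zone
    "scanner": "default_zone_icon",
    "hard_drive": "default_zone_icon",
}

def presentation_style(gear_type, communicant_preference=None):
    # Configuration received from the communicant overrides the mapping.
    if communicant_preference is not None:
        return communicant_preference
    return GEAR_PRESENTATION_STYLES.get(gear_type, "default_zone_icon")
```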
  • In some examples, the server network node 42 transmits to the client network node a specification of visual cues for displaying indications of respective states of a source of the virtual presence apparatus 12. Based on a determination that the source of the virtual presence apparatus is in an active state, the server network node 42 transmits to the client network node a specification of a first visual cue, and based on a determination that the source of the virtual presence apparatus is in an inactive state, the server network node 42 transmits to the client network node a specification of a second visual cue that is different from the first visual cue. In some examples, the specifications of the first and second visual cues are provided in respective definition records.
  • The server network node 42 administers realtime communications between the respective network nodes of the communicants who are present in the zone and provisions at least one data stream connection between the virtual presence apparatus 12 and one or more of the network nodes of the communicants who are present in the zone. In some examples, the server network node 42 administers respective connections between each active pair of complementary sources and sinks of the client network node and the apparatus. These connections bridge communicant activity in the physical space into the virtual area and bridge communicant activity in the virtual area into the physical space. In some of these examples, the server network node 42 administers connections that relay data corresponding to communicant activity in the physical space from the source of the virtual presence apparatus 12 to the client network node. In some of these examples, the server network node 42 administers connections that relay data corresponding to communicant activity in the virtual area from the client network node to the sink of the virtual presence apparatus 12. In some examples, the virtual presence apparatus 12 publishes data streams of different data stream types, and the server network node 42 provisions the client network nodes to receive data streams of different data stream types that are published by the particular physical apparatus. In some examples, the server network node 42 provisions a data stream connection between a client network node and the virtual presence apparatus in response to a request from the client network node to subscribe to data published by the particular physical apparatus. In some examples, the server network node 42 provisions a data stream connection between a client network node of a particular communicant and the virtual presence apparatus automatically upon entry of the particular communicant into the zone.
  • In some examples, the source of the virtual presence apparatus 12 corresponds to a transducer that transforms human perceptible stimulus that is broadcasted in the physical space into output data of the respective data stream content type. In some examples, the sink of the virtual presence apparatus 12 corresponds to a transducer that transforms input data of the respective data stream content type into human perceptible stimulus that is broadcasted into the physical space. In some examples, the source of the virtual presence apparatus 12 includes a microphone and the sink of the virtual presence apparatus 12 includes a speaker. The microphone generates output voice data from human voice sound projected into the physical space. The speaker projects human voice sound into the physical space based on input voice data associated with the virtual area. In some of these examples, the server network node 42 administers connections that relay the output voice data from the apparatus to the client network node and that relay the input voice data from the client network node to the apparatus. In some examples, the source of the virtual presence apparatus 12 includes a camera that captures images of a scene in the physical space 14 and generates output image data from the captured images. In some of these examples, the server network node 42 administers a connection that relays the output image data from the virtual presence apparatus 12 to the client network node. In some examples, the sink of the virtual presence apparatus 12 includes a projector that projects images into the physical space. In some of these examples, the server network node 42 administers a connection that relays input control data for controlling the projecting from the client network node to the virtual presence apparatus 12. In some examples, the sink of the virtual presence apparatus 12 includes a laser pointer that projects a laser beam into the physical space. In some of these examples, the server network node 42 administers a connection that relays input control data for controlling the projecting of the laser beam from the client network node to the virtual presence apparatus 12.
  • Thus, in some examples, based on communicant input in connection with the object representing the virtual presence apparatus 12, the server network node 42 administers a connection between an audio source of the client network node and an audio sink of the virtual presence apparatus 12. In some examples, based on communicant input in connection with the object, the server network node 42 administers a connection between an application sharing source of the client network node and an image projection sink of the virtual presence apparatus 12. In some examples, based on communicant input in connection with the object, the server network node 42 administers a connection between a laser pointer control source of the client network node and a laser pointer control sink of the apparatus.
  • In some examples, the virtual presence apparatus 12 is located in a particular physical space, and the server network node 42 locates the object representing the virtual presence apparatus 12 in a particular one of the zones of the virtual area according to a mapping between the particular physical space and the particular zone. In some of these examples, the mapping associates an identifier of the physical space with an identifier of the particular zone, creating a persistent association between the particular physical space and the particular zone of the virtual area. In some of these examples, the mapping additionally associates an identifier of the virtual presence apparatus 12 with the identifier of the physical space. In some examples, the visualization of the virtual area shows the particular zone with a label that connotes a name associated with the physical space.
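  • A sketch of the mapping described above, using made-up identifier strings; the two tables persistently associate an apparatus with a physical space, and that physical space with a zone of the virtual area:

```python
# Invented identifier strings; the two tables persistently associate an
# apparatus with a physical space, and that space with a virtual area zone.
apparatus_to_space = {"apparatus:conf-speakerphone": "space:main-conf-room"}
space_to_zone = {"space:main-conf-room": "zone:main-conference"}

def zone_for_apparatus(apparatus_id):
    """Locate the object representing the apparatus in the zone mapped to
    the physical space that the apparatus occupies."""
    return space_to_zone[apparatus_to_space[apparatus_id]]

print(zone_for_apparatus("apparatus:conf-speakerphone"))  # zone:main-conference
```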
  • In some examples, the server network node 42 establishes a respective presence in the virtual area for a particular communicant based on a determination that the particular communicant is in the physical space 14. In some examples, the server network node 42 receives location data (e.g., Global Positioning System (GPS) data) that is associated with the particular communicant (e.g., by a GPS component of a mobile device, such as a mobile phone or other mobile communication device), and determines that the particular communicant is in the physical space based on comparison of the received location data with location data associated with the physical space. In some examples, the server network node 42 receives audio data from the source of the virtual presence apparatus 12, and associates the audio data with a communicant in the physical space based on comparison of the audio data with one or more voice data records associated with respective communicants. The voice records typically correspond to voiceprints (also referred to as voice templates or voice models) that are created from features that are extracted from the recorded speech of known communicants in accordance with a speaker recognition enrollment process. Each voiceprint is associated with the identity of a particular communicant. The server network node 42 typically associates the audio data with the communicant in response to a determination that features extracted from the audio data correspond to the voiceprint previously associated with the communicant. In this way, the server network node 42 can automatically identify communicants who are in the physical space without requiring them to log into the network infrastructure service environment 44 through respective client network nodes. Once a particular communicant in the physical space 14 has been identified, the server network node 42 can automatically establish a presence for that communicant in the virtual area associated with the virtual presence apparatus 12 and track utterances from that communicant in the audio data captured by the virtual presence apparatus such that visual cues indicative of the state of that communicant's voice (e.g., speaking or silent) can be presented in the spatial visualization of the virtual area that is displayed to the remote communicant on the remote client network node 16.
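  • As a toy illustration of the voiceprint matching step (not the actual speaker recognition method, which is unspecified here), the following sketch compares a feature vector extracted from captured audio against enrolled voiceprints using cosine similarity; the vectors, threshold, and names are all assumptions:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Enrolled voiceprints created during a speaker recognition enrollment
# process; real systems would use richer acoustic features than these
# made-up three-dimensional vectors.
voiceprints = {"alice": (0.9, 0.1, 0.3), "bob": (0.2, 0.8, 0.5)}

def identify_speaker(features, threshold=0.85):
    """Return the communicant whose voiceprint best matches the features
    extracted from the captured audio, or None if no match clears the
    (arbitrarily chosen) threshold."""
    best = max(voiceprints, key=lambda c: cosine_similarity(features, voiceprints[c]))
    if cosine_similarity(features, voiceprints[best]) >= threshold:
        return best  # presence can then be established for this communicant
    return None

print(identify_speaker((0.88, 0.15, 0.32)))  # alice
```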
  • FIG. 7 shows an example of a method that is performed by an example of the communications application 20 for communicating between a virtual area and a physical space. In accordance with this method, the communications application 20 displays a graphical interface that includes a graphical representation of the virtual area that supports establishment of respective presences of communicants operating respective client network nodes, a graphical representation of each of the communicants who is present in the virtual area, and a graphical representation of an object associated with an apparatus (e.g., the virtual presence apparatus 12) in the physical space (FIG. 7, block 110). The apparatus has an apparatus sink that is complementary to the client source and an apparatus source that is complementary to the client sink. The communications application 20 establishes a respective connection between each active pair of complementary sources and sinks of the client network node and the apparatus in association with the virtual area, where each connection supports transmission of the respective data stream content type between the apparatus and the client network node (FIG. 7, block 112). The communications application 20 presents interaction controls associated with the object for interacting with communicants who are present in the physical space (FIG. 7, block 114).
  • In some examples of the method of FIG. 7, the graphical representation of the virtual area corresponds to a virtualized representation of the physical space. In some of these examples, the virtualized representation connotes the real-world physical space. For example, the virtualized representation may have a virtual presentation that resembles one or more distinctive visual features of the real-world physical space or the virtualized representation may include a descriptive name or other label that is associated with the real-world physical space.
  • In some examples, the communications application 20 receives from a network service administering the virtual area a specification for displaying the graphical representation of the object in spatial relation to the graphical representation of the virtual area.
  • In some examples, the communications application 20 shows in the graphical interface indications of respective states of the apparatus source of the virtual presence apparatus 12 in connection with the graphical representation of the object. In some of these examples, the process of showing the state indications involves displaying a first visual cue when the virtual presence apparatus source is in an active state, and displaying a second visual cue that is different from the first visual cue when the virtual presence apparatus source is in an inactive state.
  • In some examples, based on communicant input in connection with the object, the communications application 20 establishes a connection between an audio source of the client network node and an audio sink of the virtual presence apparatus 12.
  • In some examples, based on communicant input in connection with the object, the communications application 20 establishes a connection between an application sharing source of the client network node and an image projection sink of the virtual presence apparatus 12.
  • In some examples, based on communicant input in connection with the object, the communications application 20 establishes a connection between a laser pointer control source of the client network node and a laser pointer control sink of the virtual presence apparatus 12.
  • In some examples, the source of the virtual presence apparatus 12 includes a microphone and the sink of the virtual presence apparatus 12 includes a speaker. The microphone generates output voice data from human voice sound projected into the physical space, the speaker projects human voice sound into the physical space from input voice data associated with the virtual area, and the communications application 20 establishes connections that relay the output voice data from the virtual presence apparatus 12 to the client network node and that relay the input voice data from the client network node to the virtual presence apparatus 12.
  • In some examples, the source of the virtual presence apparatus 12 includes a camera that captures images of a scene in the physical space and generates output image data from the captured images, and the communications application 20 establishes a connection that relays the output image data from the virtual presence apparatus 12 to the client network node.
  • In some examples, the sink of the virtual presence apparatus 12 includes a projector that projects images into the physical space, and the communications application 20 establishes a connection that relays input control data for controlling the projecting from the client network node to the virtual presence apparatus 12.
  • In some examples, the sink of the virtual presence apparatus 12 includes a laser pointer that projects a laser beam into the physical space, and the communications application 20 establishes a connection that relays input control data for controlling the projecting of the laser beam from the client network node to the virtual presence apparatus 12.
  • FIG. 8 shows an example of a graphical interface 120 that is generated by the communications application 20 on a client network node (e.g., client node 16) for interfacing a user with an example 122 of the virtual presence apparatus 12 in the physical space 14.
  • The graphical interface 120 includes a toolbar 124 and a viewer panel 126. The toolbar 124 includes a headphone control 128 for toggling on and off the local speakers of the client network node, a microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60. The user also may select the view screen object 60 to initiate an application sharing session in the virtual area 46. The viewer panel 126 typically shows communicant selectable content being rendered by the client network node. Examples of such content include a spatial visualization of the virtual area 46 (currently shown) and application content (e.g., web service content rendered by a web browser application such as Microsoft® Internet Explorer®, or document content being rendered by a document processing application such as the Microsoft® Word® or PowerPoint® software applications).
  • In the example shown in FIG. 8, the virtual presence apparatus 122 is a virtual area enabled speakerphone, which is represented by a speakerphone object 138 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126. The virtual presence apparatus 122 includes a microphone that generates output voice data from human voice sounds projected into the physical space 14 and a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area. The "on" or "off" state of the speakerphone microphone is depicted in the spatial visualization of the virtual area by the presence or absence of a series of concentric circles 140 that dynamically radiate away from the speakerphone object 138 in a series of expanding waves. When the microphone is on, the radiating concentric circles 140 are present and, when the microphone is off, the radiating concentric circles 140 are absent. Additionally or alternatively, the current activity state of the speakerphone microphone channel is indicated by a dynamic visualization that lightens and darkens the speakerphone object 138 in realtime to reflect the presence or absence of audio data on the speakerphone microphone channel. Thus, the user can determine when a communicant in the physical space 14 is speaking by the "blinking" of the coloration of the speakerphone object 138.
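  • The state-to-cue mapping for the speakerphone object might look like the following sketch, where the two cues correspond to the radiating circles and the lightened/darkened coloration described above (the function and field names are illustrative):

```python
def speakerphone_presentation(mic_on, audio_present):
    """Map speakerphone microphone state to the two visual cues described
    above: radiating concentric circles for the on/off state, and a
    lightened or darkened coloration for realtime channel activity."""
    return {
        "radiating_circles": mic_on,  # present while the microphone is on
        "coloration": "light" if audio_present else "dark",  # "blinks" with speech
    }

print(speakerphone_presentation(mic_on=True, audio_present=False))
```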
  • FIG. 9 shows an example of a graphical interface 150 that is generated by the communications application 20 on a client network node (e.g., client node 16) for interfacing a user with an example 152 of the virtual presence apparatus 12 in the physical space 14.
  • The graphical interface 150 includes the toolbar 124 and the viewer panel 126 of the graphical interface 120 shown in FIG. 8. The toolbar 124 includes the headphone control 128 for toggling on and off the local speakers of the client network node, the microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60. The user may select the view screen object 60 to initiate an application sharing session in the virtual area 46. The viewer panel 126 typically shows communicant selectable content that is rendered by the client network node.
  • In the example shown in FIG. 9, the virtual presence apparatus 152 is a virtual area enabled device that integrates speakerphone and digital projector functionalities. The virtual presence apparatus 152 includes a microphone that generates output voice data from human voice sounds projected into the physical space 14, a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area, and a projector that projects light into the physical space 14 responsive to input data transmitted by the client network node in connection with the virtual area 46.
  • The virtual presence apparatus 152 is represented by a projector object 154 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126. In this example, when the user selects the projector object 154, the communications application 20 modifies the graphical interface 150 to include a Share button 134 and a Stop button 136 in the tool bar 124, and sets the viewer panel 126 to display the contents of an application being shared. The user initiates an application sharing session in the physical space 14 by selecting the Share button 134. In response to the selection of the share button 134, the communications application 20 provides an interface that enables the user to select an application to share (e.g., Microsoft® PowerPoint®), sets the viewer panel 126 to display the contents being rendered by the selected application, and streams screen share data to the virtual presence apparatus 152, which projects the screen share data onto the real-world view screen 34 in the physical space 14. Examples of systems and methods of generating and streaming screen share data are described in U.S. patent application Ser. No. 12/418,270, filed Apr. 3, 2009. The user can terminate the application sharing session in the physical space 14 by selecting the Stop button 136.
  • FIG. 10 shows an example of a graphical interface 160 that is generated by the communications application 20 on a client network node (e.g., client node 16) for interfacing a user with an example 162 of the virtual presence apparatus 12 in the physical space 14. In this example, the communicant 34 is giving a presentation on a white board 158 in the physical space 14.
  • The graphical interface 160 includes the toolbar 124 and the viewer panel 126 of the graphical interface 120 shown in FIG. 8. The toolbar 124 includes the headphone control 128 for toggling on and off the local speakers of the client network node, the microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60. The user may select the view screen object 60 to initiate an application sharing session in the virtual area 46. The viewer panel 126 typically shows communicant selectable content being rendered by the client network node.
  • In the example shown in FIG. 10, the virtual presence apparatus 162 is a virtual area enabled device that integrates a speakerphone, a digital projector, and a camera. The speakerphone includes a microphone that generates output voice data from human voice sounds projected into the physical space 14, and a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area. The projector projects light (e.g., images, shapes, lines, and spots) into the physical space 14 responsive to input data transmitted by the client network node in connection with the virtual area 46. In some examples, the projector is a digital image projector. In other examples, the projector is a remote-controlled laser pointer. The camera captures images of a scene in the physical space 14 (e.g., images of the whiteboard 158) and generates output image data from the captured images. The camera may be implemented by any type of imaging device that is capable of capturing one-dimensional or two-dimensional images of a scene. The camera typically is a digital video camera.
  • The virtual presence apparatus 162 is represented by a projector-camera object 164 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126. In this example, when the user selects the projector-camera object 164, the communications application 20 modifies the graphical interface 160 to include a Share button 134 and a Stop button 136 in the tool bar 124, and sets the viewer panel 126 to display the images captured by the virtual presence apparatus 162 in the physical space 14. The user initiates a sharing session in the physical space 14 by selecting the Share button 134. In response to the selection of the Share button 134, the communications application 20 sets the viewer panel 126 to display the images captured by the virtual presence apparatus 162, provides an interface that enables the user to provide inputs in relation to the images displayed in the viewer panel (e.g., superimpose graphical content, such as predesigned or hand drawn images or comments, onto the images), and streams data describing the inputs to the virtual presence apparatus 162, which projects the streamed data onto the whiteboard 158 in the physical space 14. The user can terminate the sharing session in the physical space 14 by selecting the Stop button 136.
  • In the illustrated example, the communications application 20 provides drawing tools (e.g., the pencil tool 166) that allow the user to superimpose lines, shapes (e.g., the ellipse 168), and other graphical content onto the image of the view screen 34 captured by the camera component of the virtual presence apparatus 162.
  • In examples in which the projector component of the virtual presence apparatus 162 is a digital projector, the communications application 20 may stream data describing the user inputs to the virtual presence apparatus 162. In examples in which the projector is a remote-controlled laser pointer, the communications application 20 may convert the user inputs into control data for controlling the movement of the remote-controlled laser pointer in the physical space 14. In this way, the user can interact with the communicants in the physical space 14 in a substantive way. For example, the user can provide comments or other visual indications that highlight or direct a viewer's attention to specific parts of the presentation being given by the communicant 34 in connection with the whiteboard 158. In some examples, the graphical interface 160 includes additional controls for streaming application sharing data from the client network node to the virtual presence apparatus 162 for projection onto the whiteboard 158 or other surface in the physical space 14, as described above in connection with the example shown in FIG. 9.
  • FIG. 11 shows an example of a graphical interface 170 that is generated by the communications application 20 on a client network node for interfacing a user with an example 172 of the virtual presence apparatus 12 in the physical space 14. In this example, the communicant 34 is giving a presentation on a white board 158 in the physical space 14.
  • The graphical interface 170 includes the toolbar 124 and the viewer panel 126 of the graphical interface 120 shown in FIG. 8. The toolbar 124 includes the headphone control 128 for toggling on and off the local speakers of the client network node, the microphone control 130 for toggling on and off the local microphone of the client network node, and a view screen button 132 for setting the viewer panel 126 to content being shared in connection with the view screen object 60. The user may select the view screen object 60 to initiate an application sharing session in the virtual area 46. The viewer panel 126 typically shows communicant selectable content being rendered by the client network node.
  • In the example shown in FIG. 11, the virtual presence apparatus 172 is a virtual area enabled device that integrates a speakerphone and a camera. The speakerphone includes a microphone that generates output voice data from human voice sounds projected into the physical space 14, and a speaker that projects human voice sounds into the physical space 14 based on input voice data associated with the virtual area. The camera captures images of a scene in the physical space 14 and generates output image data from the captured images. The camera may be implemented by any type of imaging device that is capable of capturing one-dimensional or two-dimensional images of a scene. The camera typically is a digital video camera.
  • The virtual presence apparatus 172 is represented by a camera object 174 in the spatial visualization of the virtual area 46 that is shown in the viewer panel 126. In this example, when the user selects the camera object 174, the communications application 20 modifies the graphical interface 170 to include a View button 176 and a Stop button 178 in the tool bar 124, and sets the viewer panel 126 to display the images captured by the virtual presence apparatus 172 in the physical space 14. The user initiates a viewing session in the physical space 14 by selecting the View button 176. In response to the selection of the View button 176, the communications application 20 sets the viewer panel 126 to display the images captured by the virtual presence apparatus 172, provides an interface 180 that enables the user to control the view of the physical space that is captured by the camera component of the virtual presence apparatus, and streams data describing the control inputs to the virtual presence apparatus 172, which moves the camera based on the streamed data. In the illustrated example, the communications application 20 provides a navigation control tool 180 that allows the user to control the pan and tilt of the camera component of the virtual presence apparatus 172. In this way, the remote communicant can interact with the physical space 14 in a substantive way (e.g., see different views of the persons and activities in the physical space 14). The user can terminate the viewing session in the physical space 14 by selecting the Stop button 178.
  • In some examples, different elements of the graphical interfaces described above in connection with the examples shown in FIGS. 8-11 are incorporated into a single graphical interface that may be used to interact with the virtual presence apparatus 162, which integrates a speakerphone, a digital projector, and a camera. In these examples, the graphical interface provides independent control over the respective functionalities of the speakerphone, the digital projector, and the camera to enable application sharing, image projection of comments and other annotations, and camera viewing modes of operation.
  • In some examples, the virtual environment creator 18 enhances the immersive connections between virtual area locations (e.g., zones) and physical spaces by creating persistent associations between the virtual area locations and the respective physical spaces. These persistent associations typically are stored in a table or other data structure that maps each real-world location to a respective zone. In some of these examples, the virtual environment creator 18 reinforces these associations with visual cues in the visualizations of the virtual area locations that connote the real-world physical spaces (e.g., by having a virtual presentation that resembles one or more distinctive visual features of the real-world physical space or by including a descriptive name or other label that is associated with the real-world physical space).
  • FIG. 12 shows an example of a two-dimensional visualization of a virtual area 200 ("Sococo HQ"). The Sococo HQ virtual area includes a lobby 202, a Main conference room 204, a West Conference room 206, an East Conference room 208, a West Nook zone 210, an East Nook zone 212, a Courtyard zone 214, and sixteen offices. The conference rooms 204-208 include respective viewscreen objects 216-230, table objects 232, 234, and 236, and objects representing respective virtual presence apparatus 238, 240, 242, and support realtime audio, chat, and application and network resource sharing communications between the network nodes in the same conference room. Each of the offices includes respective viewscreen objects (not shown) and a respective telephony object (not shown) and supports realtime audio, chat, and application and network resource sharing communications between the network nodes in the same office. Each of the telephony objects supports shared dial-in and dial-out telephony communications as described in U.S. patent application Ser. No. 13/165,729, filed Jun. 21, 2011, and communicants interacting with the telephony objects are represented by avatars decorated with a graphical representation of a telephone (see, e.g., the avatar 215 in Carl's Office). Each of the West Nook 210, East Nook 212, and Lobby 202 zones supports realtime audio and chat communications between the network nodes in the same zone.
  • In some examples, the conference rooms 204-208 are associated with different real-world physical spaces. The different real-world physical spaces may be physically connected to or proximate one another (e.g., rooms connected by a common structure, such as rooms in an office building, or disconnected rooms of related co-located structures, such as rooms in a distributed office building complex) or they may be physically remote from one another (e.g., rooms in separate and distinct real-world office buildings, which may be in the same or different geographic regions). The virtual environment creator 18 reinforces these associations with visual cues in the visualizations of the virtual area locations that connote the corresponding real-world physical spaces. In the example shown in FIG. 12, each of the virtual conference rooms 204-208 is labeled with a respective name (e.g., Main, West Conference, and East Conference) that corresponds to the name that is used to identify the corresponding real-world physical space. In addition, virtual presentations of the virtual conference rooms 204-208 include respective features (e.g., the number and placement of virtual view screens 216-230, virtual plants 240, 242 and virtual artwork 244) that correspond to distinctive visual features of the associated real-world physical spaces. The resulting visualization of the virtual area 200 allows a user to see multiple concurrent independent conversations and other interactions that are occurring in different physical spaces in a single view in which the interactions are organized according to a spatial metaphor that allows the user to quickly learn who is meeting with whom and the contexts of those meetings (as defined by the zones in which the meetings are occurring). In addition, the objects 238-242 in the virtual conference rooms 204-208 provide interfaces for communicants in the virtual area 200 to interact with the associated virtual presence apparatus and thereby be bridged into the corresponding physical spaces.
  • FIG. 13 shows an example of a method by which the server network node 42 manages communications between virtual area zones and multiple respective real-world locations via respective physical apparatus. In accordance with this method, the server network node 42 administers zones of one or more virtual areas in a virtual communications environment (FIG. 13, block 190). Each of respective ones of the zones defines a respective persistent context for realtime communications between client network nodes of respective communicants who are present in the zone. In the process of administering the zones, the server network node 42 administers realtime communications between the respective network nodes of co-present communicants in respective ones of the zones. For each of multiple physical apparatus in respective real-world locations, the server network node 42 establishes a respective presence for the physical apparatus in a respective one of the zones based on mappings between the respective real-world location and the respective zone, and creates a respective object that represents the physical apparatus in the respective zone and is associated with a respective interface for communicant interaction with the physical apparatus (FIG. 13, block 192). The server network node 42 transmits to each of one or more of the respective client network nodes a respective specification of a visualization of a spatial layout of the zones, graphical representations of the objects in their respective zones of presence, and graphical representations of avatars representing communicants in their respective zones of presence (FIG. 13, block 194). The server network node 42 provisions respective data stream connections between respective ones of the physical apparatus and respective ones of the client network nodes (FIG. 13, block 196).
  • In some examples, the server network node 42 establishes the respective presence of each of respective ones of the physical apparatus in response to receipt of a respective login request that is generated by and identifies the respective physical apparatus.
  • For each of one or more of the physical apparatus, the server network node 42 publishes in the respective zone of presence of the physical apparatus one or more physical space data streams that include respective representations of human perceptible physical stimuli in the respective real-world location. In the process of provisioning the respective data stream connections, the server network node 42 provisions data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the client network nodes for transforming into human perceptible stimuli. Each of one or more of the client network nodes typically publishes one or more respective client data streams. In this case, the server network node 42 provisions data stream connections for transmitting respective ones of the published client data streams to respective ones of the client network nodes for transforming into human perceptible stimuli, and transmitting respective ones of the published client data streams to respective ones of the physical apparatus for transforming into human perceptible stimuli in the respective real-world locations.
  • In some examples, a particular one of the physical apparatus is operable to perform a respective function in its respective real-world location in response to data transmitted by a particular one of the network nodes on a respective one of the data stream connections that is provisioned based on a request from the particular network node referencing the object representing the particular physical apparatus. In some examples, the particular physical apparatus sends notifications of events relating to the respective function that is performable by the particular physical apparatus. Based on a notification of an event relating to the respective function that is performable by the particular physical apparatus, the server network node 42 sends a notification of the event to the particular one of the client network nodes of a respective one of the communicants. For example, the particular physical apparatus may include a printer, in which case the server network node 42 may send to the particular network node a notification that a document has been printed. The particular physical apparatus may include a facsimile machine, in which case the server network node 42 may send to the particular network node a notification of an incoming facsimile. The particular physical apparatus may include a telephony device, in which case the server network node 42 may send to the particular network node a notification of an incoming telephone call.
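  • One plausible shape for this notification relay, assuming a toy in-memory presence index; the PresenceIndex class and all identifiers below are invented for illustration:

```python
class PresenceIndex:
    """Toy in-memory index of which client callbacks are present in each zone."""
    def __init__(self):
        self.zones = {}  # zone id -> list of notify callbacks

    def nodes_in(self, zone):
        return self.zones.get(zone, [])

zone_of_apparatus = {"fax-1": "zone:work-office"}  # invented identifiers

presence = PresenceIndex()
presence.zones["zone:work-office"] = [lambda msg: print("notify client:", msg)]

def relay_event(apparatus_id, event):
    """Relay an event notification (e.g., incoming facsimile, document
    printed, incoming call) from a physical apparatus to the client
    network nodes of communicants present in the apparatus's zone."""
    for notify in presence.nodes_in(zone_of_apparatus[apparatus_id]):
        notify({"apparatus": apparatus_id, "event": event})

relay_event("fax-1", "incoming facsimile")
```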
  • The server network node 42 may provision a variety of different data stream connections between respective ones of the client network nodes and the physical apparatus. For example, the particular physical apparatus may include a printer and, based on a request from a particular client network node referencing the object representing the particular physical apparatus, the server network node 42 may provision at least one data stream connection for the particular network node to print a document. The particular physical apparatus may include a facsimile machine and, based on a request from a particular client network node referencing the object representing the particular physical apparatus, the server network node 42 may provision at least one data stream connection for the particular network node to one of send a facsimile and receive a facsimile. The particular physical apparatus may include a telephony device and, based on a request from a particular client network node referencing the object representing the particular physical apparatus, the server network node 42 may provision at least one data stream connection for the particular network node to one of place an outgoing telephone call and receive an incoming telephone call.
  • FIG. 14 shows an example of a network communications environment 300 that includes network resources that may be connected over a network 302 to an example of a client network node 304 by the virtual environment creator 18 according to mappings 305 between the network resources and zones of one or more virtual areas. The virtual environment creator 18 sends a specification to the client network node 304 for generating an example of a graphical user interface 306 that includes a spatial visualization that partitions the network resources into zones according to the mappings 305. In this example, the visualization shows a first zone 344 (i.e., a Work Office zone) that is mapped to a real-world work office 314 of a user (Ed in this example) and a second zone 346 (i.e., a Home Office zone) that is mapped to the user's real-world home office 322. In some examples, the Work Office zone 344 and the Home Office zone 346 are zones of a common virtual area (e.g., the user's Business virtual area). In other examples, the Work Office zone 344 and the Home Office zone 346 are zones of different virtual areas (e.g., a Work virtual area and a Home virtual area).
  • In the example shown in FIG. 14, the network resources include: a facsimile machine 308, a telephony device 310 (e.g., a SIP phone), and a network video camera 312 that are located in the user's work office 314; and a printer 316, a telephony device 318, and a scanner 320 that are located in the user's home office 322.
  • The graphical user interface 306 includes a toolbar 326, a viewer panel 328 of the type described above in connection with FIGS. 8-11, a people panel 330, and a chat panel 331.
  • The viewer panel 328 includes a canvas area for displaying visual content. For example, the viewer panel 328 displays virtual area visualizations that are rendered by the virtual area enabled communications application 20, network resource content that is rendered by a web browser application such as Microsoft® Internet Explorer®, application sharing content that is being shared by the user or another communicant in the virtual area, and visual content received from virtual presence apparatus in the respective real-world locations that are linked to virtual area zones that are associated with the user. In the illustrated example, the viewer panel 328 is operating in a map view mode that shows respective visualizations of the Work Office zone 344 and the Home Office zone 346. The visualization of the Work Office zone 344 includes graphical representations of viewscreen objects 338, 340, a table object 354, a telephony object 356, (on the left-hand side from top to bottom) objects 357 that respectively represent the facsimile machine 308, the phone 310, and the camera 312 in the real-world work office 314, and graphical representations of the communicants who are present in the virtual Work Office zone 344. The visualization of the Home Office zone 346 includes graphical representations of the viewscreen object 342, a table object 358, a telephony object 360, (on the left-hand side from top to bottom) objects 361 that respectively represent the printer 316, the phone 318, and the scanner 320 in the real-world home office 322, and graphical representations of the communicants who are present in the virtual Home Office zone 346. Each of the telephony objects 356-360 supports shared dial-in and dial-out telephony communications with one or more public switched telephone network (PSTN) devices 303 over the PSTN 305, as described in U.S. patent application Ser. No. 13/165,729, filed Jun. 21, 2011.
  • The toolbar 326 includes a headphone control 332 for toggling on and off the local speakers of the client network node, a microphone control 334 for toggling on and off the local microphone of the client network node, and one or more view screen buttons 336 for setting the viewer panel 328 to show content being shared in connection with respective view screen objects 338, 340, 342 in the user's current zone of presence (e.g., the Work Office zone 344 or the Home Office zone 346).
  • The people panel 330 depicts the realtime availabilities and activities of some or all of the user's contacts across the different communication contexts defined by the Work Office zone 344 and the Home Office zone 346. In the example shown in FIG. 14, the people panel 330 shows Ed's contacts segmented into a Work Office section 348, a Home Office section 350, and a Contacts section 352. The Work Office section 348 shows graphical representations, respective states, and realtime activities of the communicants who are present in the Work Office zone 344 (i.e., Ed, Paul, and David); the Home Office section 350 shows graphical representations, respective states, and realtime activities of the communicants who are present in the Home Office zone 346 (i.e., Josh and Matt); and the Contacts section 352 shows all or a selected portion of Ed's contacts who are not represented in any of the other sections 348, 350. Examples of the people panel 330 are described in U.S. patent application Ser. No. 13/209,812, filed Aug. 15, 2011, and U.S. patent application Ser. No. 12/354,709, filed Jan. 15, 2009.
  • The chat panel 331 shows a chat interface for a persistent virtual chat area for interactions occurring in connection with the user's current zone of presence. In the example shown in FIG. 14, the user (Ed) is present in the Work Office zone 344; therefore, the chat panel 331 shows the persistent virtual chat area for text chat and other interactions occurring in the Work Office zone 344. Examples of the chat panel 331 are described in U.S. patent application Ser. No. 13/209,812, filed Aug. 15, 2011, and U.S. patent application Ser. No. 12/354,709, filed Jan. 15, 2009.
  • The virtual environment creator 18 manages communications between communicants who are present in the virtual Work Office and Home Office zones 344, 346 and the physical apparatus 308-312, 316-320 in their respective real-world locations in accordance with the mappings 305 between the zones 344, 346 and the physical apparatus 308-312, 316-320. In this way, communicants in the virtual Work Office zone 344 are able to interact with any of the facsimile machine 308, the phone 310, and the camera 312 in the real-world Work Office 314 via the interfaces provided in connection with the graphical representations 357 of those apparatus in the visualization of the virtual Work Office zone 344; similarly, communicants in the virtual Home Office zone 346 are able to interact with any of the printer 316, the phone 318, and the scanner 320 in the real-world Home Office 322 via the interfaces provided in connection with the graphical representations 361 of those apparatus in the visualization of the virtual Home Office zone 346.
  • In addition, the virtual environment creator 18 bridges real-world notifications (e.g., notifications of events, alerts, and the like) that are generated by the physical apparatus 308-312, 316-320 in their respective real-world locations 314, 322 into the virtual zones 344, 346, and bridges responses to those notifications received in connection with the virtual zones 344, 346 into the real-world locations 314, 322 of the physical apparatus 308-312, 316-320. In the illustrated example, the facsimile machine 308 generates in the real-world Work Office 314 a notification that a facsimile was received from a particular fax number. The facsimile machine 308 also sends a fax receipt notification to the virtual environment creator 18. The virtual environment creator 18 relays the fax receipt notification 370 to the user and to other communicants who are present in the virtual Work Office zone 344. The user may click the View Fax button 372 in the notification window to cause the received facsimile to be displayed in the viewer panel 328. In another example, the phone 318 generates in the real-world Home Office 322 a notification of an incoming call from a particular phone number. The phone 318 also sends an incoming call notification to the virtual environment creator 18. The virtual environment creator 18 relays the incoming call notification 374 to the user and to other communicants who are present in the virtual Home Office zone 346. The user may click the Answer Call button 376 in the notification window to cause the user's headset to be connected to receive the incoming call.
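  • In spirit, this bridging works like the two-way relay sketched below: a device event fans out to the communicants present in the linked zone, and a communicant's response (such as clicking View Fax 372 or Answer Call 376) is relayed back toward the device. The class and method names are assumptions, not the patent's implementation.

```python
# Two-way notification bridge between physical apparatus and virtual zones.
class NotificationBridge:
    def __init__(self, zone_occupants):
        self.zone_occupants = zone_occupants  # zone ref -> list of communicants

    def device_to_zone(self, device_id, zone_ref, event):
        # Fan the device event out to everyone present in the linked zone.
        return [(who, {"from": device_id, **event})
                for who in self.zone_occupants.get(zone_ref, [])]

    def zone_to_device(self, who, device_id, action):
        # Relay a communicant's response back toward the device.
        return f"{who} -> {device_id}: {action}"

bridge = NotificationBridge({344: ["Ed", "Paul", "David"], 346: ["Josh", "Matt"]})
for who, alert in bridge.device_to_zone("fax-308", 344, {"type": "fax-received"}):
    print(who, alert)
print(bridge.zone_to_device("Ed", "fax-308", "view-fax"))
```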
  • In addition to providing notifications of events, alerts, and the like, the virtual environment creator 18 also provides visual cues indicating the states and realtime activities of the real-world physical apparatus 308-312, 316-320. In some examples, the "on" or "off" states of the real-world physical apparatus 308-312, 316-320 are indicated by having two different presentations of their respective graphical representations 357, 361 (e.g., a first or brighter coloration when a physical apparatus is turned on, and a darker or dimmed coloration when the physical apparatus is turned off). In some examples, the current activity states of the real-world physical apparatus 308-312, 316-320 are indicated by having two different presentations of their respective graphical representations 357, 361 (e.g., using a static graphical representation of a physical apparatus when the physical apparatus currently is inactive, and using a dynamic graphical representation, such as a blinking of the coloration of the graphical representation, when the physical apparatus currently is active).
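  • The state-to-presentation rule just described reduces to a small pure function, sketched here with assumed attribute names: the power state drives coloration and the activity state drives animation.

```python
# Choose a presentation for a device's graphical representation from its
# power and activity states.
def presentation(powered_on: bool, active: bool) -> dict:
    return {
        "coloration": "bright" if powered_on else "dimmed",
        "animation": "blinking" if powered_on and active else "static",
    }

print(presentation(powered_on=True, active=True))    # bright, blinking
print(presentation(powered_on=False, active=False))  # dimmed, static
```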
  • In these ways, the network resources in the real-world Work Office and Home Office are available both physically (via their respective physical interfaces in their respective real-world locations) and virtually (via their respective virtual interfaces in their respective virtual locations). In addition, the presentation of the virtual representations of the physical apparatus according to the spatial metaphor shown in FIG. 14 gives users a way to organize their network resources that is more intuitive and effective than traditional non-spatial, directory-based visualizations of network resources.
  • In some examples, the virtual environment creator 18 administers virtual areas based on signals received from intelligent personal gear that is associated with communicants. In these examples, the personal gear is able to infer information about a communicant's state (e.g., that a headset is on the communicant's head, that the communicant is proximate to his client network node, and that the communicant is located in a particular physical space), determine state change events based on those inferences, and report those state change events to the virtual environment creator 18, which reflects them in the virtual area representation.
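  • One plausible shape for the gear's reporting logic, shown only as an assumption-laden sketch, is to diff the currently inferred state against the last reported state and emit only the changed fields as state change events.

```python
# Diff the gear's inferred state against its last report and emit one
# event per changed field; the event schema is hypothetical.
def state_events(previous: dict, current: dict):
    return [
        {"field": key, "old": previous.get(key), "new": value}
        for key, value in current.items()
        if previous.get(key) != value
    ]

prev = {"headset_worn": False, "near_node": True, "space": "home-office-322"}
curr = {"headset_worn": True, "near_node": True, "space": "work-office-314"}
for event in state_events(prev, curr):
    print(event)  # each event would be reported to the virtual environment creator
```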
  • In some of these examples, the server network node 42 administers a virtual area in a virtual communications environment. The virtual area includes one or more zones, where each zone defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone. For each zone, the server network node administers realtime communications between the respective network nodes of communicants who are co-present in the zone. For each of respective ones of the communicants who are present in the virtual area, the server network node 42 transmits a respective specification of a visualization of the virtual area that includes graphical representations of the one or more zones and avatars that respectively represent the communicants in the one or more zones in which they respectively have presence. From sensing apparatus co-located with a particular one of the communicants in a physical space, the server network node 42 receives state information describing a physical state of the particular communicant. Based on the state information, the server network node updates the specification of the visualization of the virtual area and the avatars and transmits the updated specification to each of respective ones of the communicants who are present in the virtual area.
  • In some examples, the state information describes the current real-world location of the particular communicant, and in the process of updating the visualization specification the server network node 42 provides an indication of the current real-world location of the particular communicant in the visualization. In some of these examples, the process of updating the visualization specification includes locating the avatar representing the particular communicant in a zone of the virtual area associated with the current real-world location of the particular communicant. In some of these examples, the process of updating the visualization specification includes providing a descriptive label indicative of the current real-world location of the particular communicant in association with the graphical representation of the avatar representing the particular communicant.
  • In some examples, the state information describes the state of the particular communicant in relation to a physical device that is associated with the particular communicant, and the server network node 42 updates the graphical representation of the avatar representing the particular communicant based on the state of the particular communicant in relation to the physical device. In some of these examples, the physical device is a headset. Based on a determination that the state information indicates that the particular communicant is wearing the headset, the server network node 42 includes a graphical representation of a headset with the graphical representation of the avatar of the particular communicant.
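  • Taken together, the location and headset examples amount to applying received state fields to the avatar's entry in the visualization specification. The sketch below assumes a simple dictionary representation and a hypothetical location-to-zone mapping.

```python
# Apply state information to an avatar: place and label it by real-world
# location, and decorate it when a headset is worn.
def update_avatar(avatar: dict, state: dict, location_to_zone: dict) -> dict:
    avatar = dict(avatar)  # leave the caller's copy untouched
    if "location" in state:
        location = state["location"]
        avatar["zone"] = location_to_zone.get(location, avatar.get("zone"))
        avatar["label"] = f"@{location}"
    if "headset_worn" in state:
        avatar["decorations"] = ["headset"] if state["headset_worn"] else []
    return avatar

mapping = {"work-office-314": 344, "home-office-322": 346}
print(update_avatar({"name": "Ed", "zone": 346},
                    {"location": "work-office-314", "headset_worn": True},
                    mapping))
```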
  • In some examples, the state information describes a physical relationship between the particular communicant and another one of the communicants who are present in the virtual area. This information may be obtained from a variety of different detection apparatus that are able to detect when the communicants are in the same physical space or when one communicant is within a certain distance of the other communicant. In some of these examples, the server network node 42 updates the visualization specification by updating the graphical representations of the avatars of the particular communicant and the other communicant to reflect the physical relationship between the particular communicant and the other communicant. For example, based on a determination that the state information indicates that the particular communicant and the other communicant are co-located in a shared real-world location, the server network node 42 may include respective indications that the particular communicant and the other communicant are physically co-located with the graphical representations of the avatars of the particular communicant and the other communicant.
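  • As a final sketch, co-location can be reflected by grouping avatars whose reported state places them in the same physical space and annotating each member of a multi-person group; the grouping key and annotation field are assumptions.

```python
# Mark avatars as physically co-located when their reported state puts
# them in the same real-world space.
def mark_colocated(avatars: list, state_by_name: dict) -> list:
    by_space = {}
    for avatar in avatars:
        space = state_by_name.get(avatar["name"], {}).get("space")
        if space:
            by_space.setdefault(space, []).append(avatar)
    for group in by_space.values():
        if len(group) > 1:
            for avatar in group:
                avatar["colocated_with"] = sorted(
                    other["name"] for other in group if other is not avatar)
    return avatars

avatars = [{"name": "Ed"}, {"name": "Paul"}, {"name": "Josh"}]
states = {"Ed": {"space": "work-office-314"}, "Paul": {"space": "work-office-314"}}
print(mark_colocated(avatars, states))
```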
  • III. Conclusion
  • Other embodiments are within the scope of the claims.

Claims (57)

1. A method, comprising:
administering a virtual area in a virtual communications environment, wherein the virtual area comprises a zone that defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone;
establishing a respective presence in the zone for each of a particular physical apparatus and a particular communicant associated with a particular network node;
creating an object representing the particular physical apparatus in the virtual area and an avatar representing the particular communicant in the virtual area, wherein the object is associated with an interface for interacting with the particular physical apparatus;
provisioning at least one data stream connection between the particular physical apparatus and the particular network node.
2. The method of claim 1, further comprising:
establishing a respective presence for each of one or more other communicants in the zone;
for each of the other communicants, creating a respective avatar representing the other communicant in the virtual area; and
administering realtime communications between the respective network nodes of the communicants present in the zone.
3. The method of claim 2, further comprising transmitting to the particular communicant and each of the other communicants a respective specification of a visualization of graphical representations of the object and the avatars in the virtual area.
4. The method of claim 3, wherein the graphical representation of the object comprises a brand associated with the particular physical apparatus.
5. The method of claim 1, wherein the establishing of the presence for the particular physical apparatus is based on a globally unique identifier of the particular physical apparatus.
6. The method of claim 1, wherein the establishing of the presence for the particular physical apparatus is performed responsive to receipt of a login request identifying the particular physical apparatus.
7. The method of claim 6, wherein the login request is generated by the particular physical apparatus.
8. The method of claim 1, wherein the particular physical apparatus comprises at least one of a printer, a scanner, a facsimile machine, and a telephony device.
9. The method of claim 1, wherein the particular physical apparatus is associated with the particular network node.
10. The method of claim 9, wherein the particular physical apparatus is an integral component of the particular network node.
11. The method of claim 10, wherein the particular physical apparatus is a memory component of the particular network node.
12. The method of claim 9, wherein the particular physical apparatus is a peripheral device linked to the particular network node.
13. The method of claim 9, wherein the establishing of the presence for the particular physical apparatus is performed responsive to receipt of a login request from the particular network node.
14. The method of claim 13, wherein the establishing of the presence for the particular communicant is performed responsive to receipt of a login request from the particular network node.
15. The method of claim 9, wherein the creating comprises associating the object with the particular communicant, and further comprising generating a specification of a visualization of the virtual area showing an association between a graphical representation of the object and the particular communicant, and transmitting the specification to the particular network node.
16. The method of claim 15, wherein the visualization of the virtual area shows the graphical representation of the object associated with a graphical representation of the avatar representing the particular communicant.
17. The method of claim 15, wherein the visualization of the virtual area shows the graphical representation of the object associated with a location in the virtual area that is assigned to the particular communicant.
18. The method of claim 1, wherein the zone of the virtual area serves as a termination point for one or more data streams that represent physical stimuli in a physical space occupied by the particular physical apparatus, the data streams are published by the particular physical apparatus in the zone, communicants who are present in the zone respectively are able to subscribe to one or more of the data streams, and the provisioning comprises provisioning data stream connections for the data streams that are subscribed to by respective ones of the communicants who are present in the zone.
19. The method of claim 1, wherein the provisioning comprises provisioning an audio data stream connection for transmitting audio data from a microphone source of the particular physical apparatus to a sound rendering sink of the particular network node, and provisioning an audio data stream connection for transmitting audio data from a microphone source of the particular network node to a sound rendering sink of the particular physical apparatus.
20. The method of claim 1, wherein the provisioning comprises provisioning an image data stream connection for transmitting image data from a camera source of the particular physical apparatus to an image renderer sink associated with an image output device of the particular network node.
21. The method of claim 1, wherein the provisioning comprises provisioning an image data stream connection for transmitting image data from the particular network node to an image renderer sink associated with an image output device of the particular physical apparatus.
22. The method of claim 1, wherein the provisioning comprises provisioning a control data stream connection for transmitting control data from the particular network node to a camera control sink associated with a camera of the particular physical apparatus.
23. The method of claim 1, wherein the provisioning comprises provisioning a control data stream connection for transmitting control data from the particular network node to a pointer control sink associated with a pointer of the particular physical apparatus.
24. The method of claim 1, wherein the provisioning is performed in response to a request from the particular network node to subscribe to data published by the particular physical apparatus.
25. The method of claim 1, wherein the provisioning is performed automatically upon entry of the particular communicant into the zone.
26. The method of claim 1, wherein the particular physical apparatus publishes data streams of different data stream types, and the provisioning comprises provisioning the particular network node to receive data streams of different data stream types that are published by the particular physical apparatus.
27. The method of claim 1, further comprising:
establishing a respective presence in the zone for each of one or more other particular physical apparatus;
for each of the other particular physical apparatus, creating a respective other object representing the other particular physical apparatus in the virtual area; and
for each of the other particular physical apparatus, provisioning at least one data stream connection between the other particular physical apparatus and the particular network node.
28. The method of claim 27, wherein the provisioning is performed in response to one or more requests from the particular network node to subscribe to data published by one or more of the other particular physical apparatus.
29. The method of claim 1, wherein the virtual area comprises multiple zones, and each of respective ones of the zones defines a respective context for realtime communications between network nodes of respective communicants who are present in the zone.
30. The method of claim 29, further comprising transmitting to the particular communicant and other communicants present in the virtual area a respective specification of a visualization of a spatial layout of the zones of the virtual area and graphical representations of the object and avatars representing communicants in respective ones of the zones of the virtual area.
31. The method of claim 30, wherein the particular physical apparatus is located in a physical space, and further comprising locating the object representing the particular physical apparatus in a particular one of the zones of the virtual area according to a mapping between the physical space and the particular zone.
32. The method of claim 31, wherein the mapping associates an identifier of the physical space with an identifier of the particular zone.
33. The method of claim 32, wherein the mapping additionally associates an identifier of the particular physical apparatus with the identifier of the physical space.
34. The method of claim 32, further comprising, in the visualization, labeling the particular zone with a label that connotes a name associated with the physical space.
35. A method, comprising:
transforming human perceptible physical stimuli in a physical space into physical space data streams of different respective data stream types;
publishing respective ones of the physical space data streams in a zone of a virtual area in a virtual communications environment, wherein the zone defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone;
establishing a respective presence in the zone for each of multiple communicants associated with respective client network nodes, each of one or more of the client network nodes publishing one or more respective client data streams;
provisioning data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the client network nodes, transmitting respective ones of the published client data streams to respective ones of the client network nodes, and transmitting respective ones of the published client data streams to the physical space; and
transforming the published client data streams transmitted to the physical space into human perceptible physical stimuli in the physical space.
36. A method, comprising by a computer system:
in a zone of a virtual area in a virtual communications environment, publishing physical space data streams of different respective data stream types comprising respective representations of human perceptible physical stimuli in a physical space, wherein the zone defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone;
establishing a respective presence in the zone for each of multiple communicants associated with respective client network nodes, each of one or more of the client network nodes publishing one or more respective client data streams;
provisioning data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the client network nodes for transforming into human perceptible stimuli, transmitting respective ones of the published client data streams to respective ones of the client network nodes for transforming into human perceptible stimuli, and transmitting respective ones of the published client data streams into the physical space for transforming into human perceptible stimuli in the physical space.
37. The method of claim 36, further comprising transforming human perceptible physical stimuli in a physical space into physical space data streams of different respective data stream types, and transforming the published client data streams transmitted into the physical space into human perceptible physical stimuli in the physical space.
38. A method, comprising:
administering zones of one or more virtual areas in a virtual communications environment, wherein each of respective ones of the zones defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone, and the administering comprises administering realtime communications between the respective network nodes of co-present communicants in respective ones of the zones;
for each of multiple physical apparatus in respective real-world locations,
establishing a respective presence for the physical apparatus in a respective one of the zones based on mappings between the respective real-world location and the respective zone, and
creating a respective object that represents the physical apparatus in the respective zone and is associated with a respective interface for communicant interaction with the physical apparatus;
transmitting to each of one or more of the respective network nodes a respective specification of a visualization of a spatial layout of the zones, graphical representations of the objects in their respective zones of presence, and graphical representations of avatars representing communicants in their respective zones of presence;
provisioning respective data stream connections between respective ones of the physical apparatus and respective ones of the network nodes.
39. The method of claim 38, wherein for each of one or more of the physical apparatus, the establishing of the presence of the physical apparatus is performed responsive to receipt of a login request that identifies the particular physical apparatus and is generated by the particular physical apparatus.
40. The method of claim 38, further comprising for each of one or more of the physical apparatus, in the respective zone of presence of the physical apparatus publishing one or more physical space data streams comprising respective representations of human perceptible physical stimuli in the respective real-world location, wherein the provisioning comprises provisioning data stream connections for transmitting respective ones of the published physical space data streams to respective ones of the network nodes for transforming into human perceptible stimuli.
41. The method of claim 40, wherein:
each of one or more of the network nodes publishes one or more respective client data streams; and
the provisioning comprises provisioning data stream connections for transmitting respective ones of the published client data streams to respective ones of the client network nodes for transforming into human perceptible stimuli, and transmitting respective ones of the published client data streams to respective ones of the physical apparatus for transforming into human perceptible stimuli in the respective real-world locations.
42. The method of claim 38, wherein a particular one of the physical apparatus is operable to perform a respective function in its respective real-world location in response to data transmitted by a particular one of the network nodes on a respective one of the data stream connections provisioned based on a request from the particular network node referencing the object representing the particular physical apparatus.
43. The method of claim 42, further comprising based on a notification of an event relating to the respective function performable by the particular physical apparatus, sending a notification of the event to the particular one of the network nodes of a respective one of the communicants.
44. The method of claim 43, wherein the particular physical apparatus comprises a printer, and the sending comprises sending to the particular network node a notification that a document has been printed.
45. The method of claim 43, wherein the particular physical apparatus comprises a facsimile machine, and the sending comprises sending to the particular network node a notification of an incoming facsimile.
46. The method of claim 43, wherein the particular physical apparatus comprises a telephony device, and the sending comprises sending to the particular network node a notification of an incoming telephone call.
47. The method of claim 42, wherein the particular physical apparatus comprises a printer, and further comprising based on a request from the particular network node referencing the object representing the particular physical apparatus, provisioning at least one data stream connection for the particular network node to print a document.
48. The method of claim 42, wherein the particular physical apparatus comprises a facsimile machine, and further comprising based on a request from the particular network node referencing the object representing the particular physical apparatus, provisioning at least one data stream connection for the particular network node to one of send a facsimile and receive a facsimile.
49. The method of claim 42, wherein the particular physical apparatus comprises a telephony device, and further comprising based on a request from the particular network node referencing the object representing the particular physical apparatus, provisioning at least one data stream connection for the particular network node to one of place an outgoing telephone call and receive an incoming telephone call.
50. A method, comprising:
administering a virtual area in a virtual communications environment, wherein the virtual area comprises one or more zones each of which defines a respective persistent context for realtime communications between network nodes of respective communicants who are present in the zone, and the administering comprises for each zone administering realtime communications between the respective network nodes of communicants who are co-present in the zone;
for each of respective ones of the communicants who are present in the virtual area, transmitting a respective specification of a visualization of the virtual area comprising graphical representations of the one or more zones and avatars respectively representing the communicants in the one or more zones in which they respectively have presence;
from sensing apparatus co-located with a particular one of the communicants in a physical space, receiving state information describing information relating to a physical state of the particular communicant;
based on the state information, updating the specification of the visualization of the virtual area and the avatars and transmitting the updated specification to each of respective ones of the communicants who are present in the virtual area.
51. The method of claim 50, wherein the state information describes the current real-world location of the particular communicant, and the updating comprises providing an indication of the current real-world location of the particular communicant in the visualization.
52. The method of claim 51, wherein the updating comprises locating the avatar representing the particular communicant in a zone of the virtual area associated with the current real-world location of the particular communicant.
53. The method of claim 51, wherein the updating comprises providing a descriptive label indicative of the current real-world location of the particular communicant in association with the graphical representation of the avatar representing the particular communicant.
54. The method of claim 50, wherein the state information describes the state of the particular communicant in relation to a physical device associated with the particular communicant, and the updating comprises updating the graphical representation of the avatar representing the particular communicant based on the state of the particular communicant in relation to the physical device.
55. The method of claim 54, wherein the physical device is a headset; and based on a determination that the state information indicates that the headset is being worn by the particular communicant, the updating comprises including a graphical representation of a headset with the graphical representation of the avatar of the particular communicant.
56. The method of claim 50, wherein the state information describes a physical relationship between the particular communicant and another one of the communicants present in the virtual area, and the updating comprises updating the graphical representations of the avatars of the particular communicant and the other communicant to reflect the physical relationship between the particular communicant and the other communicant.
57. The method of claim 56, wherein based on a determination that the state information indicates that the particular communicant and the other communicant are co-located in a shared real-world location, the updating comprises including with the graphical representations of the avatars of the particular communicant and the other communicant respective indications that the particular communicant and the other communicant are physically co-located.
US13/554,084 2009-01-15 2012-07-20 Communicating between a virtual area and a physical space Abandoned US20130174059A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US13/554,084 US20130174059A1 (en) 2011-07-22 2012-07-20 Communicating between a virtual area and a physical space
US13/954,742 US9319357B2 (en) 2009-01-15 2013-07-30 Context based virtual area creation
US14/056,192 US9288242B2 (en) 2009-01-15 2013-10-17 Bridging physical and virtual spaces
US15/070,551 US9602447B2 (en) 2009-01-15 2016-03-15 Context based virtual area creation
US15/460,125 US9942181B2 (en) 2009-01-15 2017-03-15 Context based virtual area creation
US15/950,067 US10608969B2 (en) 2009-01-15 2018-04-10 Context based virtual area creation
US16/814,702 US20200213256A1 (en) 2009-01-15 2020-03-10 Context Based Virtual Area Creation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201161510698P 2011-07-22 2011-07-22
US201261637190P 2012-04-23 2012-04-23
US13/554,084 US20130174059A1 (en) 2011-07-22 2012-07-20 Communicating between a virtual area and a physical space

Publications (1)

Publication Number Publication Date
US20130174059A1 true US20130174059A1 (en) 2013-07-04

Family

ID=47556705

Family Applications (7)

Application Number Title Priority Date Filing Date
US13/554,084 Abandoned US20130174059A1 (en) 2009-01-15 2012-07-20 Communicating between a virtual area and a physical space
US13/554,051 Active 2033-06-28 US9182883B2 (en) 2009-01-15 2012-07-20 Communicating between a virtual area and a physical space
US14/930,472 Active US9575625B2 (en) 2009-01-15 2015-11-02 Communicating between a virtual area and a physical space
US15/437,335 Active US9851863B2 (en) 2009-01-15 2017-02-20 Communicating between a virtual area and a physical space
US15/853,831 Active US10838572B2 (en) 2011-07-22 2017-12-24 Communicating between a virtual area and a physical space
US17/092,741 Pending US20210055851A1 (en) 2011-07-22 2020-11-09 Communicating between a Virtual Area and a Physical Space
US17/092,701 Active 2034-02-02 US11960698B2 (en) 2011-07-22 2020-11-09 Communicating between a virtual area and a physical space

Family Applications After (6)

Application Number Title Priority Date Filing Date
US13/554,051 Active 2033-06-28 US9182883B2 (en) 2009-01-15 2012-07-20 Communicating between a virtual area and a physical space
US14/930,472 Active US9575625B2 (en) 2009-01-15 2015-11-02 Communicating between a virtual area and a physical space
US15/437,335 Active US9851863B2 (en) 2009-01-15 2017-02-20 Communicating between a virtual area and a physical space
US15/853,831 Active US10838572B2 (en) 2011-07-22 2017-12-24 Communicating between a virtual area and a physical space
US17/092,741 Pending US20210055851A1 (en) 2011-07-22 2020-11-09 Communicating between a Virtual Area and a Physical Space
US17/092,701 Active 2034-02-02 US11960698B2 (en) 2011-07-22 2020-11-09 Communicating between a virtual area and a physical space

Country Status (3)

Country Link
US (7) US20130174059A1 (en)
TW (1) TWI533198B (en)
WO (2) WO2013016165A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9401937B1 (en) 2008-11-24 2016-07-26 Shindig, Inc. Systems and methods for facilitating communications amongst multiple users
US8902272B1 (en) 2008-11-24 2014-12-02 Shindig, Inc. Multiparty communications systems and methods that employ composite communications
US9344745B2 (en) 2009-04-01 2016-05-17 Shindig, Inc. Group portraits composed using video chat systems
US8779265B1 (en) 2009-04-24 2014-07-15 Shindig, Inc. Networks of portable electronic devices that collectively generate sound
TWI533198B (en) 2011-07-22 2016-05-11 社交通訊公司 Communicating between a virtual area and a physical space
US9965564B2 (en) 2011-07-26 2018-05-08 Schneider Electric It Corporation Apparatus and method of displaying hardware status using augmented reality
US10742698B2 (en) * 2012-05-29 2020-08-11 Avaya Inc. Media contention for virtualized devices
US10109075B2 (en) * 2013-03-15 2018-10-23 Elwha Llc Temporal element restoration in augmented reality systems
US20140366091A1 (en) * 2013-06-07 2014-12-11 Amx, Llc Customized information setup, access and sharing during a live conference
US9679331B2 (en) * 2013-10-10 2017-06-13 Shindig, Inc. Systems and methods for dynamically controlling visual effects associated with online presentations
US10271010B2 (en) 2013-10-31 2019-04-23 Shindig, Inc. Systems and methods for controlling the display of content
US10284813B2 (en) 2014-03-17 2019-05-07 Microsoft Technology Licensing, Llc Automatic camera selection
US9749585B2 (en) * 2014-03-17 2017-08-29 Microsoft Technology Licensing, Llc Highlighting unread messages
US10178346B2 (en) 2014-03-17 2019-01-08 Microsoft Technology Licensing, Llc Highlighting unread messages
US9888207B2 (en) 2014-03-17 2018-02-06 Microsoft Technology Licensing, Llc Automatic camera selection
US9348526B2 (en) * 2014-03-28 2016-05-24 Scale Computing, Inc. Placement engine for a block device
US10095987B2 (en) * 2014-04-25 2018-10-09 Ebay Inc. Integrating event-planning services into a payment system
US9733333B2 (en) 2014-05-08 2017-08-15 Shindig, Inc. Systems and methods for monitoring participant attentiveness within events and group assortments
US20150338939A1 (en) * 2014-05-23 2015-11-26 Microsoft Technology Licensing, Llc Ink Modes
US20160071319A1 (en) * 2014-09-09 2016-03-10 Schneider Electric It Corporation Method to use augumented reality to function as hmi display
WO2016122580A1 (en) * 2015-01-30 2016-08-04 Hewlett Packard Enterprise Development Lp Room capture and projection
WO2016122582A1 (en) * 2015-01-30 2016-08-04 Hewlett Packard Enterprise Development Lp Relationship preserving projection of digital objects
CN107408354B (en) * 2015-03-05 2019-12-20 日本电气方案创新株式会社 Action evaluation device, action evaluation method, and computer-readable storage medium
US9699411B2 (en) * 2015-05-09 2017-07-04 Ricoh Company, Ltd. Integration of videoconferencing with interactive electronic whiteboard appliances
WO2017040808A1 (en) * 2015-09-03 2017-03-09 Contact Effect, Inc. Management of tenant locations in multi-tenant environments
US9965841B2 (en) 2016-02-29 2018-05-08 Schneider Electric USA, Inc. Monitoring system based on image analysis of photos
US9934431B2 (en) * 2016-07-27 2018-04-03 Konica Minolta Laboratory U.S.A., Inc. Producing a flowchart object from an image
US10133916B2 (en) 2016-09-07 2018-11-20 Steven M. Gottlieb Image and identity validation in video chat events
US10715746B2 (en) * 2017-09-06 2020-07-14 Realwear, Inc. Enhanced telestrator for wearable devices
US11222081B2 (en) 2017-11-27 2022-01-11 Evoqua Water Technologies Llc Off-line electronic documentation solutions
CN112204931A (en) * 2018-05-25 2021-01-08 利玛格有限公司 Method, apparatus and computer readable medium for real-time digital synchronization of data
JP7226836B2 (en) * 2018-11-06 2023-02-21 日本電気株式会社 Display control device, presentation system, display control method, and program
US11831814B2 (en) * 2021-09-03 2023-11-28 Meta Platforms Technologies, Llc Parallel video call and artificial reality spaces
US11921970B1 (en) 2021-10-11 2024-03-05 Meta Platforms Technologies, Llc Coordinating virtual interactions with a mini-map

Family Cites Families (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4280135A (en) 1979-06-01 1981-07-21 Schlossberg Howard R Remote pointing system
US5689641A (en) * 1993-10-01 1997-11-18 Vicor, Inc. Multimedia collaboration system arrangement for routing compressed AV signal through a participant site without decompressing the AV signal
US5933597A (en) * 1996-04-04 1999-08-03 Vtel Corporation Method and system for sharing objects between local and remote terminals
US7167897B2 (en) * 1996-05-08 2007-01-23 Apple Computer, Inc. Accessories providing a telephone conference application one or more capabilities independent of the teleconference application
US6999945B1 (en) * 1998-01-29 2006-02-14 Automated Business Companies Multiple customer and multiple location PC service provider system
US6094189A (en) 1998-04-17 2000-07-25 Quillen; Wendell A. Visual echo remote laser pointer
EP1085432B1 (en) 1999-09-20 2008-12-03 NCR International, Inc. Information retrieval and display
US20060184886A1 (en) 1999-12-22 2006-08-17 Urbanpixel Inc. Spatial chat in a multiple browser environment
KR100404285B1 (en) 2000-02-09 2003-11-03 (주) 고미드 2d/3d wed browsing method and recording medium storing the method
US20010019337A1 (en) 2000-03-03 2001-09-06 Jong Min Kim System for providing clients with a three dimensional virtual reality
US6501739B1 (en) * 2000-05-25 2002-12-31 Remoteability, Inc. Participant-controlled conference calling system
US7356563B1 (en) * 2002-06-06 2008-04-08 Microsoft Corporation Methods of annotating a collaborative application display
US7215324B2 (en) 2002-10-22 2007-05-08 Mitsubishi Electric Research Laboratories, Inc. Automatic indicator system and method
US20040236830A1 (en) * 2003-05-15 2004-11-25 Steve Nelson Annotation management system
JP4085924B2 (en) * 2003-08-04 2008-05-14 ソニー株式会社 Audio processing device
US7092002B2 (en) * 2003-09-19 2006-08-15 Applied Minds, Inc. Systems and method for enhancing teleconferencing collaboration
US20100166161A1 (en) * 2005-09-01 2010-07-01 Vishal Dhawan System and methods for providing voice messaging services
US7995713B2 (en) * 2006-04-03 2011-08-09 Agere Systems Inc. Voice-identification-based signal processing for multiple-talker applications
US8643736B2 (en) * 2006-12-27 2014-02-04 Verizon Patent And Licensing Inc. Method and apparatus for participating in a virtual community for viewing a remote event over a wireless network
US8086461B2 (en) * 2007-06-13 2011-12-27 At&T Intellectual Property Ii, L.P. System and method for tracking persons of interest via voiceprint
US8063905B2 (en) * 2007-10-11 2011-11-22 International Business Machines Corporation Animating speech of an avatar representing a participant in a mobile communication
US7769806B2 (en) 2007-10-24 2010-08-03 Social Communications Company Automated real-time data stream switching in a shared virtual area communication environment
US8140340B2 (en) * 2008-01-18 2012-03-20 International Business Machines Corporation Using voice biometrics across virtual environments in association with an avatar's movements
US20090215469A1 (en) 2008-02-27 2009-08-27 Amit Fisher Device, System, and Method of Generating Location-Based Social Networks
US8473851B2 (en) * 2008-02-27 2013-06-25 Cisco Technology, Inc. Multi-party virtual desktop
US8605863B1 (en) 2008-03-18 2013-12-10 Avaya Inc. Method and apparatus for providing state indication on a telephone call
US9514444B2 (en) * 2009-01-15 2016-12-06 Sococo, Inc. Encapsulating virtual area based communicant assemblies
EP2279472A4 (en) * 2008-04-05 2013-11-20 Social Communications Co Shared virtual area communication environment based apparatus and methods
GB2471628B (en) 2008-04-09 2012-09-19 Hewlett Packard Development Co Remote-controlled pointing
TWI418993B (en) * 2008-06-27 2013-12-11 Ind Tech Res Inst System and method for establishing personal social network, trusted network and social networking system
US8250141B2 (en) * 2008-07-07 2012-08-21 Cisco Technology, Inc. Real-time event notification for collaborative computing sessions
US20100214243A1 (en) * 2008-07-15 2010-08-26 Immersion Corporation Systems and Methods For Interpreting Physical Interactions With A Graphical User Interface
US20100037151A1 (en) 2008-08-08 2010-02-11 Ginger Ackerman Multi-media conferencing system
US20100100487A1 (en) 2008-10-16 2010-04-22 International Business Machines Corporation Virtual world contest, auction, election, sales method for notification and interaction with the real world
US9853922B2 (en) 2012-02-24 2017-12-26 Sococo, Inc. Virtual area communications
US8600731B2 (en) * 2009-02-04 2013-12-03 Microsoft Corporation Universal translator
US8340267B2 (en) * 2009-02-05 2012-12-25 Microsoft Corporation Audio transforms in connection with multiparty communication
US20100246571A1 (en) 2009-03-30 2010-09-30 Avaya Inc. System and method for managing multiple concurrent communication sessions using a graphical call connection metaphor
US20100306670A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Gesture-based document sharing manipulation
US8659639B2 (en) * 2009-05-29 2014-02-25 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US8806331B2 (en) * 2009-07-20 2014-08-12 Interactive Memories, Inc. System and methods for creating and editing photo-based projects on a digital network
US8578038B2 (en) 2009-11-30 2013-11-05 Nokia Corporation Method and apparatus for providing access to social content
JP2013545154A (en) 2010-09-10 2013-12-19 ワイフェアラー・インコーポレーテッド RF fingerprint for content location
US8607295B2 (en) * 2011-07-06 2013-12-10 Symphony Advanced Media Media content synchronized advertising platform methods
US8542264B2 (en) * 2010-11-18 2013-09-24 Cisco Technology, Inc. System and method for managing optics in a video environment
US8738772B2 (en) 2011-05-02 2014-05-27 Mitel Networks Corporation Regulating use of a mobile computing device for a user at a selected location
TWI533198B (en) 2011-07-22 2016-05-11 社交通訊公司 Communicating between a virtual area and a physical space
US9075561B2 (en) * 2011-07-29 2015-07-07 Apple Inc. Systems, methods, and computer-readable media for managing collaboration on a virtual work of art
US8682973B2 (en) 2011-10-05 2014-03-25 Microsoft Corporation Multi-user and multi-device collaboration
CN102521407B (en) * 2011-12-28 2015-04-01 谢勇 Method for document collaboration among users
US8949159B2 (en) * 2012-01-20 2015-02-03 Avaya Inc. System and method for automatic merging of real and virtual environments
US20130222266A1 (en) 2012-02-24 2013-08-29 Dan Zacharias GÄRDENFORS Method and apparatus for interconnected devices
US8892123B2 (en) 2012-03-07 2014-11-18 Microsoft Corporation Identifying meeting attendees using information from devices
US8576995B1 (en) 2012-10-30 2013-11-05 Google Inc. System and method for connecting an endpoint to an active multimedia communications session on a data network by setting metadata associated with a telephone call
US20150033140A1 (en) * 2013-07-23 2015-01-29 Salesforce.Com, Inc. Providing active screen sharing links in an information networking environment

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040189701A1 (en) * 2003-03-25 2004-09-30 Badt Sig Harold System and method for facilitating interaction between an individual present at a physical location and a telecommuter
US8285638B2 (en) * 2005-02-04 2012-10-09 The Invention Science Fund I, Llc Attribute enhancement in virtual world environments
US20070234405A1 (en) * 2006-01-30 2007-10-04 Dai Nippon Printing Co., Ltd. System using electronic devices connected to network
US20070233785A1 (en) * 2006-03-30 2007-10-04 International Business Machines Corporation Communicating using collaboration spaces
US20090083374A1 (en) * 2006-05-03 2009-03-26 Cloud Systems, Inc. System and method for automating the management, routing, and control of multiple devices and inter-device connections
US7958453B1 (en) * 2006-09-29 2011-06-07 Len Bou Taing System and method for real-time, multi-user, interactive and collaborative environments on the web
US20080263460A1 (en) * 2007-04-20 2008-10-23 Utbk, Inc. Methods and Systems to Connect People for Virtual Meeting in Virtual Reality
US20090089685A1 (en) * 2007-09-28 2009-04-02 Mordecai Nicole Y System and Method of Communicating Between A Virtual World and Real World
US20090172557A1 (en) * 2008-01-02 2009-07-02 International Business Machines Corporation Gui screen sharing between real pcs in the real world and virtual pcs in the virtual world
US20090254842A1 (en) * 2008-04-05 2009-10-08 Social Communication Company Interfacing with a spatial virtual communication environment
US20100100851A1 (en) * 2008-10-16 2010-04-22 International Business Machines Corporation Mapping a real-world object in a personal virtual world
US20100146118A1 (en) * 2008-12-05 2010-06-10 Social Communications Company Managing interactions in a network communications environment
US20100229106A1 (en) * 2009-03-06 2010-09-09 Trion World Network, Inc. Synthetic environment character data sharing
US20100245536A1 (en) * 2009-03-30 2010-09-30 Microsoft Corporation Ambulatory presence features
US20110029885A1 (en) * 2009-07-30 2011-02-03 International Business Machines Corporation Confidential Presentations in Virtual Worlds
US20110271208A1 (en) * 2010-04-30 2011-11-03 American Teleconferencing Services Ltd. Location-Aware Conferencing With Entertainment Options
US20110271332A1 (en) * 2010-04-30 2011-11-03 American Teleconferencing Services Ltd. Participant Authentication via a Conference User Interface
US20120030289A1 (en) * 2010-07-30 2012-02-02 Avaya Inc. System and method for multi-model, context-sensitive, real-time collaboration
US20120096399A1 (en) * 2010-10-15 2012-04-19 Xerox Corporation Web/cloud hosted publish and subscribe service
US20120204120A1 (en) * 2011-02-08 2012-08-09 Lefar Marc P Systems and methods for conducting and replaying virtual meetings
US20140149599A1 (en) * 2012-11-29 2014-05-29 Ricoh Co., Ltd. Unified Application Programming Interface for Communicating with Devices and Their Clouds

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9288242B2 (en) 2009-01-15 2016-03-15 Social Communications Company Bridging physical and virtual spaces
US11925869B2 (en) 2012-05-08 2024-03-12 Snap Inc. System and method for generating and displaying avatars
US11657438B2 (en) 2012-10-19 2023-05-23 Sococo, Inc. Bridging physical and virtual spaces
US10356136B2 (en) 2012-10-19 2019-07-16 Sococo, Inc. Bridging physical and virtual spaces
US9444933B2 (en) * 2013-06-19 2016-09-13 Canon Kabushiki Kaisha Communication apparatus, method of controlling the same, and storage medium
US20140376411A1 (en) * 2013-06-19 2014-12-25 Canon Kabushiki Kaisha Communication apparatus, method of controlling the same, and storage medium
US20150100892A1 (en) * 2013-10-08 2015-04-09 Ge Global Research System and method for providing user interface for user-specified context
US9870545B2 (en) * 2013-10-08 2018-01-16 General Electric Company System and method for providing user interface cards
US11783409B1 (en) 2013-10-25 2023-10-10 Appliance Computing III, Inc. Image-based rendering of real spaces
US11610256B1 (en) 2013-10-25 2023-03-21 Appliance Computing III, Inc. User interface for image-based rendering of virtual tours
US11062384B1 (en) 2013-10-25 2021-07-13 Appliance Computing III, Inc. Image-based rendering of real spaces
US11948186B1 (en) 2013-10-25 2024-04-02 Appliance Computing III, Inc. User interface for image-based rendering of virtual tours
US10592973B1 (en) 2013-10-25 2020-03-17 Appliance Computing III, Inc. Image-based rendering of real spaces
US11449926B1 (en) 2013-10-25 2022-09-20 Appliance Computing III, Inc. Image-based rendering of real spaces
US10510111B2 (en) * 2013-10-25 2019-12-17 Appliance Computing III, Inc. Image-based rendering of real spaces
US20170201721A1 (en) * 2014-09-30 2017-07-13 Hewlett Packard Enterprise Development Lp Artifact projection
US20160156569A1 (en) * 2014-11-28 2016-06-02 Igor, Inc. Node and Method of Assigning Node to Space
US11631276B2 (en) 2016-03-31 2023-04-18 Snap Inc. Automated avatar generation
US11876762B1 (en) 2016-10-24 2024-01-16 Snap Inc. Generating and displaying customized avatars in media overlays
US11843456B2 (en) 2016-10-24 2023-12-12 Snap Inc. Generating and displaying customized avatars in media overlays
USD881936S1 (en) * 2017-04-11 2020-04-21 Maschinenfabrik Reinhausen Gmbh Display screen or portion thereof with graphical user interface
US11392264B1 (en) 2017-04-27 2022-07-19 Snap Inc. Map-based graphical user interface for multi-type social media galleries
US11474663B2 (en) 2017-04-27 2022-10-18 Snap Inc. Location-based search mechanism in a graphical user interface
US11893647B2 (en) 2017-04-27 2024-02-06 Snap Inc. Location-based virtual avatars
US11451956B1 (en) 2017-04-27 2022-09-20 Snap Inc. Location privacy management on map-based social media platforms
US11782574B2 (en) 2017-04-27 2023-10-10 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11418906B2 (en) 2017-04-27 2022-08-16 Snap Inc. Selective location-based identity communication
US11385763B2 (en) 2017-04-27 2022-07-12 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11842411B2 (en) * 2017-04-27 2023-12-12 Snap Inc. Location-based virtual avatars
US11843473B2 (en) * 2020-01-08 2023-12-12 Disney Enterprises, Inc. Audio-orientated immersive experience of an event
US20220311635A1 (en) * 2020-01-08 2022-09-29 Disney Enterprises, Inc. Audio-Orientated Immersive Experience of an Event
US11381413B2 (en) * 2020-01-08 2022-07-05 Disney Enterprises, Inc. Audio-orientated immersive experience of an event
US20220244683A1 (en) * 2021-02-02 2022-08-04 Kyocera Document Solutions Inc. Integration of printing device to a smart space
WO2024057642A1 (en) * 2022-09-14 2024-03-21 キヤノン株式会社 Information processing device, information processing method, program, and storage medium

Also Published As

Publication number Publication date
US20130024785A1 (en) 2013-01-24
WO2013016165A1 (en) 2013-01-31
TWI533198B (en) 2016-05-11
US20180121054A1 (en) 2018-05-03
US20210055850A1 (en) 2021-02-25
US9182883B2 (en) 2015-11-10
US20160062597A1 (en) 2016-03-03
WO2013016161A1 (en) 2013-01-31
US9851863B2 (en) 2017-12-26
US11960698B2 (en) 2024-04-16
US9575625B2 (en) 2017-02-21
US20210055851A1 (en) 2021-02-25
US10838572B2 (en) 2020-11-17
US20170160900A1 (en) 2017-06-08
TW201308195A (en) 2013-02-16

Similar Documents

Publication Publication Date Title
US11960698B2 (en) Communicating between a virtual area and a physical space
US11785056B2 (en) Web browser interface for spatial communication environments
US11489893B2 (en) Bridging physical and virtual spaces
US9813463B2 (en) Phoning into virtual communication environments
US11380020B2 (en) Promoting communicant interactions in a network communications environment
US9077549B2 (en) Creating virtual areas for realtime communications
US8930472B2 (en) Promoting communicant interactions in a network communications environment
US20120246582A1 (en) Interfacing with a spatial virtual communications environment
US20090288007A1 (en) Spatial interfaces for realtime networked communications
US20240087180A1 (en) Promoting Communicant Interactions in a Network Communications Environment
US20230339816A1 (en) Visual Communications

Legal Events

Date Code Title Description
AS Assignment

Owner name: SOCIAL COMMUNICATIONS COMPANY, OREGON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VAN WIE, DAVID;REEL/FRAME:037219/0418

Effective date: 20151123

AS Assignment

Owner name: SOCIAL COMMUNICATIONS COMPANY, OREGON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BRODY, PAUL J.;REEL/FRAME:039698/0925

Effective date: 20160910

AS Assignment

Owner name: SOCOCO, INC., OREGON

Free format text: CHANGE OF NAME;ASSIGNOR:SOCIAL COMMUNICATIONS COMPANY;REEL/FRAME:040017/0709

Effective date: 20151020

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION