WO2000075860A1 - Electronic writing/display apparatus and respective method of operation - Google Patents

Electronic writing/display apparatus and respective method of operation

Info

Publication number
WO2000075860A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture elements
location
active point
reference system
active
Prior art date
Application number
PCT/EP1999/003921
Other languages
French (fr)
Inventor
Roberto Battiti
Alessandro Garofalo
Original Assignee
Soffix, S.R.L.
Priority date
Filing date
Publication date
Application filed by Soffix, S.R.L. filed Critical Soffix, S.R.L.
Priority to PCT/EP1999/003921 priority Critical patent/WO2000075860A1/en
Publication of WO2000075860A1 publication Critical patent/WO2000075860A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0428Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface which may be virtual

Definitions

  • the present invention relates to electronic writing/display apparatus.
  • Different modes of interacting and communicating are converging toward integrated systems, with the provision of apparatus for the presentation of information (monitor and display screens, overhead projectors, computer controlled video projectors, loudspeakers) and for the interaction between the participants and the system (keyboard and mouse interfaces, video cameras, laser and infra-red pens, surface-contact detectors, microphones, etc.).
  • the transmission and management of information exchanged between the various subsystems of the integrated system is often handled by a computer, connected to the different apparatuses and capable of controlling them.
  • US-A-5 274 362 describes a method of determining horizontal X and vertical Y coordinates in an electronic blackboard system having two electronically conductive surfaces, at least one of which is flexible enough to permit contact between the first and the second surface, with electrodes arranged on the two surfaces at right angles to each other. Contact between the first and the second surface is determined by applying currents and measuring voltage differences at selected electrodes on the two surfaces. In addition, a separate self-calibration mode is activated if no contact is determined by the usual mechanism of the system.
  • US-A-4 803 564 describes an electro-mechanical system to read information written on an existing writing board that may be attached to a wall. The system is comprised of a photoelectric conversion device mounted on a driving unit moving horizontally. The driving unit moves on a guide rail secured to angle irons fixed to the information writing board.
  • US-A-5 305 114 describes apparatus which reads a writing on a writing sheet fitted on a blackboard framework and which electronically copies the writing onto a recording medium.
  • the apparatus includes a foldable framework, the writing sheet, a stand to support the framework and a copying device.
  • US-A-4 858 021 describes a photoelectric device supported by a horizontal guide rail which is in turn supported by the blackboard. A drive motor and roller drive the device by direct contact of the roller with the blackboard surface.
  • US-A-3 761 620 represents an example of a graphical input device, where a two-dimensional matrix of semiconductors is arranged in an ordered array as a flat light emitting and light sensing device. A penlight is used to activate the light sensing semiconductors to achieve a graphical input.
  • EP-A-0 372 467 describes an electrostatic blackboard with an image display function for displaying a visible toner image, including a copying apparatus for transferring the visible image on the blackboard to a large-size paper sheet.
  • WO-A-97/06963 describes a writing board with a mechanism to erase what has been written by writing instruments using an ink. The erasing mechanism operates an erase switch to convey the recording medium and actuate a pump. The cleaning liquid in a storage container is circulated through the cleaner via tubes attached to the surface of the recording medium being conveyed.
  • a first drawback is related to the "bothersome" presence of technology.
  • the interaction tools tend to distract the user's attention from the purposes of use (interaction, information exchange, knowledge enrichment) toward the technological means.
  • in some cases a training phase is required; in other cases a third person (a so-called director) is required to manage the proper acquisition and presentation of multimedia information.
  • Installation and calibration may also represent a source of concern. Quite frequently, installation and calibration by qualified technical personnel is required before use of the system by final users is made possible. This is the case when large and complex systems are installed, including boards to be mounted with high accuracy, high-precision lenses, etc. In other cases, systems based on traditional computer vision techniques must be calibrated through the precise knowledge of the relative positions and orientations of the visual sensor and display units.
  • Fragility of the devices involved and the necessity of a controlled environment sometimes need to be taken into account.
  • the devices involved are quite sensitive (for example, some optical pens have a high probability of being damaged if dropped by accident).
  • the use of some electronic blackboards is made difficult because of the constraints on the physical environment, for example they need a large entrance door, or a large space for their installation.
  • the present invention aims at solving the problems outlined in the foregoing by means of a system having the features set forth in the annexed claims.
  • the invention also relates to the respective method of operation.
  • the system of the invention is comprised of two optical sensors, a projector and a computer connected to the projector and to the optical sensors.
  • the surface in question may be simply represented e.g. by a screen such as a traditional projection screen or simply a wall surface already available at the location where the system of the invention is used.
  • the tracer element may simply be e.g. a stick, a pen (including a light pen) already available to the user of the system or even just the finger of the user himself or herself.
  • the optical sensors are arranged in order to monitor that part of the surface (in the following referred to as the active surface) on which the image is projected by the projector.
  • the computer is preferably provided with hardware/software modules for self-calibration, acquisition and processing of the images from the optical sensors and controlling the projector.
  • interaction with the system occurs through the movement of the operator's hand.
  • a pointed extremity, for example a finger of the hand, representing an image distinguishable over the background of the active surface, touches or approaches that surface.
  • the coordinates Px and Py of the contact point are calculated in a continuous manner by the system comprised of the optical sensors and the computer, and thus made available to be used as input data for the different applications installed on the computer.
  • the coordinates of the active writing point can be stored into a random-access memory (RAM) of the computer.
  • some specific active zones of the active surface can be associated to commands to be executed when these zones are touched: in this case the finger of the hand can be used as a substitute for a peripheral such as a mouse used for the selection in a computer-controlled system.
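  • The active-zone mechanism just described can be sketched as plain rectangular hit-testing in the projector's (Px, Py) coordinates. The sketch below is illustrative only; the zone layout and command names are hypothetical and not taken from the patent:

```python
def hit_test(px, py, zones):
    """Return the command of the first active zone containing (px, py).

    zones: list of (x0, y0, x1, y1, command) rectangles in projector
    coordinates, with x0 <= x1 and y0 <= y1; returns None on a miss.
    """
    for x0, y0, x1, y1, command in zones:
        if x0 <= px <= x1 and y0 <= py <= y1:
            return command
    return None

# Hypothetical zones: two "buttons" in the top corners of a 512 x 512 matrix.
zones = [(0, 0, 63, 63, "erase"), (448, 0, 511, 63, "save")]
print(hit_test(10, 10, zones))    # prints "erase"
print(hit_test(256, 256, zones))  # prints "None"
```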
  • the invention thus substantially mitigates the disadvantages outlined in the foregoing.
  • the intrusive presence of technology is dispensed with, whilst interaction of the user with the system becomes more natural.
  • Specific technical competence and training are no longer required: the user simply activates the system and initiates interaction therewith with the capability of exclusively focusing onto the training/information purposes of the session and not onto the technical means.
  • the installation phase is simplified (placement of the projector and the sensors associated therewith as well as connection to the computer, without the need of precise measurements) and calibration is completely automated.
  • the apparatus of the invention is not overly sensitive and can be used in quite diverse environments.
  • all that is required is a surface (not necessarily a flat one) and the sensors that monitor that surface in order to detect the location of the tracer member.
  • the flexibility of use of the invention is high.
  • the same apparatus can be used on a desk, with the active surface being a part of the desk surface or an electronic drawing board, through a traditional white board, through a foldable panel for the projection or, simply, through a piece of smooth but not necessarily flat wall surface.
  • use with a display unit is possible, to obtain a way of operation similar to touch-screen operation based on electro-magnetic principles. The portability is enhanced by the low weight of the single components.
  • - figure 1 schematically depicts interaction between the user and a system according to the invention
  • - figure 2 diagrammatically shows the matrix of pixels in a projector for use in the system of the invention
  • - figure 3 including four sections designated 3a to 3d, respectively, shows definition of certain reference systems for use in the invention
  • - figure 4 illustrates location of the active writing point at the active surface in the system of the invention
  • - figure 7 shows determination of the active writing point in the system of the invention
  • - figure 8 schematically shows one way of eliminating interference between motion of the active writing point and motion of the projected image
  • - figure 9 shows an alternative approach for interaction between the user and the system of the invention
  • - figure 10 shows a further alternative approach for interaction between the user and the system
  • Apparatus for electronic writing/display (e.g. a so-called “electronic blackboard”) according to the invention is designated 1 overall in figure 1.
  • Apparatus 1 is intended for interaction with an operator H capable of generating graphical information (i.e. writing and/or drawing symbols/images) on a surface 7 by means of a pointed member F such as e.g. a pen, a light-pen, a stick or simply a finger.
  • Surface 7 is shown herein as being a flat or substantially flat rectangular surface but may be notionally of any shape enabling writing or drawing and display thereon.
  • Apparatus 1 further includes a projector 6 for projecting graphical information onto surface 7 under the control of a processing unit 8 such as a computer.
  • Optical sensors 3 and 5 are arranged in order to "frame" surface 7 and generate respective signals indicative of the position of pointed member F (i.e. the tracer member) as explained in greater detail in the following.
  • sensors 3 and 5 are preferably located at the upper corners of surface 7, still preferably by resorting to pods, brackets or arms 2 and 4 ensuring that sensors 3 and 5 at least slightly protrude towards operator H thus being positioned at a certain distance from surface 7.
  • the image is projected by the optics of the system onto a sensor matrix of dots (picture elements or pixels) composed of elementary surface portions characterized by a single level of intensity (one for each color in the case of color-sensitive sensors).
  • the captioned sensor matrix can be conveniently comprised of a CCD element of the kind currently used in solid-state cameras.
  • the sensed matrix of intensity values is then transmitted in the form of a respective signal to computer 8.
  • projector 6 is controlled by computer 8 in order to define a matrix of intensity values, one for each pixel of the image. These values are then transformed into the projected light beams by the electronic and optical sections of the projector.
  • a simplified illustration of the operation of projector 6 is provided in figure 2. All of the foregoing corresponds to current technology, not requiring detailed description herein.
  • Three reference systems, illustrated in figures 3b to 3d, respectively, are contemplated for use within the framework of the invention.
  • the first system (L - figure 3b) is associated to the pixel matrix of the left optical sensor 3.
  • the second system (R - figure 3d) to the matrix of the right optical sensor 5.
  • the third system (P - figure 3c) is associated to the projection matrix of projector 6.
  • a single pixel in the cited matrices is therefore characterized by two coordinates in the corresponding reference system.
  • a physical point in the three-dimensional space, in the working space visible by both optical sensors 3, 5 and such that it can be illuminated by the light beam originating from projector 6, is therefore associated to three points in the three reference systems described in the foregoing. The physical point will thus be projected by the optical system of the left optical sensor 3 onto a point with coordinates (Lx, Ly) in the reference system L.
  • the same point will have two coordinates (Rx, Ry) in the reference system R.
  • the same point will be identified by two coordinates (Px, Py) in the system P of projector 6.
  • Px and Py are the coordinates of the pixel of the matrix to which a maximum intensity value is to be associated if a light beam originating from the lens of the projector 6 and passing through the given physical point is needed.
  • Basic principles of the invention In the following it will be assumed that the physical point is visible by both optical sensors 3, 5 and that it can be reached by a light beam originated from projector 6 without any obstacles therebetween. This point will be referred to in the following as the active point AP.
  • Each active point in the working space of the invention has associated therewith six numerical values: Rx, Ry, Lx, Ly, Px, Py.
  • the values are calculated - in a known manner - starting from the signals generated by sensors 3 and 5 and stored in a memory area such as a RAM of computer 8.
  • the associations between active points and the values Rx, Ry, Lx, Ly, Px, Py are also subject to small inaccuracies due to the finite precision of the processing steps executed (in a digital manner) in the optical sensors 3 and 5 and/or in computer 8. For example, round-off errors are generated if the six coordinates are represented with integer values (therefore eliminating the digits after the decimal point in the number). Standard techniques are however available for dealing with the inaccuracies/errors thus introduced.
  • the exact intersection of two lines in the three-dimensional space is substituted with the point where the distance between the two lines is at a minimum value (actually the minimum distance criterion determines two points, one on each line, and then one calculates the point at the middle of the segment connecting the two previous points as the point of approximated intersection).
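  • The approximated intersection just described can be sketched in a few lines of pure Python (an illustrative sketch, not code from the patent): compute the closest points on the two lines and return the midpoint of the segment joining them.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def approximate_intersection(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + t*d1 and p2 + u*d2.

    p1, p2 are points and d1, d2 direction vectors (3-element sequences);
    the lines must not be parallel.
    """
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b        # zero only for parallel lines
    t = (b * e - c * d) / denom  # parameter of closest point on line 1
    u = (a * e - b * d) / denom  # parameter of closest point on line 2
    q1 = [p + t * v for p, v in zip(p1, d1)]
    q2 = [p + u * v for p, v in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two lines that actually intersect return the intersection point itself.
print(approximate_intersection((0, 0, 0), (1, 1, 0), (2, 0, 0), (-1, 1, 0)))
# prints [1.0, 1.0, 0.0]
```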
  • the invention is based on the determination of the event given by contact between the active point (i.e. the writing point) and the active surface 7 and how the coordinates Px and Py in the pixel matrix of the projector 6 are calculated after the coordinates (Lx, Ly) and (Rx, Ry) of the two images of the writing point are acquired by the two optical sensors 3 and 5.
  • more precisely, the invention relies on the existence of certain mathematical relationships, i.e. functions.
  • the system of the invention determines the presence or absence of contact (or close proximity, see below) between the active writing point AP and the active surface 7.
  • That information can thus be made explicit through the definition of certain mathematical functions and through the construction of approximations of these functions starting from a set of examples, i.e. from a set of associations between input and output values, obtained during an initial self-calibration phase.
  • a simplified presentation of the foregoing may be based on the reduction to a bi-dimensional model where the active point of interest is given by a black, pointwise object on a white background. For the sake of clarity, the horizontal plane passing through the working space will be considered, also assuming that the optical sensors 3 and 5 are uni-dimensional, such that a pixel position is determined only by the coordinate Lx (for the left-hand sensor) and Rx (for the right-hand sensor).
  • the situation is schematically illustrated in figure 4. Once the coordinate Lx is fixed, the possible positions of the active point will be given by a straight line originating from the left-hand optical sensor with a given direction. The specific direction depends on Lx and on the optical system.
  • the reference systems L and R are arranged in such a way that a movement of the active point to the right will imply an increase of the values Lx and Rx. If, in addition to the coordinate Lx, the coordinate Rx associated with the same active point is also known, the physical location of the point can be determined from the intersection of the two straight lines (consider for example point P1 in figure 4, obtained by the intersection between the continuous straight line originating from L and one of the dashed lines originating from R).
  • CONTACT is implemented by a hardware/software module in computer 8 which, starting from the value of Lx, calculates the value Rx.
  • given an active point, computer 8 obtains the coordinates Lx and Rx of the point starting from the signals generated by optical sensors 3 and 5.
  • a threshold value (defined as the contact threshold) is introduced, and contact is determined on the basis of a comparison of the difference Delta with this threshold.
  • the value of the threshold can be determined when the invention is constructed, depending on the precision of the optical sensors (number of pixels), and depending on the requirements of the user. It can also be made selectively adjustable in order to render a condition of proximity or close proximity equivalent to contact proper (see also below).
  • the method just described can be easily generalized to the multidimensional case of the application, therefore considering also the coordinates along the y axes of the pixel matrices of the optical sensors.
  • the functions of interest that calculate the coordinates in the reference system of an optical sensor starting from the coordinates of the other sensor and from the requirement that the active point should be in contact with the surface, are the following:
  • CONTACTLx, CONTACTLy, CONTACTRx, CONTACTRy. The terminal part of the name is a mnemonic label whose meaning is as follows: CONTACTLx calculates the coordinate Lx (hence the label) starting from the coordinates of the right-hand sensor, CONTACTLy calculates the coordinate Ly starting from the coordinates of the right-hand sensor, etc.
  • to determine the presence or absence of contact (or close proximity), the system considers all four optical coordinates (Lx, Ly, Rx, Ry) associated with the active point, calculates one of the functions defined in the foregoing, and compares the difference between the value obtained by the function and the value of the corresponding coordinate with the value of the contact threshold.
  • the difference calculated by the system is Lx - CONTACTLx(Rx, Ry).
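  • The contact decision can be sketched as follows. The real CONTACTLx is learned during self-calibration; here a hypothetical linear stand-in is used purely to show the thresholding logic:

```python
CONTACT_THRESHOLD = 3.0  # pixels; adjustable to treat close proximity as contact

def contact_lx(rx, ry):
    # hypothetical stand-in for the learned function CONTACTLx(Rx, Ry)
    return 0.9 * rx + 0.1 * ry + 5.0

def is_contact(lx, ly, rx, ry, threshold=CONTACT_THRESHOLD):
    """Contact if the measured Lx agrees with the Lx predicted from (Rx, Ry)."""
    delta = abs(lx - contact_lx(rx, ry))
    return delta <= threshold

print(is_contact(95.0, 40.0, 100.0, 0.0))  # prediction is 95.0 -> prints True
print(is_contact(80.0, 40.0, 100.0, 0.0))  # prediction is 95.0 -> prints False
```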
  • one of the possible uses of the system of the invention is to realize a so-called virtual pen, i.e. the user moves a tracer member (e.g. a finger of the hand or a stick with a color different from the background) thus defining an active writing point and a projector projects a light beam in correspondence to the position subsequently identified by the tracer member.
  • the notation indicates which coordinate is calculated (Px or Py) and which are the input coordinates (those of the left optical sensor or those of the right optical sensor) . Let us assume that the system has stored in memory a list of coordinates
  • a machine-learning system can use the above examples to generalize the association between inputs and outputs, in this way producing an approximation of the desired function.
  • the above functions may be realized through a mechanism of machine learning from examples based on the use of neural nets.
  • these functions are constructed in an automated way during a preliminary setup phase .
  • projector 6 projects one after another, under control from computer 8, a series of images comprised e.g. of a single contrasting point against the background of surface 7, e.g. a single black point on a white background.
  • the positions of the black point in the different images are obtained by varying the coordinates Px and Py of the pixel matrix of projector 6 so that they assume values corresponding to a set of points that covers the matrix in a uniform way.
  • if the pixel matrix of the projector (figure 5b) is comprised of 512 x 512 pixels, with values Px and Py ranging from 0 to 511, the values considered for the couples (Px, Py) could be (0,0), (0,16), (0,32), ..., (16, 0), etc.
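  • The uniform grid of calibration couples can be generated as below (a sketch assuming the 512 x 512 matrix and step 16 of the example in the text):

```python
STEP, SIZE = 16, 512  # values taken from the example in the text

# Couples (Px, Py) covering the projector matrix uniformly.
grid = [(px, py) for px in range(0, SIZE, STEP)
                 for py in range(0, SIZE, STEP)]

print(grid[:4])   # prints [(0, 0), (0, 16), (0, 32), (0, 48)]
print(len(grid))  # prints 1024 (32 x 32 calibration points)
```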
  • Each image is acquired by the optical sensors 3 and 5 and transmitted to computer 8, which, starting from the pixel matrices acquired by both optical sensors, calculates the coordinates (Lx, Ly) and (Rx, Ry) of the pixel with the lowest intensity present in the pixel matrices associated with the left and right sensors, corresponding to the image of the point projected (see figures 5a and 5c).
  • Computer 8 stores the series of data (Px, Py, Lx, Ly, Rx, Ry) e.g. in a RAM area provided therein. After all the images have been projected, these data are used by a hardware/software component to construct the required functions.
  • the required functions can be calculated with traditional fitting techniques based on polynomials of sufficiently high degree to ensure that non-linearities in the system are compensated.
  • such methods are known, e.g. from W.H. Press, B.P. Flannery, S.A. Teukolsky and W.T. Vetterling, Numerical Recipes in C, Cambridge University Press, 1988.
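  • As an illustration of the traditional fitting alternative, a single-variable least-squares polynomial fit via the normal equations can be sketched as below (the actual system fits functions of two coordinates; this simplified version is only meant to show the principle):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit: returns [c0, c1, ...] with y ~ sum ci*x^i."""
    n = degree + 1
    # Normal equations A c = b with A[i][j] = sum x^(i+j), b[i] = sum y*x^i.
    A = [[float(sum(x ** (i + j) for x in xs)) for j in range(n)] for i in range(n)]
    b = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

# Recovering the coefficients of y = 1 + 2x + 3x^2 from five samples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 + 2 * x + 3 * x * x for x in xs]
print([round(c, 6) for c in polyfit(xs, ys, 2)])  # prints [1.0, 2.0, 3.0]
```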
  • the required functions are approximated through flexible representations, also known as "networks of functions" or "neural networks". These representations are constructed starting from a set of examples by machine learning mechanisms. Some examples of these networks of functions are illustrated in figures 6a and 6b. It is known that neural nets can be used to realize computational architectures that compute the solution to a problem (in the instant case the problem is that of determining the above described functions) starting from a set of input-output associations that have been stored into memory and used for an automated training phase.
  • Training is executed by minimizing the above function with respect to its parameters ("weights") x: when smaller values are obtained, the output values and the target values tend to become similar. It is also important to ensure that the total number of parameters is limited, so that the network can generalize in an appropriate way to new cases, not considered during training.
  • the one-step secant method (OSS) is a variation of what is called the one-step (memory-less) Broyden-Fletcher-Goldfarb-Shanno method.
  • the one-step method requires only vectors computed from gradients g of the function f.
  • the new search direction d is obtained from the current gradient g_c, the last step s_c and the last difference of gradients y_c as d = -g_c + A_c s_c + B_c y_c, where A_c = -(1 + (y_c^T y_c)/(s_c^T y_c)) (s_c^T g_c)/(s_c^T y_c) + (y_c^T g_c)/(s_c^T y_c) and B_c = (s_c^T g_c)/(s_c^T y_c).
  • the search direction is the negative gradient at the beginning of learning and it is restarted to -g_c every N steps (N being the number of weights in the network).
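  • A minimal sketch of this update rule, assuming the standard memory-less BFGS coefficients (function and variable names are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def oss_direction(g, s, y):
    """One-step secant search direction d = -g + A*s + B*y.

    g: current gradient; s: last weight step x_c - x_p;
    y: last gradient difference g_c - g_p (all same-length lists).
    """
    sy = dot(s, y)  # curvature term s^T y, assumed non-zero
    A = -(1.0 + dot(y, y) / sy) * (dot(s, g) / sy) + dot(y, g) / sy
    B = dot(s, g) / sy
    return [-gi + A * si + B * yi for gi, si, yi in zip(g, s, y)]

# 1-D check on f(x) = x^2 / 2 (gradient is x): coming from x = 2 to x = 1,
# the direction at x = 1 points straight at the minimum x = 0.
print(oss_direction([1.0], [-1.0], [-1.0]))  # prints [-1.0]
```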
  • the one-step secant algorithm can thus be described in the form of the program excerpt reproduced in the table below. TABLE 1 - The one-step secant algorithm
  • d_t is the directional derivative of E along d.
  • a constant equal to 0.5 is used to multiply the directional derivative.
  • MAX_TRIALS is equal to 10.
  • L_incr is a constant equal to 1.1.
  • L_decr is a constant equal to 0.5.
  • the learning rate is decreased by L_decr after each unsuccessful trial.
  • Quadratic interpolation does not waste computation; in fact, after the first trial one has exactly the information that is needed to fit a parabola: the values of E_0 and E'_0 at the initial point and the value of E_1 at the trial point.
  • the parabola fitted to these values is P(λ) = E_0 + E'_0 λ + ((E_1 - E_0 - E'_0 λ_1) / λ_1^2) λ^2, where λ_1 is the trial point.
  • the λ_min that minimizes the parabola, λ_min = -E'_0 λ_1^2 / (2 (E_1 - E_0 - E'_0 λ_1)), is less than λ_1 after an unsuccessful trial.
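  • The interpolation step can be sketched as follows (a sketch of the standard quadratic back-tracking formula, using the values E_0, E'_0 and E_1 mentioned above):

```python
def quadratic_step(e0, de0, lam1, e1):
    """Minimizer of the parabola through E(0)=e0 with slope de0 and E(lam1)=e1."""
    curv = (e1 - e0 - de0 * lam1) / lam1 ** 2  # quadratic coefficient
    return -de0 / (2.0 * curv)

# For the exact parabola E(l) = (l - 0.3)^2 and a trial at lam1 = 1:
# E(0) = 0.09, E'(0) = -0.6, E(1) = 0.49, so the minimizer 0.3 is recovered.
print(round(quadratic_step(0.09, -0.6, 1.0, 0.49), 6))  # prints 0.3
```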
  • computer 8 determines the values of the internal parameters of the neural network, parameters that are going to remain fixed for the subsequent use of the system: see for example the back-propagation technique of D.E. Rumelhart, G.E. Hinton and R.J. Williams, "Learning internal representations by error propagation", in D.E. Rumelhart and J.L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press, 1986, or the one-step secant technique described in the foregoing.
  • after the preliminary self-calibration phase is completed, the system enters a mode of operation corresponding to proper identification and tracking of the active writing point AP.
  • the active point AP may be represented by different entities depending on the different applications.
  • the interaction is mediated through the moving finger of the operator, and the optical sensors 3, 5 are thus capable of distinguishing the intensity (or the color - in the case of color-sensitive optical sensors) of the finger against the background of surface 7.
  • the optical sensors are preferably located in the upper zone of the active surface, supported by pods, brackets or arms 2, 4 protruding towards the operator whilst computer 8 can identify, for each optical sensor, the point of lowest intensity (below a suitable threshold) that is in the highest position in the pixel matrix: see figure 7. From the images of the left and right sensors 3 and 5, the system therefore derives the four coordinates (Lx, Ly, Rx, Ry) corresponding to the active point.
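  • The search for the active point in a sensor's pixel matrix can be sketched as below; the tie-breaking rule (darkest pixel within the topmost dark row) is an illustrative assumption:

```python
def find_active_point(matrix, threshold):
    """Return (x, y) of the topmost below-threshold pixel, or None.

    matrix: rows of intensity values, row 0 at the top of the image.
    """
    for y, row in enumerate(matrix):
        dark = [(value, x) for x, value in enumerate(row) if value < threshold]
        if dark:
            _, x = min(dark)  # darkest pixel in the topmost dark row
            return (x, y)
    return None

image = [
    [200, 200, 200, 200],
    [200, 200,  30, 200],  # fingertip image: dark pixel at x=2, y=1
    [200,  40,  35, 200],  # lower dark pixels are ignored
]
print(find_active_point(image, threshold=100))  # prints (2, 1)
```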
  • identification of active point AP may be made simpler and more reliable by the use of suitable indicators, e.g. by using, as the tracer member F, an object, such as a pen of a color which is not present in other parts of the working space.
  • a factor to be considered and dispensed with is the possible interference between the motion of the writing point and the motion of the image projected onto active surface 7.
  • the use of the system to realize a virtual pen can be considered: the user traces with a tracer element (such as a pen or a pointed extremity or a finger) a trajectory onto active surface 7 and, at the same time (with a delay which is not perceived by the user), projector 6 projects the image of this trajectory onto the same active surface.
  • Active point AP is identified (through the determination of the low-intensity point with the highest y coordinates or of the moving point with the highest coordinates) whilst the optical sensors may not distinguish between images of a physical object (i.e. the writing point) and images projected by projector 6 (the projected light beam impinging onto active surface 7) .
  • the system may thus be possibly misled in the identification of the active point, by considering as the active point one of the points of the writing projected.
  • the projected points can be of low intensity (e.g. writing of black color) in the upper portion of the image and in motion. For the sake of example let us consider in particular the projection of the last point just "written" by the user. Under the circumstances, the system may become unstable because of the following mechanism.
  • the active point AP identified by the system is the last point written on active surface 7 by the light beam projected by projector 6 with coordinates Px1 and Px2. Because of the possible inaccuracies in determination of the point, it is possible that the point is determined with certain errors, i.e. epsilon 1 and epsilon 2, leading to (Px1 + epsilon 1) and (Px2 + epsilon 2).
  • computer 8 causes projector 6 to project a minimum intensity beam with the given coordinates (assuming that writing is with a black color) . Therefore, the mechanism causes a new black point to be written on active surface 7.
  • the tracking module for determining active point AP will then detect a displacement (from Px1 and Px2 to Px1 + epsilon 1 and Px2 + epsilon 2), causing a new black point to be written, and so on.
  • the user can observe that the trajectory of writing becomes uncontrolled: the light of projector 6 writes additional points without being prompted to do so by the user.
  • interference between the writing point and the image written by projector 6 may be dispensed with by decoupling the written image from the image used to determine active point AP.
  • computer 8 maintains in its memory modules two images, denoted Image1 and Image2.
  • Image1 contains the trace of the writing trajectory, while Image2 contains a pixel matrix with uniform intensity values.
  • Two synchronization signals determine the projection by projector 6 of the two images in two different time frames.
  • Image1 is projected most of the time, while Image2 is projected only while the image is being captured by optical sensors 3 and 5. Given the uniformity of Image2, the brevity of its permanence on active surface 7 and the characteristics of temporal integration of the human visual system, the user will consciously see only Image1.
  • the images captured by the optical sensors 3 and 5 during projection of Image2 will not contain any sign of the projected writing: therefore, the determination of active point AP will not be misled by any interference.
  • the permanence time of Image2 depends on the acquisition time of the sensors. In any case, optical sensors available commercially and having acquisition times lower than a hundredth of a second were found to be thoroughly satisfactory for use within the system of the invention.
  • An alternative solution eliminates interference by distinguishing the projected writing from active point AP, for example by projecting the writing with a color and/or an intensity level different from the color and/or the intensity level of tracer member F, i.e. the pen.
  • display means can be comprised of a liquid-crystal panel, or of a display panel based on a different technology, or of a traditional computer monitor (i.e. a cathode ray tube) with appropriate size and dimensions.
  • computer 8 transmits the pixel matrix (intensity level for the possible values of the coordinates Px and Py) to the display unit instead of projector 6.
  • Optical sensors such as sensors 3 and 5 can be substituted by other means for determining the position of a tracer member, for example by detecting pressure from a pen's tip, light from a light-emitting pen or by detecting electrical changes as in "touch-screen" systems.
  • calibration of the system can be executed by projecting an image consisting of a number of calibration points (for example black points on a light background) and by clicking with the pen at the positions where the different points are projected.
  • calibration can be executed by projecting the same calibration pattern and observing an image of the calibration pattern and of the display border.
  • the optical sensing means can frame a display positioned on a table, at whatever position chosen by the user.
  • use of the apparatus together with a writing tablet is possible, where the image is projected onto the tablet, as illustrated in figure 11.
  • the position-detecting sensors may be included in the tablet (whereby no optical sensors are needed).
  • the image projected onto the tablet, after automated calibration of the system, is used as a feedback signal for the user, avoiding the need to look at a separate display (such as a computer monitor).
  • Some tablets admit the possibility of using a sheet of paper on their surface in order to provide visual feedback to the writing person.
  • the invention offers the possibility of having feedback for operations such as moving objects or "cut and paste" operations, or projecting multimedia content, which cannot be achieved with a static sheet of paper.
  • the writing tablet can contain dynamic sensing capability, including pressure, tilt and height of the pen.
  • the pen can have multiple buttons and be either corded or cordless. Calibration can be effected as described previously, in a manner that avoids the use of visual sensors. In this last case, the position of the reference points projected by the projector in the reference system of the writing tablet is given by clicking on the tablet with a suitable pen, acquiring and storing the position and proceeding as described, by machine learning techniques or by polynomial fitting techniques.
  • the use of more than two sensors is possible, to increase the spatial resolution in the case of sensors with a limited number of pixels.
  • the sensors can be mounted on a moving support, so that their orientation and position can be controlled through motors driven by computer 8. In this way, the same sensor can vary its position and/or rotation angles with respect to active surface 7.
  • the sensors can be equipped with optical systems such that the focus and the magnification can be controlled by computer 8.
  • a single optical sensor 3 can be placed on the ground facing upwards, so that the acquired image is a strip corresponding to an area close to the active surface. If the ceiling of the room is white, the presence of an object with a lower intensity makes it possible to determine that the active point (for example a hand or a pointed extremity) is touching active surface 7. Only one coordinate is available, so that the mechanism is suitable for interacting with a one-dimensional projected strip, like a one-dimensional toolbar placed horizontally.
  • the connections between the different components of the system (computer 8, projector 6, optical sensors 3, 5) can be realized with wireless technologies, e.g. through infrared or radio signals.
  • traditional image processing and enhancement techniques can be applied to the images captured by the optical sensors, to reduce the effect of statistical noise in the sensors, to improve the distribution of intensity values of the image, to identify edges, to identify moving parts, etc.
  • the accuracy achieved during the calibration phase can be increased through the projection of patterns different from simple black points, for example black circles of a certain radius.
  • the system can therefore determine the position of the center of these patterns, captured by the optical sensors, with interpolation techniques, thus obtaining a degree of positional accuracy better than the dimension of a single pixel.
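One possible realization of such sub-pixel localization (an assumed sketch, not part of the disclosure) is the intensity-weighted centroid of the captured pattern, which lands between pixel centers:

```python
def centroid(image):
    """Return the (x, y) centroid of darkness in a 2-D intensity grid.

    `image` is a list of rows of intensity values (0 = black,
    255 = white).  Weights are inverted so that a dark calibration
    mark on a light background dominates the sum.
    """
    total = 0.0
    sx = sy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            w = 255 - value          # darker pixels weigh more
            total += w
            sx += w * x
            sy += w * y
    return (sx / total, sy / total)
```

A mark whose darkness is spread over two adjacent pixels yields a fractional coordinate between them, which is exactly the sub-pixel accuracy the text refers to.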
  • Function approximation techniques different from neural networks can be used for constructing the functions defined for location of the active point. For example, lookup tables or polynomials interpolating between the data points obtained during projection of the calibration patterns can be used.
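As a minimal illustration of the polynomial alternative (an assumed sketch, not text from the disclosure), a Lagrange interpolant can map a sensor coordinate to a projector coordinate through the calibration pairs acquired while the calibration pattern is projected:

```python
def lagrange_fit(points):
    """Return a function interpolating the (sensor, projector)
    calibration pairs by the Lagrange polynomial through them."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    # basis polynomial: 1 at xi, 0 at every other xj
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f
```

A lookup table with linear interpolation between neighboring calibration points would serve the same purpose with less risk of oscillation when many points are used.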

Abstract

Apparatus for displaying on a surface (7) graphical information written by means of a tracer member (F) defining an active point includes a projector (6), optical sensors (3, 5) for detecting the location of the active point at said surface (7) as well as data processing means (8) which control operation of the projector (6) to cause the projector (6) itself to produce picture elements at the respective points of said surface (7) univocally corresponding to locations of the tracer member (F) as detected by the optical sensor means (3, 5). The processing means (8) are arranged to permit self-calibration of the system, thereby avoiding manual measurements, adjustment operations or the installation of large and rigid devices. Writing and interacting with the system is performed in a natural way, possibly using simply a finger as the tracer member (F), therefore not requiring extensive training and technical competence. Visual feedback is provided to the user (H) by the beam of light projected by projector (6).

Description

Electronic writing/display apparatus and respective method of operation Field of the invention
The present invention relates to electronic writing/display apparatus.
Background of the invention
Increasingly complex technologies have been introduced in recent years for use in audio/optical instrumentation supporting interactive sessions for teaching, educational activities, seminars, team cooperation etc.
Different modes of interacting and communicating are converging toward integrated systems, with the provision of apparatus for the presentation of information (monitor and display screens, overhead projectors, computer controlled video projectors, loudspeakers) and for the interaction between the participants and the system (keyboard and mouse interfaces, video cameras, laser and infra-red pens, surface-contact detectors, microphones, etc.). The transmission and management of information exchanged between the various subsystems of the integrated system is often handled by a computer, connected to the different apparatuses and capable of controlling them.
In particular, devices known as electronic blackboards (this designation, as used herein, being intended also to include apparatus suitable for desk use, e.g. in connection with a writing tablet) have been introduced. In these systems, traditional blackboards (blackboard and chalk, washable surface and markers) are superseded by electronic instruments for writing and displaying information. An advantage of these electronic instruments is that, in addition to reproducing the traditional use of the blackboard, they are intrinsically adapted for use in multimedia presentations, e.g. in interactive sessions.
Description of the prior art
The following patents are exemplary of the state of the art in the field of electronic blackboards. US-A-5 274 362 describes a method of determining horizontal X and vertical Y coordinates in an electronic blackboard system having two electrically conductive surfaces, at least one of which is flexible enough to permit contact between the first and the second surface, with electrodes arranged on the two surfaces at right angles to each other. Contact between the first and the second surface is determined by applying currents and measuring voltage differences at selected electrodes on the two surfaces. In addition, a separate self-calibration mode is activated if no contact is determined by the usual mechanism of the system. US-A-4 803 564 describes an electro-mechanical system to read information written on an existing writing board that may be attached to a wall. The system is comprised of a photoelectric conversion device mounted on a driving unit moving horizontally. The driving unit moves on a guide rail secured to angle irons fixed to the information writing board.
US-A-5 305 114 describes apparatus which reads a writing on a writing sheet fitted on a blackboard framework and which electronically copies the writing onto a recording medium. The apparatus includes a foldable framework, the writing sheet, a stand to support the framework and a copying device. US-A-4 858 021 describes a photoelectric device supported by a horizontal guide rail which is in turn supported by the blackboard. A drive motor and roller drive the device by direct contact of the roller with the blackboard surface. US-A-3 761 620 represents an example of a graphical input device, where a two-dimensional matrix of semiconductors is arranged in an ordered array as a flat light-emitting and light-sensing device. A penlight is used to activate the light-sensing semiconductors to achieve a graphical input. EP-A-0 372 467 describes an electrostatic blackboard with an image display function for displaying a visible toner image, including a copying apparatus for transferring the visible image on the blackboard to a large-size paper sheet. Finally, WO-A-97/06963 describes a writing board with a mechanism to erase what has been written by writing instruments using an ink. The erasing mechanism operates an erase switch to convey the recording medium and actuate a pump. The cleaning liquid in a storage container is circulated through the cleaner via tubes attached to the surface of the recording medium being conveyed.
The solutions considered in the foregoing are not satisfactory because of some problems that limit their use in certain situations and environments.
In particular, a number of specific drawbacks may be identified.
A first drawback is related to the "bothersome" presence of technology. In some cases, a certain degree of technical experience is required of the users of these systems. The interaction tools (remote controls, optical pens, etc.) tend to distract the concentration of the user from the purposes of use (interaction, information exchange, knowledge enrichment) toward the technological means. In some cases, a training phase is required; in other cases a third person (a so-called director) is required to manage the proper acquisition and presentation of multimedia information. These effects are sometimes difficult to identify for technically-oriented people but constitute a serious limit to the diffusion of these technologies among non-technical users.
Installation and calibration may also represent a source of concern. Quite frequently, installation and calibration by qualified technical personnel is required before use of the system by final users is made possible. This is the case when large and complex systems are installed, including boards to be mounted with high accuracy, high-precision lenses, etc. In other cases, systems based on traditional computer vision techniques must be calibrated through precise knowledge of the relative positions and orientations of the visual sensor and display units.
Fragility of the devices involved and the necessity of a controlled environment sometimes need to be taken into account. In some cases the devices involved are quite sensitive (for example, some optical pens have a high probability of being damaged if dropped by accident). The use of some electronic blackboards is made difficult because of constraints on the physical environment: for example, they need a large entrance door, or a large space for their installation.
Finally, lack of flexibility and transportability may represent factors militating against the current use of the equipment concerned. Some systems are rigid, for example with fixed dimensions of the blackboard surface or a single modality of interaction (for example: magnetic sensors), and cannot be adapted rapidly to different physical environments. Transportation of some kinds of apparatus is made difficult because of their size, weight and complexity of installation. Summary of the invention
The present invention aims at solving the problems outlined in the foregoing by means of a system having the features set forth in the annexed claims. The invention also relates to the respective method of operation.
In one possible embodiment, the system of the invention is comprised of two optical sensors, a projector and a computer connected to the projector and to the optical sensors.
In that sense, it will be promptly appreciated that neither a dedicated writing/display surface, nor a similarly dedicated writing tool or tracer member need necessarily be included in the system of the invention. The surface in question may be simply represented e.g. by a screen such as a traditional projection screen or simply a wall surface already available at the location where the system of the invention is used. Similarly, the tracer element may simply be e.g. a stick, a pen (including a light pen) already available to the user of the system or even just a finger of the user himself or herself. The optical sensors are arranged in order to monitor that part of the surface (in the following referred to as the active surface) on which the image is projected by the projector. The computer is preferably provided with hardware/software modules for self-calibration, for acquisition and processing of the images from the optical sensors and for controlling the projector. In one possible embodiment of the invention, interaction with the system occurs through the movement of the operator's hand. When a pointed extremity, for example a finger of the hand, representing an image distinguishable over the background of the active surface, touches or approaches that surface, the coordinates Px and Py of the contact point are calculated in a continuous manner by the system comprised of the optical sensors and the computer, and thus made available to be used as input data for the different applications installed on the computer. In one possible application (designated "virtual pen") the coordinates of the active writing point can be stored into a random-access memory (RAM) of the computer. In this way, a registration of the trajectory executed by the point is obtained, corresponding e.g. to graphical symbols or handwriting proper.
This registration can be transmitted to the projector, which projects it onto the active surface during the process of writing, thus realizing a virtual pen: the user has the sensation of writing with the light of the projector onto the active surface.
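The virtual-pen behavior just outlined can be sketched as a simple acquisition loop; the names used here (detect_active_point, project) are hypothetical placeholders for the sensing and projection stages, not elements of the original disclosure:

```python
def virtual_pen_loop(detect_active_point, project, frames):
    """Accumulate the trajectory of the active point and echo it back.

    `detect_active_point` returns (Px, Py) when the tracer member is
    in contact with the active surface, or None otherwise;
    `project` receives the full trajectory registered so far.
    """
    trajectory = []                      # registration kept in RAM
    for _ in range(frames):
        point = detect_active_point()
        if point is not None:            # contact with active surface
            trajectory.append(point)
            project(trajectory)          # user "writes with light"
    return trajectory
```

Each contact event extends the stored registration and is immediately re-projected, giving the sensation of writing with the projector's light.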
In other applications, some specific active zones of the active surface can be associated with commands to be executed when these zones are touched: in this case the finger of the hand can be used as a substitute for a peripheral such as a mouse used for selection in a computer-controlled system.
The invention thus substantially mitigates the disadvantages outlined in the foregoing. Through the solution proposed herein, the intrusive presence of technology is dispensed with, whilst interaction of the user with the system becomes more natural. Specific technical competence and training are no longer required: the user simply activates the system and initiates interaction therewith, with the capability of exclusively focusing on the training/information purposes of the session and not on the technical means. Thanks to the availability of self-calibration, the installation phase is simplified (placement of the projector and the sensors associated therewith as well as connection to the computer, without the need of precise measurements) and calibration is completely automated. The apparatus of the invention is not overly sensitive and can be used in quite diverse environments. In the preferred embodiment of the invention, what is required is primarily a surface (not necessarily a flat one) with an approximately uniform color onto which the image of the projector is projected, and the sensors that monitor that surface in order to detect the location of the tracer member. The flexibility of use of the invention is high. The same apparatus can be used on a desk, with the active surface being a part of the desk surface or an electronic drawing board, through a traditional white board, through a foldable panel for the projection or, simply, through a piece of smooth but not necessarily flat wall surface. Finally, use with a display unit is possible, to obtain a way of operation similar to touch-screen operation based on electro-magnetic principles. Portability is enhanced by the low weight of the single components. Brief description of the drawings
Some possible embodiments of the invention will now be described, for exemplary and non-limiting purposes only, in connection with the annexed drawings, wherein:
- figure 1 schematically depicts interaction between the user and a system according to the invention,
- figure 2 diagrammatically shows the matrix of pixels in a projector for use in the system of the invention, - figure 3, including four sections designated 3a to 3d, respectively, shows the definition of certain reference systems for use in the invention, - figure 4 illustrates location of the active writing point at the active surface in the system of the invention,
- figure 5, comprised of three sections designated 5a to 5c, respectively, illustrates self-calibration of the system according to the invention,
- figure 6 describes a possible use of neural networks within the framework of the invention,
- figure 7 shows determination of the active writing point in the system of the invention, - figure 8 schematically shows one way of eliminating interference between motion of the active writing point and motion of the projected image,
- figure 9 shows an alternative approach for interaction between the user and the system of the invention, - figure 10 shows a further alternative approach for interaction between the user and the system, and
- figure 11 shows a still further alternative approach to interaction between the user and the system of the invention. Detailed description of preferred embodiments of the invention
Apparatus for electronic writing/display (e.g. a so-called "electronic blackboard") according to the invention is designated 1 overall in figure 1. Apparatus 1 is intended for interaction with an operator H capable of generating graphical information (i.e. writing and/or drawing symbols/images) on a surface 7 by means of a pointed member F such as e.g. a pen, a light-pen, a stick or simply a finger. Surface 7 is shown herein as being a flat or substantially flat rectangular surface but may be notionally of any shape enabling writing or drawing and display thereon.
Apparatus 1 further includes a projector 6 for projecting graphical information onto surface 7 under the control of a processing unit 8 such as a computer.
Optical sensors 3 and 5 are arranged in order to "frame" surface 7 and generate respective signals indicative of the position of pointed member F (i.e. the tracer member), as explained in greater detail in the following. To that end, sensors 3 and 5 are preferably located at the upper corners of surface 7, still preferably by resorting to pods, brackets or arms 2 and 4 ensuring that sensors 3 and 5 at least slightly protrude towards operator H, thus being positioned at a certain distance from surface 7.
In the optical sensors 3, 5 the image is projected by the optics of the system onto a sensor matrix of dots (picture elements or pixels) composed of elementary surface portions characterized by a single level of intensity (one for each color in the case of color-sensitive sensors). The captioned sensor matrix can be conveniently comprised of a CCD element of the kind currently used in solid-state cameras. The sensed matrix of intensity values is then transmitted in the form of a respective signal to computer 8. In a similar way, projector 6 is controlled by computer 8 in order to define a matrix of intensity values, one for each pixel of the image. These values are then transformed into the projected light beams by the electronic and optical sections of the projector. A simplified illustration of the operation of projector 6 is provided in figure 2. All of the foregoing corresponds to current technology, not requiring detailed description herein. Three reference systems, illustrated in figures 3b to 3d, respectively, are contemplated for use within the framework of the invention.
The first system (L - figure 3b) is associated to the pixel matrix of the left optical sensor 3, the second system (R - figure 3d) to the matrix of the right optical sensor 5. Finally, the third system (P - figure 3c) is associated to the projection matrix of projector 6. A single pixel in the cited matrices is therefore characterized by two coordinates in the corresponding reference system. A physical point in the three-dimensional space, lying in the working space visible by both optical sensors 3, 5 and such that it can be illuminated by the light beam originating from projector 6, is therefore associated to three points in the three reference systems described in the foregoing. The physical point will thus be projected by the optical system of the left optical sensor 3 onto a point with coordinates (Lx, Ly) in the reference system L. In a thoroughly equivalent way, the same point will have two coordinates (Rx, Ry) in the reference system R. Finally, the same point will be identified by two coordinates (Px, Py) in the system P of projector 6. Px and Py are the coordinates of the pixel of the matrix to which a maximum intensity value is to be associated if a light beam originating from the lens of projector 6 and passing through the given physical point is needed. Basic principles of the invention In the following it will be assumed that the physical point is visible by both optical sensors 3, 5 and that it can be reached by a light beam originating from projector 6 without any obstacles therebetween. This point will be referred to in the following as the active point AP. Each active point in the working space of the invention has associated therewith six numerical values: Rx, Ry, Lx, Ly, Px, Py. As an example, the active point AP shown in figure 3 has associated therewith the numerical values Lx=9, Ly=7, Rx=3, Ry=6, Px=10, Py=6.
In general, these values will be floating point values, such as Lx=9.35, Ly=7.42, especially if interpolation techniques are used to obtain values with an accuracy below the size of a single pixel. The values are calculated - in a known manner - starting from the signals generated by sensors 3 and 5 and stored in a memory area such as a RAM of computer 8.
These values are related by mathematical relationships. In general, given six arbitrary numerical values, it is not always true that there is an active point represented in the three different systems L, R and P with the given numerical values. This fact is obvious if one considers that a couple of values determines a straight line in the three-dimensional space. For example, Rx and Ry determine the straight line of points in the three-dimensional space associated to Rx and Ry by the projective transformation executed by the optical system of the right-hand optical sensor. Therefore, two couples of values, for example (Rx, Ry) and (Lx, Ly), determine in a unique way two straight lines. If the two couples correspond to an active point, this point is determined by the intersection of the two straight lines. Therefore, the two remaining values (in this case Px and Py) cannot be chosen arbitrarily: in fact they are uniquely determined from the given four values.
The associations between active points and the values Rx, Ry, Lx, Ly, Px, Py are also subject to small inaccuracies due to the finite precision of the processing steps executed (in a digital manner) in the optical sensors 3 and 5 and/or in computer 8. For example, round-off errors are generated if the six coordinates are represented with integer values (therefore eliminating the digits after the decimal point in the number). Standard techniques are however available for dealing with the inaccuracies/errors thus introduced. For example, the exact intersection of two lines in the three-dimensional space is substituted with the point where the distance between the two lines is at a minimum value (actually the minimum distance criterion determines two points, one on each line, and then one calculates the point at the middle of the segment connecting the two previous points as the point of approximated intersection).
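The approximated intersection described above (the midpoint of the shortest segment between the two lines) can be sketched as follows; the vector formulation is a standard one, given here as an illustrative assumption rather than text from the disclosure:

```python
def approx_intersection(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two lines p + t*d.

    p1, d1: a point on and the direction of the first line (3-tuples);
    p2, d2: the same for the second line.  For lines that actually
    intersect, the midpoint coincides with the intersection point.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b             # zero only for parallel lines
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t1))       # closest point on line 1
    q2 = add(p2, scale(d2, t2))       # closest point on line 2
    return scale(add(q1, q2), 0.5)    # approximated intersection
```

When round-off errors make the two sensor rays skew, the midpoint gives a robust estimate of the active point's position.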
The invention is based on the determination of the event given by contact between the active point (i.e. the writing point) and the active surface 7, and on how the coordinates Px and Py in the pixel matrix of the projector 6 are calculated after the coordinates (Lx, Ly) and (Rx, Ry) of the two images of the writing point are acquired by the two optical sensors 3 and 5. In a first step of the following description, certain mathematical relationships (i.e. functions) between the coordinates Lx, Ly, Rx, Ry, Px, Py will be assumed to exist, which functions are necessary for operation of the system. Subsequently, it will be demonstrated how the transformation between the coordinates of the reference system of the optical sensors (Lx, Ly, Rx, Ry) and the projection reference system of the projector (Px, Py) can be established. Finally, the solution of the invention will be shown to be capable of deriving those functions through a mechanism of automated calibration, without the need of manual measurement operations once the optical sensors 3 and 5 and projector 6 have been installed. This fact differentiates the invention from the classical method of stereo vision assuming knowledge of the relative distances and rotation angles of the optical sensors, or where it is necessary to use a calibration pattern or structure of known dimensions in order to derive the three-dimensional position of an active point observed by two optical sensors, with respect to a fixed reference system. Stated otherwise, the person installing the system of the invention does not need any sort of "meter stick". As a first step it will be explained how the system of the invention determines the presence or absence of contact (or close proximity, see below) between the active writing point AP and the active surface 7.
Once the image of the active point AP in the working space is acquired by the sensing means, represented in the preferred embodiment by the two optical sensors 3 and 5, one already has in an implicit way the information about the position of the point. In other words, a unique list of values (Lx, Ly, Rx, Ry, Px, Py) is associated to each active point in the working space of the system and it is therefore possible to invert the function, passing from the list to the coordinates in a fixed reference system. That information can thus be made explicit through the definition of certain mathematical functions and through the construction of approximations of these functions starting from a set of examples, i.e. from a set of associations between input and output values, obtained during an initial self-calibration phase.
A simplified presentation of the foregoing may be based on the reduction to a simplified bi-dimensional model where the active point of interest is given by a black, pointwise object on a white background; for the sake of clarity, the horizontal plane passing through the working space will be considered, also assuming that the optical sensors 3 and 5 are uni-dimensional, such that a pixel position is determined only by the coordinate Lx (for the left-hand sensor) and Rx (for the right-hand sensor). The situation is schematically illustrated in figure 4. Once the coordinate Lx is fixed, the possible positions of the active point will be given by a straight line originating from the left-hand optical sensor with a given direction. The specific direction depends on Lx and on the optical system. In addition, it will be assumed that the reference systems L and R are arranged in such a way that a movement of the active point in the right direction will imply an increase of the values Lx and Rx. If, in addition to the coordinate Lx, the coordinate Rx associated with the same active point is also known, the physical location of the point can be determined from the intersection of the two straight lines (consider for example point P1 in figure 4, obtained by the intersection between the continuous straight line originating from L and one of the dashed lines originating from R).
Now, if the value Lx is fixed, there will be only a single value Rx (assuming that the active surface is sufficiently smooth) such that the active point corresponding to the intersection is in contact with the active surface. This is the case of point P2 in figure 4. In other words, using a mathematical term, there exists a function, herein designated CONTACT, that, given a point Lx, associates to it a single point Rx = CONTACT(Lx). This value is given by the Rx coordinate (in the reference system R) of a point along the straight line determined by Lx, and in contact with the active surface. In the preferred embodiment of the invention the function
CONTACT is implemented by a hardware/software module in computer 8 which, starting from the value of Lx, calculates the value Rx. Let us consider the problem of determining the presence or absence of a contact. Given an active point, computer 8 obtains the coordinates Lx and Rx of the point starting from the signals generated by optical sensors 3 and 5. Computer 8 calculates the value Rxx = CONTACT(Lx), then calculates the difference Delta = (Rx - Rxx). If this difference is greater than zero, the active point is located to the right with respect to the point in contact with the surface along the straight line determined by Lx, and therefore beyond surface 7. If the difference is less than zero, the active point is located to the left and therefore in front of the surface, towards the user. If this difference is zero, the point is in contact.
Because of the possible small inaccuracies introduced by the system when calculating the coordinates Lx and Rx, and because it is physically impossible for the active point to be located beyond the active surface 7 (at least in the case of a rigid surface, like a wall), it is sufficient to introduce a threshold value (defined as the contact threshold) and to determine the contact on the basis of a comparison of the difference Delta with this threshold. The value of the threshold can be determined when the invention is constructed, depending on the precision of the optical sensors (number of pixels) and on the requirements of the user. It can also be made selectively adjustable in order to render a condition of proximity or close proximity equivalent to contact proper (see also below). The method just described can be easily generalized to the multidimensional case of the application, therefore considering also the coordinates along the y axes of the pixel matrices of the optical sensors.
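The thresholded contact test just described can be sketched as follows, with CONTACT modelled as an arbitrary callable; this is an assumption for illustration, the disclosure realizing CONTACT as a hardware/software module in computer 8:

```python
def is_contact(lx, rx, contact_fn, threshold):
    """Return True when the active point is on (or within
    `threshold` of) the active surface.

    `contact_fn` maps Lx to the Rx value of the surface-contact
    point along the ray determined by Lx (the CONTACT function).
    """
    delta = rx - contact_fn(lx)
    # delta < 0: point in front of the surface, towards the user;
    # |delta| <= threshold: treated as contact despite the small
    # inaccuracies of the digitized coordinates.
    return abs(delta) <= threshold
```

The threshold plays the role of the contact threshold described above: raising it makes close proximity equivalent to contact proper.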
The functions of interest, which calculate the coordinates in the reference system of one optical sensor starting from the coordinates of the other sensor and from the requirement that the active point be at contact with the surface, are the following:
CONTACTLx CONTACTLy
CONTACTRx CONTACTRy
In the foregoing, the terminal part of the name is a mnemonic label whose meaning is as follows: CONTACTLx calculates the coordinate Lx (hence the label) starting from the coordinates of the right-hand sensor, CONTACTLy calculates the coordinate Ly starting from the coordinates of the right-hand sensor, etc.
To determine the presence or absence of contact (or close proximity), the system considers all four optical coordinates (Lx, Ly, Rx, Ry) associated with the active point, calculates one of the functions defined in the foregoing, and compares the difference between the value obtained by the function and the value of the corresponding coordinate with the value of the contact threshold.
As an example, considering the function CONTACTLx, the difference calculated by the system is Lx - CONTACTLx(Rx, Ry). In different embodiments of the invention, it is possible to choose one of the four functions, or to consider several functions (including all four of them) and to determine the absence/presence of the contact through an average of the Delta differences. In addition, it is possible to require that all four comparisons with the threshold are positive, or that at least a certain number of these comparisons is positive, in order to identify a contact. In this way a greater flexibility is available when the system is realized, depending on the precision characteristics of the optical sensors and the requirements of the user.
As indicated in the foregoing, one of the possible uses of the system of the invention is to realize a so-called virtual pen: the user moves a tracer member (e.g. a finger of the hand or a stick with a color different from the background), thus defining an active writing point, and a projector projects a light beam in correspondence to the position subsequently identified by the tracer member. In this case, the user has the illusion of writing directly with the light of the projector onto the active surface.
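The multi-function variants (voting over the four comparisons, or averaging the Delta differences) can be sketched as follows; the threshold value and the vote count are illustrative choices, and the four predictor functions themselves are omitted:

```python
# Contact decision combining the four CONTACT* comparisons.
# Each delta is (observed coordinate) - (coordinate predicted under
# the contact hypothesis), e.g. Lx - CONTACTLx(Rx, Ry); a small
# |delta| supports "contact". Only the decision rules are sketched.

THRESHOLD = 2.0  # illustrative contact threshold, in pixels

def is_in_contact(deltas, threshold=THRESHOLD, min_votes=3):
    """Voting rule: contact is declared when at least `min_votes`
    of the four comparisons pass the threshold."""
    votes = sum(1 for d in deltas if abs(d) <= threshold)
    return votes >= min_votes

def is_in_contact_avg(deltas, threshold=THRESHOLD):
    # Alternative rule: average the absolute deltas, single test.
    return sum(abs(d) for d in deltas) / len(deltas) <= threshold

print(is_in_contact([0.5, -1.0, 1.5, 8.0]))  # 3 of 4 pass -> True
print(is_in_contact([5.0, -6.0, 7.0, 8.0]))  # none pass -> False
```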
For this use it is necessary to pass from the coordinates of the active point (in this case of the writing point) in the reference systems R and L of the optical sensors to the coordinates in the reference system P of the projector. Because the writing system is activated only when contact (or proximity) between the writing point and the active surface is detected, only the knowledge of a pair of coordinates is required, for example (Lx, Ly) or (Rx, Ry), to determine in a unique way the values Px and Py. The invention therefore provides for calculating the corresponding functions, denoted as:
PROJECTPxL PROJECTPyL PROJECTPxR PROJECTPyR
The notation indicates which coordinate is calculated (Px or Py) and which are the input coordinates (those of the left optical sensor or those of the right optical sensor) . Let us assume that the system has stored in memory a list of coordinates
L1x L1y R1x R1y P1x P1y
L2x L2y R2x R2y P2x P2y
L3x L3y R3x R3y P3x P3y
corresponding to points 1, 2, 3, ... on the projection surface (how these coordinates can be obtained is illustrated in the following section dedicated to the automated calibration process). Let us consider one of the functions that are needed for the proper operation of the system, like PROJECTPxL. A smooth approximation of the function can be obtained by starting from examples of input values (Lx, Ly) and output value Px, corresponding to the different points:
input: L1x L1y  output: P1x
input: L2x L2y  output: P2x
input: L3x L3y  output: P3x
A machine-learning system can use the above examples to generalize the association between inputs and outputs, in this way producing an approximation of the desired function. As explained in greater detail in the following, the above functions may be realized through a mechanism of machine learning from examples based on the use of neural nets.
Automated calibration through machine learning
In the foregoing, some functions were assumed to exist in order to detect positioning of the active point at the active surface (functions CONTACTLx, CONTACTLy, CONTACTRx and CONTACTRy) and to transform the coordinates of the optical sensors in the reference systems L or R into the coordinates of the pixel matrix of the projector in the reference system P (functions PROJECTPxL, PROJECTPyL, PROJECTPxR, and PROJECTPyR).
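As an illustration of fitting such a map from calibration examples, the sketch below recovers an affine stand-in for PROJECTPxL by linear least squares, solved exactly via the normal equations. This is a simplifying assumption: the preferred embodiment uses a neural network precisely because the real map may be non-linear, and `fit_affine` and `solve3` are hypothetical helper names:

```python
# Least-squares fit of an affine model px ~= a*lx + b*ly + c from
# calibration samples ((lx, ly), px). The affine form is only a
# stand-in for the learned PROJECTPxL function.

def solve3(m, v):
    # Solve the 3x3 linear system m @ [a, b, c] = v by Cramer's rule.
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det3(m)
    out = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        out.append(det3(mc) / d)
    return out

def fit_affine(samples):
    # Accumulate the normal equations of the least-squares problem.
    sxx = sxy = sx = syy = sy = n = 0.0
    vx = vy = v1 = 0.0
    for (lx, ly), px in samples:
        sxx += lx * lx; sxy += lx * ly; sx += lx
        syy += ly * ly; sy += ly; n += 1.0
        vx += px * lx; vy += px * ly; v1 += px
    return solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                  [vx, vy, v1])

# Synthetic calibration data from a known map, to check the fit.
data = [((lx, ly), 2.0 * lx + 0.5 * ly + 3.0)
        for lx in (0.0, 10.0, 20.0) for ly in (0.0, 10.0, 20.0)]
print([round(v, 2) for v in fit_affine(data)])  # -> [2.0, 0.5, 3.0]
```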
In a preferred embodiment of the invention, these functions are constructed in an automated way during a preliminary setup phase. During that phase (schematically depicted in figures 5a, 5b and 5c), projector 6 projects one after another, under control from computer 8, a series of images comprised e.g. of a single contrasting point against the background of surface 7, e.g. a single black point on a white background. The positions of the black point in the different images are obtained by varying the coordinates Px and Py of the pixel matrix of projector 6 so that they assume values corresponding to a set of points that covers the matrix in a uniform way. As an example, if the pixel matrix of the projector (figure 5b) is comprised of 512 x 512 pixels, with values Px and Py ranging from 0 to 511, the values considered for the couples (Px, Py) could be (0,0), (0,16), (0,32), ..., (16,0), (16,16), (16,32), ..., (511,511). In this way 32 x 32 = 1024 points are obtained. Each image is acquired by the optical sensors 3 and 5 and transmitted to computer 8, which, starting from the pixel matrices acquired by both optical sensors, calculates the coordinates (Lx, Ly) and (Rx, Ry) of the pixel with the lowest intensity present in the pixel matrix associated with the left and right sensors, corresponding to the image of the point projected (see figures 5a and 5c).
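The per-sensor search for the projected calibration point can be sketched as a minimum-intensity scan of the pixel matrix (a toy image is used here; in the real system the matrices come from sensors 3 and 5):

```python
# Locate the projected calibration point in a sensor image by
# finding the pixel with the lowest intensity, as computer 8 does
# for each of the two optical sensors during calibration.

def darkest_pixel(image):
    """image: 2D list of intensities (rows of pixels).
    Returns (x, y) of the minimum-intensity pixel."""
    best_xy, best_val = (0, 0), image[0][0]
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if val < best_val:
                best_val, best_xy = val, (x, y)
    return best_xy

# Toy 4x4 image: background 255, one dark calibration point.
img = [[255] * 4 for _ in range(4)]
img[1][2] = 12
print(darkest_pixel(img))  # -> (2, 1)
```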
After projection of a single image with a black point with coordinate (Px, Py) in the reference system of projector 6, coordinates that are known to the program that generates the images, the corresponding coordinates (Lx, Ly) and (Rx, Ry) are therefore obtained.
Computer 8 stores the series of data (Px, Py, Lx, Ly, Rx, Ry) e.g. in a RAM area provided therein. After all the images have been projected, these data are used by a hardware/software component to construct the required functions.
In a first possible embodiment, the required functions can be calculated with traditional fitting techniques based on polynomials with a degree sufficiently high to ensure that non-linearities in the system are compensated. Such a method is known, e.g. from W.H. Press, B.P. Flannery, S.A. Teukolsky and W.T. Vetterling, Numerical Recipes in C, Cambridge University Press, 1988.
In a second possible embodiment, the required functions are approximated through flexible representations, also known as "networks of functions" or "neural networks". These representations are constructed starting from a set of examples by machine learning mechanisms. Some examples of these networks of functions are illustrated in figures 6a and 6b. It is known that neural nets can be used to realize computational architectures that compute the solution to a problem (in the instant case the problem is that of determining the above described functions) starting from a set of input-output associations that have been stored into memory and used for an automated training phase.
A fast and effective technique for the training phase (the "One-Step Secant" or OSS method) is described in R. Battiti, "First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method", Neural Computation, Vol. 4, No. 2, pp. 141-166, 1992.
One starts by considering a function E to minimize, given by the sum over all training examples of the squared difference between the output values obtained by the network (values depending on the connection weights x) and the target output values (obtained from the database of examples). If the examples are numbered from 1 to P, the output neurons from 1 to N, the value obtained on output neuron j for example p is denoted O_pj(x) and the target output value for the same neuron and example is denoted T_pj, the function is:

E(x) = Sum_{p=1..P} Sum_{j=1..N} (O_pj(x) - T_pj)^2    (I)
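Expressed as code, this error is a straightforward double sum over examples and output neurons; `sse` is an illustrative name and the toy numbers are for demonstration only:

```python
# Sum-of-squared-errors over P examples and N output neurons.
# outputs[p][j] plays the role of O_pj(x); targets[p][j] of T_pj.

def sse(outputs, targets):
    return sum((o - t) ** 2
               for out_row, tgt_row in zip(outputs, targets)
               for o, t in zip(out_row, tgt_row))

# Two examples, two output neurons: errors are 1 and 1 -> total 2.
print(sse([[1.0, 2.0], [3.0, 4.0]],
          [[1.0, 1.0], [2.0, 4.0]]))  # -> 2.0
```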
Training is executed by minimizing the above function with respect to its parameters ("weights") x: as smaller values are obtained, the output values and the target values tend to become similar. It is also important to ensure that the total number of parameters is limited, so that the network can generalize in an appropriate way to new cases not considered during training. The one-step secant method OSS is a variation of what is called the one-step (memory-less) Broyden-Fletcher-Goldfarb-Shanno method.
The one-step method requires only vectors computed from gradients g of the function E. The new search direction at a given iteration, p+, is obtained as:

p+ = -g_c + A_c s_c + B_c y_c    (II)

where the two scalars A_c and B_c are the following combinations of scalar products of the previously defined vectors s_c, g_c and y_c (last step, gradient and difference of gradients):

A_c = -(1 + (y_c^T y_c)/(s_c^T y_c)) (s_c^T g_c)/(s_c^T y_c) + (y_c^T g_c)/(s_c^T y_c)
B_c = (s_c^T g_c)/(s_c^T y_c)    (III)

The search direction is the negative gradient at the beginning of learning and it is restarted to -g_c every N steps (N being the number of weights in the network). The one-step secant algorithm can thus be described in the form of the program excerpt reproduced in the table below.
TABLE 1 - The one-step secant algorithm
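A minimal sketch of the one-step secant direction, computing p = -g + A·s + B·y from scalar products of the last step s, the current gradient g and the gradient difference y (function and variable names are illustrative, and no restart logic is included):

```python
# One-step secant search direction from plain Python vectors.
# s: last step, g: current gradient, y: difference of gradients.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def oss_direction(g, s, y):
    sy = dot(s, y)  # assumed non-zero (curvature along the step)
    sg = dot(s, g)
    yg = dot(y, g)
    yy = dot(y, y)
    A = -(1.0 + yy / sy) * (sg / sy) + yg / sy
    B = sg / sy
    # p = -g + A*s + B*y, component by component
    return [-gi + A * si + B * yi for gi, si, yi in zip(g, s, y)]

print(oss_direction([1.0, 0.0], [1.0, 1.0], [0.0, 1.0]))
# -> [-3.0, -1.0]
```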
procedure oss_minimize
begin
  begin_or_restart
  Initialize the learning rate ε := 10^-5.
  Initialize the average learning rate ε_avg := 10^-5.
  Set w_curr := random initial weights
  iterations := 1
  while convergence_criterion_is_not_satisfied do
  begin
    if iterations_is_a_multiple_of_N then begin_or_restart
    iterations := iterations + 1
    d := find_search_direction (see equation (II))
    if fast_line_search(d) = FALSE then begin_or_restart
  end
end

procedure begin_or_restart
begin
  Find the current energy value.
  ε := ε_avg
  d := -g
  fast_line_search(d)
end

function fast_line_search(d)
comment: Search for a new weight vector starting from the current
  configuration w_curr and moving along direction d. d_l is the
  directional derivative of E along d. G_decr is a constant equal to
  0.5, used to multiply the directional derivative. The constant
  MAX_TRIALS is equal to 10. L_incr is a constant equal to 1.1.
  L_decr is a constant equal to 0.5.
begin
  d_l := g · d
  If d is not a descent direction (d_l > 0), reset it to the
  negative gradient:
  if d_l > 0 then begin d := -g; d_l := g · d end
  Save the value of E and the gradient corresponding to w_curr:
  E_saved := E; ε := L_incr · ε
  ok := FALSE; trials := 0
  repeat
  begin
    trials := trials + 1
    w := w_curr + ε · d
    E := E(w)
    if E < (E_saved + G_decr · d_l · ε) then
      ok := TRUE
    else
    begin
      ε_quad := parabola_minimizer(E_saved, d_l, ε, E)
      w := w_curr + ε_quad · d
      E := E(w)
      if E < (E_saved + G_decr · d_l · ε_quad) then
      begin
        ok := TRUE
        ε := ε_quad
      end
      else
        ε := L_decr · ε
    end
  end
  until ok = TRUE or trials > MAX_TRIALS
  if ok = TRUE then
  begin
    w_curr := w
    s := ε · d
    g := ∇E(w) (only the backward pass of backpropagation is needed at this point)
    ε_avg := 0.9 · ε_avg + 0.1 · ε
  end
  fast_line_search := ok
end

The fast one-dimensional minimization along the direction p_c is significant in obtaining an efficient algorithm. The one-dimensional search is based on the "backtracking" strategy. The last successful learning rate λ is increased (λ ← λ × 1.1) and the first tentative step is executed. To use the same notation as that of Table 1, E ("energy") represents the function to be optimized. If the new value E is not below the "upper-limiting" curve, then a new tentative step is tried by using successive quadratic interpolations until the requirement is met. The learning rate is decreased by L_decr after each unsuccessful trial. Quadratic interpolation does not waste computation: in fact, after the first trial one has exactly the information that is needed to fit a parabola, namely the values of E_0 and E'_0 at the initial point and the value of E_λ at the trial point. The parabola P(λ) is:
P(λ') = E_0 + E'_0 λ' + ((E_λ - E_0 - λ E'_0) / λ^2) λ'^2    (IV)

and the minimizer λ_min is

λ_min = -E'_0 λ^2 / (2 (E_λ - E_0 - λ E'_0)) ≤ λ / (2 (1 - G_decr))    (V)

If the "gradient-multiplier" G_decr in Table 1 is 0.5, the λ_min that minimizes the parabola is less than λ.
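A sketch of the quadratic-interpolation step used in the backtracking search: given E_0, the directional derivative E'_0, the trial step λ and the value E_λ, the minimizer of the fitted parabola is computed in closed form (`parabola_minimizer` is an illustrative name):

```python
# Minimizer of the parabola fitted from the value and directional
# derivative at the current point plus the value at a trial step.

def parabola_minimizer(e0, d0, lam, e_lam):
    """e0: E at the current point; d0: directional derivative E'_0
    (negative along a descent direction); lam: trial step length;
    e_lam: E at the trial point. Returns the step minimizing the
    fitted parabola."""
    return -d0 * lam * lam / (2.0 * (e_lam - e0 - lam * d0))

# Sanity check on a true quadratic E(t) = (t - 1)^2: at t = 0,
# E0 = 1 and E'0 = -2; a trial step lam = 2 gives E_lam = 1.
# The true minimizer t = 1 is recovered exactly.
print(parabola_minimizer(1.0, -2.0, 2.0, 1.0))  # -> 1.0
```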
After the self-calibration phase based on machine learning, computer 8 determines the values of the internal parameters of the neural network, parameters that are going to remain fixed for the subsequent use of the system: see for example the back-propagation technique of D.E. Rumelhart, G.E. Hinton and R.J. Williams, "Learning internal representations by error propagation", in D.E. Rumelhart and J.L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press, 1986, or the one-step secant technique described in the foregoing. The flexibility of the approximation realized with neural networks makes it possible to realize functions with non-linear terms, so that the possible lack of linearity caused by low precision of the optical sensors, or by the possible curvature of the active surface, can be compensated. The same result cannot be obtained by using traditional techniques based on linear or affine transformations.
After the preliminary self-calibration phase is completed, the system enters a mode of operation corresponding to proper identification and tracking of the active writing point AP.
In practice, the active point AP may be represented by different entities depending on the different applications. For example, in one application the interaction is mediated through the moving finger of the operator, and the optical sensors 3, 5 are thus capable of distinguishing the intensity (or the color, in the case of color-sensitive optical sensors) of the finger against the background of surface 7. The optical sensors are preferably located in the upper zone of the active surface, supported by pods, brackets or arms 2, 4 protruding towards the operator, whilst computer 8 can identify, for each optical sensor, the point of lowest intensity (below a suitable threshold) that is in the highest position in the pixel matrix: see figure 7. From the images of the left and right sensors 3 and 5, the system therefore derives the four coordinates (Lx, Ly, Rx, Ry) corresponding to the active point. These coordinates can be used for identifying the location of the active point AP at the active surface, i.e. actual contact or close proximity.
In that respect, in the detailed description provided in the foregoing, reference has been made primarily, for the sake of simplicity, to contact between active point AP and active surface 7. However, those skilled in the art will promptly appreciate that actual contact is not an absolute requirement for determining the location of active point AP at the active surface 7. Proper definition of e.g. the threshold functions considered in the foregoing permits achieving the same results simply by placing the tracer element in proximity of active surface 7, without actually "touching" that surface. The degree of proximity required can be made adjustable by rendering the captioned threshold functions correspondingly adjustable. Whilst in systems for desk and/or tablet use such as shown, e.g.
in figures 10 and 11 actual contact of tracer member F with active surface 7 is usually preferred (as this achieves a closer simulation of a traditional writing action), in systems intended for use with e.g. large projection screens or wall projections, adjusting the threshold functions for proximity values of, say, some centimeters may represent a useful option. This is particularly true for those applications where projection takes place on an existing screen or surface (e.g. a wall) and/or direct contact by the tracer member (such as a user's finger) may give rise to an undesirable effect such as deformation and/or staining of the projection surface. Also, in the case a light pen is used as the tracer member, "contact" is evidently to be understood as the virtual contact deriving from projection onto active surface 7 of the contrasting spot (constituting the "active point") produced by the light pen.
In the case the background of the working space (i.e. active surface 7) is not easily filterable, for example in the case of a background with texture, it is possible to use a movement-detection module to distinguish the static from the moving parts of the image. This module computes the differences, pixel by pixel, between two images in different time frames. Values different from zero indicate the presence of a moving object. Such techniques are well known in the art, e.g. in radar techniques to determine the position of moving targets, and need not be described in detail here. In any case, irrespective of the different method adopted for distinguishing tracer member F over active surface 7, processing of the signals representing position coordinates is identical to that described in the foregoing.
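The movement-detection module based on pixel-by-pixel differences can be sketched as follows (the function name and the small noise threshold, added to tolerate sensor noise, are illustrative):

```python
# Frame differencing: mark the moving parts of the image by
# comparing two frames taken in different time frames.

def moving_mask(frame_a, frame_b, noise=4):
    """frames: 2D lists of intensities of equal shape.
    Returns a 2D 0/1 mask; 1 marks a pixel that changed by more
    than the noise threshold (i.e. a moving object)."""
    return [[1 if abs(a - b) > noise else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

f1 = [[100, 100], [100, 100]]
f2 = [[100, 100], [30, 100]]  # one pixel changed: an object moved
print(moving_mask(f1, f2))    # -> [[0, 0], [1, 0]]
```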
Similarly, identification of active point AP may be made simpler and more reliable by the use of suitable indicators, e.g. by using, as the tracer member F, an object, such as a pen of a color which is not present in other parts of the working space.
A factor to be considered and dispensed with is the possible interference between the motion of the writing point and the motion of the image projected onto active surface 7.
As an example, the use of the system to realize a virtual pen can be considered: the user traces with a tracer element (such as a pen or a point extremity or a finger) a trajectory onto active surface 7 and, at the same time (with a delay which is not perceived by the user) , projector 6 projects the image of this trajectory onto the same active surface.
Active point AP is identified (through the determination of the low-intensity point with the highest y coordinates or of the moving point with the highest coordinates), whilst the optical sensors may not distinguish between images of a physical object (i.e. the writing point) and images projected by projector 6 (the projected light beam impinging onto active surface 7). The system may thus possibly be misled in the identification of the active point, by considering as the active point one of the points of the projected writing. In fact, the projected points too can be of low intensity (e.g. writing of black color), in the upper portion of the image and in motion. For the sake of example, let us consider in particular the projection of the last point just "written" by the user. Under the circumstances, the system may become unstable because of the following mechanism.
Let us assume that the active point AP identified by the system is the last point written on active surface 7 by the light beam projected by projector 6, with coordinates Px1 and Px2. Because of the possible inaccuracies in determination of the point, it is possible that the point is determined with certain errors, i.e. epsilon 1 and epsilon 2, leading to (Px1 + epsilon 1) and (Px2 + epsilon 2). During operation as a virtual pen, computer 8 causes projector 6 to project a minimum intensity beam with the given coordinates (assuming that writing is with a black color). Therefore, the mechanism causes a new black point to be written on active surface 7. The tracking module for determining active point AP will then detect a displacement (from Px1 and Px2 to Px1 + epsilon 1 and Px2 + epsilon 2), thus causing a new black point to be written, etc. In the worst case, the user can observe that the trajectory of writing becomes uncontrolled: the light of projector 6 writes additional points without being prompted to do so by the user.
Of course, that effect is undesirable and must be dispensed with for correct operation of the system.
Essentially, interference between the writing point and the image written by projector 6 may be dispensed with by decoupling the written image from the image used to determine active point AP.
In a first possible embodiment, as illustrated in figure 8, computer 8 maintains in its memory modules two images, denoted Image1 and Image2. Image1 contains the trace of the writing trajectory, while Image2 contains a pixel matrix with uniform intensity values. Two synchronization signals determine the projection by projector 6 of the two images in two different time frames.
Image1 is projected most of the time, while Image2 is projected only while the image is being captured by optical sensors 3 and 5. Given the uniformity of Image2, the brevity of its permanence on active surface 7 and the characteristics of temporal integration of the human visual system, the user will see in a conscious way only Image1.
Conversely, the images captured by the optical sensors 3 and 5 during projection of Image2 will not contain any sign of the projected writing: therefore, the determination of active point AP will not be misled by any interference. The permanence time of Image2 depends on the acquisition time of the sensors. In any case, optical sensors available commercially and having acquisition times lower than a hundredth of a second were found to be thoroughly satisfactory for use within the system of the invention.
An alternative solution provides for interference being dispensed with by distinguishing the projected writing from active point AP for example by projecting the writing with a color and/or an intensity level that are different from the color and/or the intensity level of tracer member F, i.e. the pen.
Alternative embodiments of the invention
Possible alternative embodiments of the invention provide for substituting projector 6 with other display means positioned in the working space and seen by the optical sensors, as illustrated in figure 9. For example, the display means can be comprised of a liquid-crystal panel, or a display panel based on a different technology, or a traditional computer monitor (i.e. a cathode ray tube) with appropriate size and dimensions. In this case, computer 8 transmits the pixel matrix (intensity level for the possible values of the coordinates Px and Py) to the display unit instead of projector 6.
Optical sensors such as sensors 3 and 5 can be substituted by other means for determining the position of a tracer member, for example by detecting pressure from a pen's tip, light from a light-emitting pen, or by detecting electrical changes as in "touch-screen" systems. In this case, calibration of the system can be executed by projecting an image consisting of a number of calibration points (for example black points on a light background) and by clicking with the pen at the positions where the different points are projected.
If a single optical sensor is used, calibration can be executed by projecting the same calibration pattern and observing an image of the calibration pattern and of the display border.
Desk use of the apparatus of the invention is possible, where the image is projected onto a portion of a desk as illustrated in figure 10. Alternatively, the optical sensing means can frame a display positioned on a table, at whatever position chosen by the user.
The use of the apparatus together with a writing tablet is possible, where the image is projected onto the tablet, as illustrated in figure 11. In this case, the position- detecting sensors may be included in the tablet (whereby no optical sensors are needed) . The image projected onto the tablet, after automated calibration of the system, is used as a feedback signal for the user, avoiding the need to look at a separate display (such as a computer monitor) . Some tablets admit the possibility of using a sheet of paper on their surface in order to provide visual feedback to the writing person.
The invention offers the possibility to have a feedback for operations such as moving objects or "cut and paste" operations, or projecting multimedia content, which cannot be achieved with a static sheet of paper. The writing tablet can contain dynamic sensing capability, including pressure, tilt and height of the pen. The pen can have multiple buttons and be either corded or cordless. Calibration can be effected as it was described previously in a manner that avoids the use of visual sensors. In this last case, the position of the reference points projected by the projector in the reference system of the writing tablet is given by clicking on the tablet with a suitable pen, acquiring and storing the position and proceeding as it was described by machine learning techniques or by polynomial fitting techniques.
The use of more than two sensors is possible, to increase the spatial resolution in the case of sensors with a limited number of pixels. The sensors can be mounted on a moving support, so that their orientation and position can be controlled through motors driven by computer 8. In this way, the same sensor can vary its position and/or rotation angles with respect to active surface 7. In addition, in the case of optical sensors, these can be equipped with optical systems such that the focus and the magnification can be controlled by computer 8.
The use of a single sensor is possible to provide a simplified apparatus, as illustrated in figure 12. For example, a single optical sensor 3 can be placed on the ground facing upwards, so that the acquired image is a strip corresponding to an area close to the active surface. If the ceiling of the room is white, the presence of an object with a lower intensity permits determining that the active point (for example a hand or a pointed extremity) is touching the active surface 7. Only one coordinate is available, so that the mechanism is suitable for interacting with a one-dimensional projected strip, like a one-dimensional toolbar placed horizontally. The connections between the different components of the system (computer 8, projector 6, optical sensors 3, 5) can be realized with wireless technologies, e.g. through infrared or radio signals.
The use of traditional image processing and enhancement techniques for the images captured by the optical sensors can be added, to reduce the effect of statistical noise in the sensors, to ameliorate the distribution of intensity values of the image, to identify edges, to identify moving parts, etc. The accuracy achieved during the calibration phase can be increased through the projection of patterns different from simple black points, for example black circles of a certain radius. The system can therefore determine the position of the center of these patterns, captured by the optical sensors, with interpolation techniques, thereby obtaining a degree of positional accuracy better than the dimension of a single pixel. Function approximation techniques different from neural networks can be used for constructing the functions defined for location of the active point. For example, lookup tables or polynomials interpolating between the data points obtained during projection of the calibration patterns can be used.

Claims

1. Apparatus for displaying on a surface (7) graphical information generated by means of a tracer member (F) defining an active point (AP), characterized in that it includes:
- sensor means (3, 5) adapted to be associated with said surface (7) to define at least one respective spatial reference system (Lx, Ly; Rx, Ry) at said surface (7) and detect the location of said active point (AP) within said spatial reference system to generate signals indicative of said location,
- display means (6) adapted for producing at said surface (7) picture elements located at respective points (Px, Py) of a respective display reference system, and - processing means (8) sensitive to said signals generated by said sensor means (3, 5) for controlling said display means (6) to cause said display means to produce picture elements at respective points (Px, Py) of said respective display reference system univocally corresponding to the location of said active point (AP) at said surface (7) as detected by said sensor means (3, 5), whereby said picture elements convey said graphical information.
2. The apparatus of claim 1, characterized in that said processing means (8) are arranged to memorize a sequence of subsequent locations of said active point (AP) at said surface (7) defining a writing trajectory of said tracer member (F) and for controlling said display means (6) to produce a corresponding sequence of picture elements at a respective sequence of points (Px, Py) of said respective display reference system reproducing said writing trajectory at said surface (7) .
3. The apparatus of claim 1 or claim 2, characterized in that said processing means (8) are arranged in order to:
- cause said display means (6) to produce at said surface (7) calibration graphical information comprised of pre-defined patterns of said picture elements,
- cause said sensor means (3, 5) to detect at least some of the picture elements of said pre-defined patterns and generate respective calibration signals conveying information as to the location of said at least some picture elements within said at least one spatial reference system (Lx, Ly, Rx, Ry) , and
- correlating said calibration graphical information and said respective calibration signals to derive input-output maps relating said at least one spatial reference system of said sensor means (3, 5) to said respective display reference system of said display means (6) .
4. The apparatus of claim 3, characterized in that said processing means (8) are arranged to correlate said calibration graphical information and said respective calibration signals by means of fitting polynomials.
5. The apparatus of claim 3, characterized in that said processing means (8) are arranged to correlate said calibration graphical information and said respective calibration signals by neural networks.
6. The apparatus of any of claims 1 to 5, characterized in that said processing means (8) are arranged to identify the location of said active point (AP) at said surface (7) starting from signals indicative of motion of said active point (AP) with respect to said surface (7) .
7. The apparatus of any of claims 1 to 6, characterized in that said processing means (8) are provided to de-correlate said sensor means (3, 5) and said display means (6), thus preventing said sensor means (3, 5) from detecting any picture elements produced by said display means (6) at said surface (7) as indicative of the location of said active point (AP) .
8. The apparatus of claim 7, characterized in that said processing means (8) are arranged to activate said display means (6) at times different from the times when said sensor means (3, 5) detect the location of said active point (AP) at said surface (7) .
9. The apparatus of claim 7, characterized in that at least one of said display means (6) and said sensor means (3, 5) are arranged in order that said picture elements at said surface (7) are produced with at least one of a color and an intensity level different from at least a respective one of the color and the intensity level of said active point (AP) as detected by said sensor means (3, 5) .
10. The apparatus of any of claims 1 to 9, characterized in that said processing means (8) are arranged to compare said signals generated by said sensor means (3, 5) with at least one threshold value in order to determine, depending on the result of comparison, the location of said active point (AP) at said surface (7) .
11. The apparatus of claim 10, characterized in that said at least one threshold is made selectively adjustable.
12. The apparatus of any of claims 1 to 11, characterized in that it includes at least two distinct sensor means (3, 5) adapted to be associated with two respective regions of said surface (7) to define said at least one spatial reference system (Lx, Ly; Rx, Ry) as a multi-dimensional reference system.
13. The apparatus of claim 1 or claim 11, characterized in that said sensor means (3, 5) have associated therewith support means (2, 4) for arranging said sensor means (3, 5) at least partially protruding with respect to said surface (7).
14. The apparatus of any of claims 1 to 13, characterized in that said sensor means (3, 5) are selected from the group consisting of:
- optical sensors sensitive to the image of said tracer member (F) against said surface (7),
- radiation sensors sensitive to radiation emitted by said tracer member (F),
- pressure sensors sensitive to pressure exerted by said tracer member (F) against said surface, and
- electrical sensors sensitive to electrical changes induced at said surface (7) by said tracer member (F).
15. The apparatus of any of claims 1 to 14, characterized in that said display means are selected from the group consisting of:
- projector means (6) for projecting said picture elements at said surface (7), and
- active surface panel means provided with surface modifying means to produce said picture elements as respective local modifications of appearance of said active surface.
16. The apparatus of claim 15, characterized in that said active surface panel is selected from the group consisting of a liquid crystal panel and a cathode ray tube screen.
17. A method for displaying graphical information, characterized in that it includes the steps of:
- providing a surface (7) as well as a tracer member (F) defining an active point (AP) for generating said graphical information,
- defining at least one respective spatial reference system (Lx, Ly; Rx, Ry) at said surface (7) and detecting the location of said active point (AP) within said spatial reference system to generate signals indicative of said location,
- producing at said surface (7) picture elements located at respective points (Px, Py) of a respective display reference system, univocally corresponding to the location of said active point (AP) at said surface (7), whereby said picture elements convey said graphical information.
18. The method of claim 17, characterized in that it includes the steps of:
- memorizing a sequence of subsequent locations of said active point (AP) at said surface (7) defining a writing trajectory of said tracer member (F), and
- producing a corresponding sequence of picture elements at points (Px, Py) of said respective display reference system reproducing said writing trajectory at said surface (7).
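The record-and-replay steps of claims 17 and 18 can be sketched as follows. This is a minimal illustration only: the names (`WritingTrajectory`, `map_to_display`) and the affine sensor-to-display map are assumptions, since the claims do not prescribe a particular mapping.

```python
# Illustrative sketch of claims 17-18: memorize subsequent active-point
# locations and replay them as picture elements in display coordinates.

def map_to_display(x, y, scale=1.0, offset=(0.0, 0.0)):
    """Map a location in the spatial reference system to a display
    point (Px, Py); a simple affine map stands in for the input-output
    maps that calibration would derive."""
    return (x * scale + offset[0], y * scale + offset[1])

class WritingTrajectory:
    """Memorizes the sequence of active-point locations (claim 18)."""

    def __init__(self):
        self.points = []

    def record(self, x, y):
        self.points.append((x, y))

    def replay(self):
        """Produce the corresponding sequence of display points,
        reproducing the writing trajectory."""
        return [map_to_display(x, y) for x, y in self.points]

traj = WritingTrajectory()
for p in [(0, 0), (1, 2), (2, 4)]:
    traj.record(*p)
print(traj.replay())   # [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
```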
19. The method of claim 17 or claim 18, characterized in that it includes the steps of:
- producing at said surface (7) calibration graphical information comprised of pre-defined patterns of said picture elements,
- detecting at least some of the picture elements of said pre-defined patterns,
- generating respective calibration signals conveying information as to the location of said at least some picture elements within said at least one spatial reference system (Lx, Ly; Rx, Ry), and
- correlating said calibration graphical information and said respective calibration signals to derive input-output maps relating said at least one spatial reference system to said display reference system.
20. The method of claim 19, characterized in that it includes the step of correlating said calibration graphical information and said respective calibration signals by means of fitting polynomials.
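The calibration of claims 19 and 20 can be sketched as below: project targets at known display positions, detect their sensor-space locations, then fit a polynomial relating the two. A degree-1 (affine) fit per axis is used here as the simplest instance of "fitting polynomials"; the claim does not fix the degree, and the detection values are invented.

```python
# Illustrative sketch of claims 19-20: least-squares polynomial fit
# relating detected sensor coordinates to known display coordinates.

def polyfit1(xs, ys):
    """Closed-form least-squares fit of ys ≈ a * xs + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

display_x = [100, 200, 300, 400]        # known target positions (Px)
sensor_x = [12.0, 22.0, 32.0, 42.0]     # invented detections (Lx)
a, b = polyfit1(sensor_x, display_x)
print(round(a, 3), round(b, 3))   # 10.0 -20.0
```

A real system would repeat the fit for each axis and typically use a higher-order polynomial to absorb lens and geometry distortion.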
21. The method of claim 19, characterized in that it includes the step of correlating said calibration graphical information and said respective calibration signals by neural networks.
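Claim 21 replaces the fixed polynomial with a neural network. As a minimal sketch, a single linear unit trained by gradient descent stands in for the network, on coordinates normalized to small values; a practical system would use a small multi-layer network to absorb nonlinear distortion. All data values are invented.

```python
# Illustrative sketch of claim 21: learn the sensor-to-display
# correlation from calibration pairs by gradient descent.

def train_unit(xs, ys, lr=0.5, epochs=2000):
    """Fit y ≈ w * x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

sensor = [0.1, 0.3, 0.5, 0.9]       # normalized detections
display = [0.7, 1.1, 1.5, 2.3]      # targets generated by y = 2x + 0.5
w, b = train_unit(sensor, display)
print(round(w, 4), round(b, 4))   # 2.0 0.5
```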
22. The method of any of claims 17 to 21, characterized in that the location of said active point (AP) at said surface is identified starting from signals indicative of motion of said active point (AP) with respect to said surface (7).
23. The method of any of claims 17 to 22, characterized in that it includes the step of de-correlating detection of said location and production of said picture elements thus preventing detection of any picture elements produced as indicative of the location of said active point (AP) .
24. The method of claim 23, characterized in that said picture elements are produced at times different from the times when the location at said surface (7) is detected.
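The de-correlation of claims 23 and 24 amounts to time-multiplexing: each frame period is split into a "sense" slot, in which the display is blanked and the sensors sample, and a "draw" slot, in which picture elements are produced, so the sensors never mistake projected picture elements for the active point. In this sketch the function name and the slot durations are illustrative assumptions.

```python
# Illustrative sketch of claims 23-24: time-multiplex sensing and
# display within each frame period.

def role_at(t_ms, frame_ms=20, sense_ms=4):
    """Which subsystem is active at time t_ms within its frame:
    sensors sample during the first sense_ms, the display draws
    for the remainder of the frame."""
    return "sense" if (t_ms % frame_ms) < sense_ms else "draw"

print(role_at(2), role_at(10), role_at(23))   # sense draw sense
```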
25. The method of claim 23, characterized in that said picture elements are produced with at least one of a color and an intensity level different from at least a respective one of the color and the intensity level of said active point (AP) as detected at said surface (7).
26. The method of any of claims 17 to 25, characterized in that said signals indicative of said location are compared with at least one threshold value in order to determine, depending on the result of comparison, the location of said active point (AP) at said surface (7) .
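The threshold comparison of claims 26 and 27 can be sketched as below: compare sensor samples against a selectively adjustable threshold and take the centroid of the samples that exceed it. The 1-D signal, the threshold values, and the centroid rule are illustrative assumptions, not taken from the claims.

```python
# Illustrative sketch of claims 26-27: determine the active-point
# location from samples exceeding an adjustable threshold.

def locate(signal, threshold):
    """Centroid of the indices whose sample exceeds the threshold,
    or None when no sample does (active point absent)."""
    hits = [i for i, v in enumerate(signal) if v > threshold]
    if not hits:
        return None
    return sum(hits) / len(hits)

signal = [0.1, 0.2, 0.9, 1.0, 0.8, 0.2, 0.1]
print(locate(signal, 0.5))   # 3.0
print(locate(signal, 2.0))   # None
```

Raising or lowering the threshold (claim 27) trades off sensitivity to a faint tracer against rejection of background light.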
27. The method of claim 26, characterized in that said at least one threshold is made selectively adjustable.
28. The method of any of claims 17 to 27, characterized in that it includes the step of defining said spatial reference system (Lx, Ly; Rx, Ry) as a multi-dimensional reference system.
29. The method of claim 17 or claim 28, characterized in that the location of said active point (AP) is detected by sensor means (3, 5) at least partially protruding with respect to said surface (7).
PCT/EP1999/003921 1999-06-08 1999-06-08 Electronic writing/display apparatus and respective method of operation WO2000075860A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP1999/003921 WO2000075860A1 (en) 1999-06-08 1999-06-08 Electronic writing/display apparatus and respective method of operation

Publications (1)

Publication Number Publication Date
WO2000075860A1 (en) 2000-12-14

Family

ID=8167324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP1999/003921 WO2000075860A1 (en) 1999-06-08 1999-06-08 Electronic writing/display apparatus and respective method of operation

Country Status (1)

Country Link
WO (1) WO2000075860A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0571702A2 (en) * 1992-05-26 1993-12-01 Takenaka Corporation Hand pointing type input unit and wall computer module
EP0622722A2 (en) * 1993-04-30 1994-11-02 Rank Xerox Limited Interactive copying system
EP0626636A2 (en) * 1993-03-16 1994-11-30 Hitachi, Ltd. Information processing system
US5528263A (en) * 1994-06-15 1996-06-18 Daniel M. Platzker Interactive projected video image display system
EP0866419A2 (en) * 1997-03-21 1998-09-23 Takenaka Corporation Pointing device using the image of the hand

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2472922A (en) * 2009-08-19 2011-02-23 Compurants Ltd A combined table and computer-controlled projector unit
GB2472922B (en) * 2009-08-19 2013-09-25 Compurants Ltd A combined table and computer-controlled projector unit
US20160316186A1 (en) * 2015-04-21 2016-10-27 Dell Products L.P. Continuous Calibration of an Information Handling System Projected User Interface
US10139854B2 (en) 2015-04-21 2018-11-27 Dell Products L.P. Dynamic display resolution management for an immersed information handling system environment
US11106314B2 (en) * 2015-04-21 2021-08-31 Dell Products L.P. Continuous calibration of an information handling system projected user interface
US11243640B2 (en) 2015-04-21 2022-02-08 Dell Products L.P. Information handling system modular capacitive mat with extension coupling devices

Similar Documents

Publication Publication Date Title
EP1611503B1 (en) Auto-aligning touch system and method
JP3885458B2 (en) Projected image calibration method and apparatus, and machine-readable medium
JP4822643B2 (en) Computer presentation system and method with optical tracking of a wireless pointer
US8890812B2 Graphical user interface adjusting to a change of user's disposition
ES2279823T3 (en) TACTILE SYSTEM BASED ON CAMERAS.
US8120596B2 (en) Tiled touch system
US5764217A (en) Schematic guided control of the view point of a graphics processing and display system
US6704000B2 (en) Method for remote computer operation via a wireless optical device
KR101016136B1 (en) Method and system for aligning an array of projectors
US7372456B2 (en) Method and apparatus for calibrating an interactive touch system
EP2296080A2 (en) Camera-based touch system
EP1550940A2 (en) Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region.
JPH11513483A (en) Method and apparatus for determining position and orientation
US10437342B2 (en) Calibration systems and methods for depth-based interfaces with disparate fields of view
WO2000075860A1 (en) Electronic writing/display apparatus and respective method of operation
Vorozcovs et al. The hedgehog: a novel optical tracking method for spatially immersive displays
Sánchez Salazar Chavarría et al. Interactive 3D touch and gesture capable holographic light field display with automatic registration between user and content
Medien Implementation of a low cost marker based infrared optical tracking system
Darcis et al. Poselab: A levenberg-marquardt based prototyping environment for camera pose estimation
CN111752376A (en) Labeling system based on image acquisition
CN212781988U (en) High-precision touch interaction system for multi-surface splicing projection
WO1998047406A2 (en) Interactive desk
Karahan et al. A New 3D Line of Gaze Estimation Method with Simple Marked Targets and Glasses
Supe et al. REAL TIME INTERACTIONS FOR PROJECTION BASED ON CALIBRATION METHOD
Kato et al. Visual interface from uncalibrated cameras for uncalibrated displays

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
WA Withdrawal of international application