US20070077987A1 - Gaming object recognition - Google Patents
- Publication number
- US20070077987A1 (application Ser. No. 11/381,473)
- Authority
- US
- United States
- Prior art keywords
- card
- module
- region
- image
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/32—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/32—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
- G07F17/3202—Hardware aspects of a gaming system, e.g. components, construction, architecture thereof
- G07F17/3216—Construction aspects of a gaming system, e.g. housing, seats, ergonomic aspects
- G07F17/322—Casino tables, e.g. tables having integrated screens, chip detection means
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/32—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
- G07F17/3225—Data transfer within a gaming system, e.g. data sent between gaming machines and users
- G07F17/3232—Data transfer within a gaming system, e.g. data sent between gaming machines and users wherein the operator is informed
Description
- Casinos offer a wide variety of gambling activities to accommodate players and their preferences. Some of those activities reward strategic thinking while others are games of pure chance, but each of them obeys a strict set of rules that favors the casino over its clients.
- U.S. patent application Ser. No. 11/052,941, titled “Automated Game Monitoring”, by Tran, discloses a method of recognizing a playing card positioned on a table within an overhead image. The method involves detecting the contour of the card, validating the card from its contour, detecting adjacent corners of the card, projecting the boundary of the card from the adjacent corners, binarizing pixels within the boundary, and counting the number of pips to identify the value of the card. While such a method is practical for recognizing a solitary playing card, or at least one that is not significantly overlapped by other objects, it may not be applicable where the corner or central region of the card is undetectable due to overlapping objects. It also does not provide a method of distinguishing face cards. Furthermore, it does not provide a method of extracting a region of interest encompassing a card identifying symbol when only a partial card edge is available or when card corners are not available.
- the method proposes determining a central moment of individual playing cards to determine a rotation angle. This approach is not appropriate for overlapping cards forming a card hand. The method also proposes counting the number of pips in the central region of the card to identify number cards; pip counting is not feasible when a card is significantly overlapped by another object.
- the neural network is trained using a scaled image of the card symbol.
- a possible disadvantage of trying to directly recognize images of a symbol using a neural network is that it may have insufficient recognition accuracy especially under conditions of stress such as image rotation, noise, insufficient resolution and lighting variations.
- Card recognition is particularly instrumental in detecting inconsistencies on a game table, especially those resulting from illegal procedures. However, such detection is yet to be entirely automated and seamless, as it currently requires some form of human intervention.
- MP Bacc, a product marketed by Bally Gaming for detecting an inconsistency within a game of Baccarat, consists of a card shoe reader for reading bar-coded cards as they are being dealt, a barcode reader built into a special table for reading cards after they have been dealt, and a software module for comparing the data provided by the shoe reader and the table-mounted reader.
- the software module verifies that the cards that have been removed from the shoe correspond to those that have been inserted into the barcode reader on the table. It also verifies that the order in which the cards have been removed from the shoe corresponds to the order in which they were placed in the barcode reader.
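- The verification described above reduces to comparing two card sequences, position by position. A minimal sketch, assuming simple string card codes and a mismatch-list result (both illustrative assumptions, not the product's actual interface):

```python
def find_inconsistencies(shoe_reads, discard_reads):
    """Compare the sequence of cards read leaving the shoe with the
    sequence read at the table/discard reader. Returns a list of
    (position, shoe_card, discard_card) mismatches."""
    mismatches = []
    for i, (shoe_card, discard_card) in enumerate(zip(shoe_reads, discard_reads)):
        if shoe_card != discard_card:
            mismatches.append((i, shoe_card, discard_card))
    # A length difference means cards are missing or extra.
    if len(shoe_reads) != len(discard_reads):
        mismatches.append((min(len(shoe_reads), len(discard_reads)), None, None))
    return mismatches

shoe = ["KH", "7D", "2S", "AC"]
discard = ["KH", "7D", "2C", "AC"]   # third card differs
print(find_inconsistencies(shoe, discard))  # → [(2, '2S', '2C')]
```

A real deployment would also need to handle burn cards and shuffles; this sketch only flags positional mismatches and length differences.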
- One disadvantage of this system is that it requires the use of bar-coded cards and barcode readers to be present in the playing area. The presence of such devices in the playing area may be intrusive to players. Furthermore, dealers may need to be trained to use the special devices and therefore the system does not appear to be seamless or natural to the existing playing environment.
- An exemplary embodiment is directed to a system for identifying a gaming object on a gaming table comprising at least one overhead camera for capturing an image of the table; a detection module for detecting a feature of the object on the image; a search module for extracting a region of interest of the image that describes the object from the feature; a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest; and an identity module trained to recognize the object from the transformed region.
- At least one factor attributable to casino and table game environments and gaming objects impedes reliable recognition of said object by said statistical classifier when trained to recognize said object from said region of interest without transformation by said feature space module.
- Another embodiment is directed to a method of identifying a value of a playing card placed on a game table comprising: capturing an image of the table; detecting at least one feature of the playing card on the image; delimiting a target region of the image according to the feature, wherein the target region overlaps a region of interest, and the region of interest describes the value; scanning the target region for a pattern of contrasting points; detecting the pattern; delimiting the region of interest of the image according to a position of the pattern; and analyzing the region of interest to identify the value.
- Another embodiment is directed to a system for detecting an inconsistency with respect to playing cards dealt on a game table comprising: a card reader for determining an identity of each playing card as it is being dealt on the table from the shoe; an overhead camera for capturing images of the table; a recognition module for determining an identity of each card positioned on the table from the images; and a tracking module for comparing the identity determined by the card reader with the identity determined by the recognition module, and detecting the inconsistency.
- FIG. 1 is an overhead view of a card game
- FIG. 2 is a side plan view of an imaging system
- FIG. 3 is a side plan view of an overhead imaging system
- FIG. 4 is a top plan view of a lateral imaging system
- FIG. 5 is an overhead view of a gaming table containing RFID detectors
- FIG. 6 is a block diagram of the components of an exemplary embodiment of a system for tracking gaming objects
- FIG. 7 is a plan view of card hand representations
- FIG. 8 is a flowchart of a first embodiment of an IP module
- FIG. 9 is an overhead view of a gaming table with proximity detection sensors
- FIG. 10 is a plan view of a card position relative to proximity detection sensors
- FIG. 11 illustrates an overhead image of a card hand where the corners of a card are undetectable
- FIG. 12 is a flowchart describing the preferred method of extracting a region of interest from a card edge
- FIG. 13 illustrates an application of the preferred method of extracting a region of interest from a card edge
- FIG. 14 is a flowchart describing another method for extracting a region of interest from a card edge
- FIG. 15 illustrates an application of another method of extracting a region of interest from a card edge
- FIG. 16 is a block diagram of the preferred system for identifying a gaming object on a gaming table
- FIG. 17 illustrates an example of a feature space that may be used for recognition purposes
- FIG. 18 is a flowchart describing a method of detecting inconsistencies with respect to playing cards dealt on a game table
- FIG. 19 illustrates a first application of the method of detecting inconsistencies with respect to playing cards dealt on a game table
- FIG. 20 illustrates a second application of the method of detecting inconsistencies with respect to playing cards dealt on a game table
- FIG. 21 illustrates a third application of the method of detecting inconsistencies with respect to playing cards dealt on a game table
- FIG. 22 illustrates a Feed Forward Neural Network
- FIG. 23 illustrates Haar feature classifiers
- FIG. 24 is a flowchart describing a method of calibrating an imaging system within the context of table game tracking.
- FIG. 25 illustrates a combination of weak classifiers into one strong classifier as achieved through a boosting module.
- In FIG. 1 , an overhead view of a card game is shown generally as 10 . More specifically, FIG. 1 is an example of a blackjack game in progress.
- a gaming table is shown as feature 12 .
- Feature 14 is a single player and feature 16 is the dealer.
- Player 14 has three cards 18 dealt by dealer 16 within dealing area 20 .
- the dealer's cards are shown as feature 22 .
- dealer 16 utilizes a card shoe 24 to deal cards 18 and 22 and places them in dealing area 20 .
- Within gaming table 12 there are a plurality of betting regions 26 in which a player 14 may place a bet.
- a bet is placed through the use of chips 28 .
- Chips 28 are wagering chips used in a game, examples of which are plaques, jetons, wheelchecks, Radio Frequency Identification Device (RFID) embedded wagering chips and optically encoded wagering chips.
- Feature 32 is an imaging system, which is utilized by the present invention to provide overhead imaging and optional lateral imaging of game 10 .
- An optional feature is a player identity card 34 , which may be utilized by the present invention to identify a player 14 .
- Chips 28 can be added to a betting region 26 during the course of the game as per the rules of the game being played.
- the dealer 16 then initiates the game by dealing the playing cards 18 , 22 .
- Playing cards can be dealt either from the dealer's hand, or from a card dispensing mechanism such as a shoe 24 .
- the shoe 24 can take different embodiments including non-electromechanical types and electromechanical types.
- the shoe 24 can be coupled to an apparatus (not shown) to read, scan or image cards being dealt from the shoe 24 .
- the dealer 16 can deal the playing cards 18 , 22 into dealing area 20 .
- the dealing area 20 may have a different shape or a different size than shown in FIG. 1 .
- the dealing area 20 under normal circumstances, is clear of foreign objects and usually only contains playing cards 18 , 22 , the dealer's body parts and predetermined gaming objects such as chips, currency, player identity card 34 and dice.
- a player identity card 34 is an identity card that a player 14 may possess, which is used by the player to provide identity data and assist in obtaining complimentary (“comps”) points from a casino.
- a player identity card 34 may be used to collect comp points, which in turn may be redeemed later on for comps.
- Dealers may have dealer identity cards (not shown) similar to player identity cards that dealers use to register themselves at the table.
- playing cards 18 , 22 may appear, move, or be removed from the dealing area 20 by the dealer 16 .
- the dealing area 20 may have specific regions outlined on the table 12 where the cards 18 , 22 are to be dealt in a certain physical organization otherwise known as card sets or “card hands”, including overlapping and non-overlapping organizations.
- chips, cards, card hands, currency bills, player identity cards, dealer identity cards, lammers and dice are collectively referred to as gaming objects.
- the term “gaming region” is meant to refer to any section of gaming table 12 including the entire gaming table 12 .
- Imaging system 32 comprises overhead imaging system 40 and optional lateral imaging system 42 .
- Imaging system 32 can be located on or beside the gaming table 12 to image a gaming region from a top view and/or from a lateral view.
- Overhead imaging system 40 can periodically image a gaming region from a planar overhead perspective.
- the overhead imaging system 40 can be coupled to the ceiling or to a wall or any location that would allow an approximate top view of the table 12 .
- the optional lateral imaging system 42 can image a gaming region from a lateral perspective.
- Imaging systems 40 and 42 are connected to a power supply and a processor (not shown) via wiring 44 which runs through tower 46 .
- the imaging system 32 utilizes periodic imaging to capture a video stream at a specific number of frames over a specific period of time, such as, for example, thirty frames per second. Periodic imaging can also be used by an imaging system 32 when triggered via software or hardware means to capture an image upon the occurrence of a specific event. An example of a specific event would be a stack of chips being placed in a betting region 26 . An optical chip stack or chip detection method utilizing overhead imaging system 40 can detect this event and send a trigger to lateral imaging system 42 to capture an image of the betting region 26 . In an alternative embodiment, overhead imaging system 40 can trigger an RFID reader to identify the chips. Should there be a discrepancy between the two means of identifying chips, the discrepancy will be flagged.
- Overhead imaging system 40 comprises one or more imaging devices 50 and optionally one or more lighting sources (if required) 52 which are each connected to wiring 44 .
- Each imaging device 50 can periodically produce images of a gaming region.
- Charge-Coupled Device (CCD) sensors, Complementary Metal Oxide Semiconductor (CMOS) sensors, line scan imagers, area-scan imagers and progressive scan imagers are examples of imaging devices 50 .
- Imaging devices 50 may be selective to any frequency of light in the electromagnetic spectrum, including ultra violet, infra red and wavelength selective. Imaging devices 50 may be color or grayscale.
- Lighting sources 52 may be utilized to improve lighting conditions for imaging. Incandescent, fluorescent, halogen, infra red and ultra violet light sources are examples of lighting sources 52 .
- An optional case 54 encloses overhead imaging system 40 and if so provided, includes a transparent portion 56 , as shown by the dotted line, so that imaging devices 50 may view a gaming region.
- Lateral imaging system 42 comprises one or more imaging devices 50 and optional lighting sources 52 as described with reference to FIG. 3 .
- An optional case 60 encloses lateral imaging system 42 and if so provided includes a transparent portion 62 , as shown by the dotted line, so that imaging devices 50 may view a gaming region.
- the configurations of overhead imaging system 40 and lateral imaging system 42 are not meant to restrict the devices to the examples shown. Any number of imaging devices 50 may be utilized and, if a case is used to house the imaging devices 50 , the transparent portions 56 and 62 may be configured to scan the desired gaming regions.
- a calibration module assigns parameters for visual properties of the gaming region.
- FIG. 24 is a flowchart describing the operation of the calibration module as applied to the overhead imaging system.
- the calibration process can be: manual, with human assistance; fully automatic; or semi automatic.
- a first step 4800 consists of waiting for an image of the gaming region from the overhead imager(s).
- the next step 4802 consists of displaying the image to allow the user to select the area of interest where gaming activities occur.
- the area of interest can be a box encompassing the betting boxes, the dealing arc, and the dealer's chip tray.
- in step 4804 , coefficients for perspective correction are calculated.
- Such correction consists in an image processing technique whereby an image can be warped to any desired view point. Its application is particularly useful if the overhead imagers are located in the signage and the view of the gaming region is slightly warped. A perfectly overhead view point would be best for further image analysis. A checkerboard or markers on the table may be utilized to assist with calculating the perspective correction coefficients.
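- As an illustration of such perspective correction, the projective map between the unit square and an arbitrary quadrilateral (for example, four marker corners detected on the table) can be computed in closed form; sampling the source image through this map, or its inverse, warps the view to a straight-on perspective. A sketch of the square-to-quad mapping (the corner ordering is an assumption):

```python
def square_to_quad(quad):
    """Return a function mapping (u, v) in the unit square to the
    quadrilateral `quad`, given as four (x, y) corners in the order
    corresponding to (0,0), (1,0), (1,1), (0,1)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    if abs(sx) < 1e-12 and abs(sy) < 1e-12:      # parallelogram: affine case
        a, b, c = x1 - x0, x3 - x0, x0
        d, e, f = y1 - y0, y3 - y0, y0
        g = h = 0.0
    else:                                        # general projective case
        dx1, dx2 = x1 - x2, x3 - x2
        dy1, dy2 = y1 - y2, y3 - y2
        det = dx1 * dy2 - dy1 * dx2
        g = (sx * dy2 - sy * dx2) / det
        h = (dx1 * sy - dy1 * sx) / det
        a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
        d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def warp(u, v):
        w = g * u + h * v + 1.0
        return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)
    return warp
```

Mapping each (u, v) of an output grid through `warp` and sampling the source pixel there produces the perspective-corrected image.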
- in step 4806 , the resulting image is displayed to allow the user to select specific points or regions of interest within the gaming area. For instance, the user may select the position of betting spots and the region encompassing the dealer's chip tray. Other specific regions or points within the gaming area may be selected.
- camera parameters such as shutter and gain values are calculated, and white balancing operations are performed.
- Numerous algorithms are publicly available to one skilled in the art for performing camera calibration.
- in step 4810 , additional camera calibration is performed to adjust the lens focus and aperture.
- an image of the table layout, clear of any objects on its surface, is captured and saved as a background image.
- Such an image may be used for detecting objects on the table.
- the background image may be continuously captured at various points during system operation in order to have a most recent background image.
- in step 4814 , while the table surface is still clear of objects, additional points of interest such as predetermined markers are captured.
- the calibration parameters are stored in memory.
- the calibration concepts may be applied for the lateral imaging system as well as other imaging systems.
- continuous calibration checks may be utilized to ensure that the initially calibrated environment remains relevant. For instance a continuous brightness check may be performed periodically, and if it fails, an alert may be asserted through a feedback device indicating the need for re-calibration. Similar periodic, automatic checks may be performed for white balancing, perspective correction, and region of interest definition.
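- A minimal sketch of such a periodic brightness check, assuming the image is available as nested lists of grayscale values and using an illustrative 15% tolerance around the calibrated baseline:

```python
def brightness_check(image, baseline_mean, tolerance=0.15):
    """image: 2D list of grayscale values 0-255. Returns True if the
    mean brightness is within `tolerance` (as a fraction) of the
    calibrated baseline, False if re-calibration should be flagged."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return abs(mean - baseline_mean) <= tolerance * baseline_mean
```

Analogous checks against stored calibration values could cover white balance and marker positions.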
- a white sheet similar in shade to a playing card surface may be placed on the table during calibration in order to determine the value of the white sheet at various points on the gaming table and consequently the lighting conditions at these various points.
- the recorded values may be subsequently utilized to determine threshold parameters for detecting positions of objects on the table.
- FIG. 5 is an overhead view of a gaming table containing RFID detectors 70 .
- the values of the chips 28 can be detected by the RFID detector 70 .
- the same technology may be utilized to detect the values of RFID chips within the chip tray 30 .
- IP module 80 identifies the value and position of cards on the gaming table 12 .
- Intelligent Position Analysis and Tracking module (IPAT module) 84 performs analysis of the identity and position data of cards and interprets them intelligently for the purpose of tracking game events, game states and general game progression.
- the Game Tracking module (GT module) 86 processes data from the IPAT module 84 and keeps track of game events and game status.
- the GT module 86 can optionally obtain input from Bet Recognition module 88 .
- Bet Recognition module 88 identifies the value of wagers placed at the game.
- Player Tracking module 90 keeps track of patrons and players that are participating at the games.
- An optional dealer tracking module can keep track of the dealer dealing at the table.
- Surveillance module 92 records video data from imaging system 32 and links game event data to recorded video.
- Surveillance module 92 provides efficient search and replay capability by way of linking game event time stamps to the recorded video.
- Analysis and Reporting module 94 analyzes the gathered data in order to generate reports on players, tables and casino personnel.
- Example reports include statistics on game-related activities such as profitability, employee efficiency and player playing patterns. Events occurring during the course of a game can be analyzed and appropriate actions can be taken, such as player profiling, procedure violation alerts or fraud alerts.
- Modules 80 to 94 communicate with one another through a network 96 .
- a 100 Mbps Ethernet Local Area Network or Wireless Network can be used as a digital network.
- the digital network is not limited to the specified implementations, and can be of any other type, including local area network (LAN), Wide Area Network (WAN), wired or wireless Internet, or the World Wide Web, and can take the form of a proprietary extranet.
- Controller 98 such as a processor or multiple processors can be employed to execute modules 80 to 94 and to coordinate their interaction amongst themselves, with the imaging system 32 and with input/output devices 100 , optional shoe 24 and optional RFID detectors 70 . Further, controller 98 utilizes data stored in database 102 for providing operating parameters to any of the modules 80 to 94 . Modules 80 to 94 may write data to database 102 or collect stored data from database 102 . Input/Output devices 100 such as a laptop computer, may be used to input operational parameters into database 102 . Examples of operational parameters are the position coordinates of the betting regions 26 on the gaming table 12 , position coordinates of the dealer chip tray 30 , game type and game rules.
- a card or card hand is first identified in an image from the imaging system 32 as a blob 110 .
- a blob may be any object in the image of a gaming area but for the purposes of this introduction we will refer to blobs 110 that are cards and card hands.
- the outer boundary of blob 110 is then traced to determine a contour 112 which is a sequence of boundary points forming the outer boundary of a card or a card hand.
- digital image thresholding is used to establish grey-level thresholds.
- against the darker table surface, a blob 110 corresponding to a card appears white and bright.
- regions of interest (ROI) 118 identify a specific card.
- While ROI 118 is shown encompassing the rank and suit of a card, an alternative ROI could be used to identify the pip pattern in the centre of a card. From the information obtained from ROIs 118 it is possible to identify the cards in a card hand 120 .
- IP module 80 may be implemented in a number of different ways.
- imaging system 32 , located above the surface of the gaming table, provides overhead images.
- An overhead image need not be at precisely ninety degrees above the gaming table 12 . In one embodiment it has been found that seventy degrees works well to generate an overhead view.
- An overhead view enables the use of two dimensional Cartesian coordinates of a gaming region.
- One or more image processing algorithms process these overhead images of a gaming region to determine the identity and position of playing cards on the gaming table 12 .
- in step 140 , initialization and calibration of global variables occurs. Examples of calibration are manual or automated setting of camera properties for imaging system 32 , such as shutter value, gain levels and threshold levels.
- a different threshold may be stored for each pixel in the image or different thresholds may be stored for different regions of the image.
- the threshold values may be dynamically calculated from each image. Dynamic determination of a threshold would calculate the threshold level to be used for filtering out playing cards from a darker table background.
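- The patent does not name a specific algorithm for dynamic threshold determination; one common choice is Otsu's method, which picks the grey level that maximizes the separation between bright card pixels and the darker felt. A sketch:

```python
def otsu_threshold(gray_pixels):
    """Compute a global threshold from a flat list of 0-255 grey values
    by maximizing the between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in gray_pixels:
        hist[p] += 1
    total = len(gray_pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0          # running sum of background grey levels
    w_b = 0              # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Recomputing the threshold per image, or per region of the image, gives the dynamic behaviour described above.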
- step 142 the process waits to receive an overhead image of a gaming region from overhead imaging system 40 .
- in step 144 , a thresholding algorithm is applied to the overhead image in order to differentiate playing cards from the background, creating a threshold image.
- a background subtraction algorithm may be combined with the thresholding algorithm for improved performance. Contrast information of the playing card against the background of the gaming table 12 can be utilized to determine static or adaptive threshold parameters. Static thresholds are fixed while dynamic thresholds may vary based upon input such as the lighting on a table. The threshold operation can be performed on a gray level image or on a color image. Step 144 requires that the surface of game table 12 be visually contrasted against the card.
- a threshold may not be effective for obtaining the outlines of playing cards.
- the output of the thresholded image will ideally show the playing cards as independent blobs 110 . This may not always be the case due to issues of motion or occlusion. Other bright objects such as a dealer's hand may also be visible as blobs 110 in the thresholded image. Filtering operations such as erosion, dilation and smoothing may optionally be performed on the thresholded image in order to eliminate noise or to smooth the boundaries of a blob 110 .
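- The erosion and dilation operations mentioned above can be sketched with a 3x3 cross structuring element on a binary image (off-image neighbours are simply ignored here, an implementation choice):

```python
def _morph(img, keep):
    """Apply `keep` to the 3x3-cross neighbourhood of every pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [img[ny][nx]
                  for ny, nx in ((y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                  if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = keep(nb)
    return out

def erode(img):
    """A pixel survives only if it and its in-image 4-neighbours are all
    set; removes isolated noise pixels."""
    return _morph(img, lambda nb: int(all(nb)))

def dilate(img):
    """A pixel is set if any of itself or its 4-neighbours is set; fills
    small gaps and smooths blob boundaries."""
    return _morph(img, lambda nb: int(any(nb)))
```

An opening (erosion then dilation) removes speckle noise while roughly preserving blob size.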
- a contour 112 corresponding to each blob 110 is detected.
- a contour 112 can be a sequence of boundary points of the blob 110 that more or less define the shape of the blob 110 .
- the contour 112 of a blob 110 can be extracted by traversing along the boundary points of the blob 110 using a boundary following algorithm. Alternatively, a connected components algorithm may also be utilized to obtain the contour 112 .
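- The connected-components variant can be sketched as a flood fill followed by keeping the blob pixels that touch background; the function below returns the boundary points of the first blob found (4-connectivity is an assumption):

```python
from collections import deque

def blob_contour(img):
    """img: 2D list of 0/1. Flood-fill the first foreground blob and
    return its boundary points: blob pixels with at least one
    background or off-image 4-neighbour."""
    h, w = len(img), len(img[0])
    start = next(((y, x) for y in range(h) for x in range(w) if img[y][x]), None)
    if start is None:
        return []
    seen, q, blob = {start}, deque([start]), []
    while q:
        y, x = q.popleft()
        blob.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                q.append((ny, nx))

    def on_boundary(y, x):
        return any(not (0 <= ny < h and 0 <= nx < w) or not img[ny][nx]
                   for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)))

    return [p for p in blob if on_boundary(*p)]
```

A boundary-following (Moore neighbourhood) tracer would instead return the same points in traversal order, which is what the line-fitting step below assumes.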
- in step 148 , shape analysis is performed in order to identify contours that are likely not cards or card hands and eliminate them from further analysis. By examining the area of a contour 112 and its external boundaries, a match may be made to the known size and/or dimensions of cards. If a contour 112 does not match the expected dimensions of a card or card hand it can be discarded.
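- Such an area check can be sketched as a simple band test; the 20% tolerance and the maximum hand size below are illustrative assumptions, not values from the patent:

```python
def plausible_card(contour_area, single_card_area, max_hand_cards=5):
    """Accept a contour only if its area lies between roughly one card
    and a full hand of overlapping cards, with 20% tolerance."""
    return (0.8 * single_card_area
            <= contour_area
            <= 1.2 * max_hand_cards * single_card_area)
```

Contours failing the test (a dealer's hand, a chip stack, noise specks) are dropped before the more expensive line and corner analysis.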
- in step 150 , line segments 114 forming the card and card hand boundaries are extracted.
- One way to extract line segments is to traverse along the boundary points of the contour 112 and test the traversed points with a line fitting algorithm.
- Another potential line detection algorithm that may be utilized is a Hough Transform.
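- A standard Hough transform over the boundary points can be sketched as vote accumulation in (theta, rho) space; the angular resolution and vote threshold below are illustrative assumptions:

```python
import math

def hough_lines(points, angle_steps=180, threshold=20):
    """points: (y, x) edge/boundary pixels. Each point votes for every
    discretized (theta, rho) line passing through it; accumulator cells
    receiving enough votes are returned as detected lines."""
    acc = {}
    for y, x in points:
        for a in range(angle_steps):
            theta = a * math.pi / angle_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(theta, rho)] = acc.get((theta, rho), 0) + 1
    return [(theta, rho) for (theta, rho), votes in acc.items() if votes >= threshold]
```

Each returned (theta, rho) pair parameterizes a line x·cos(theta) + y·sin(theta) = rho; intersecting neighbouring lines yields candidate card corners.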
- line segments 114 forming the card or card hand boundaries are obtained.
- straight line segments 114 of the card and card hand boundaries may be obtained in other ways.
- straight line segments 114 can be obtained directly from an edge detected image.
- an edge detector such as the Laplace edge detector can be applied to the source image to obtain an edge map of the image from which straight line segments 114 can be detected.
- one or more corners 116 of cards can be obtained from the detected straight line segments 114 .
- Card corners 116 may be detected directly from the original image or thresholded image by applying a corner detector algorithm such as for example, using a template matching method using templates of corner points.
- the corner 116 may be detected by traversing points along contour 112 and fitting the points to a corner shape. Corner points 116 , and line segments 114 are then utilized to create a position profile for cards and card hands, i.e. where they reside in the gaming region.
- card corners 116 are utilized to obtain a Region of Interest (ROI) 118 encompassing a card identifying symbol, such as the number of the card, and the suit.
- a card identifying symbol can also include features located in the card such as the arrangement of pips on the card, or can be some other machine readable code.
- Corners of a card are highly indicative of a position of a region of interest. For this very reason, they constitute the preferred reference points for extracting regions of interest. Occasionally, corners of a card may be undetectable within an amalgam of overlapping gaming objects, such as a card hand.
- the present invention provides a method of identifying such cards by extracting a region of interest from any detected card feature that may constitute a valid reference point.
- FIG. 11 illustrates an overhead image of a card hand 3500 comprised of cards 3502 , 3504 , 3506 , and 3508 .
- the card 3504 overlaps the card 3502 and is overlapped by the card 3506 such that corners of the card 3504 are not detectable.
- the overhead image is analyzed to obtain the contour of the card hand 3500 .
- line segments 3510 , 3512 , 3514 , 3516 , 3518 , 3520 , 3522 , and 3524 forming the contour of the card hand 3500 are extracted.
- the detected line segments are thereafter utilized to detect convex corners 3530 , 3532 , 3534 , 3536 , 3538 , and 3540 .
- the term "index corner" refers to a corner of a card in the vicinity of which a region of interest is located.
- the term "blade corner" refers to a corner of a card that is not an index corner.
- the corner 3530 is the first one to be considered.
- a sample of pixels drawn within the contour, in the vicinity of the corner 3530 is analyzed in order to determine whether the corner 3530 is an index corner.
- a sufficient number of contrasting pixels are detected and the corner 3530 is identified as an index corner. Consequently, a region of interest is projected and extracted according to the position of the corner 3530 , as well as the width, height, and offset of regions of interest from index corners.
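- Projecting the region of interest from an index corner can be sketched with two unit vectors along the card edges; the pixel offsets and the region size below are illustrative assumptions, not values from the patent:

```python
def roi_from_corner(cx, cy, ux, uy, vx, vy, off_u=3, off_v=3, w=14, h=20):
    """Project the rank/suit region of interest from an index corner at
    (cx, cy), where (ux, uy) is the unit vector along the short card
    edge and (vx, vy) the unit vector along the long edge, both
    pointing into the card. Returns the two opposite box corners."""
    x0 = cx + off_u * ux + off_v * vx
    y0 = cy + off_u * uy + off_v * vy
    x1 = x0 + w * ux + h * vx
    y1 = y0 + w * uy + h * vy
    return (x0, y0, x1, y1)
```

For an axis-aligned card with its index corner at the origin, `roi_from_corner(0, 0, 1, 0, 0, 1)` returns the box (3, 3, 17, 23); rotated cards simply use rotated unit vectors.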
- corner 3532 is identified as an index corner and a corresponding region of interest is projected and extracted.
- Corner 3534 is the third to be considered. Corner 3534 is identified as a blade corner. Due to their coordinates, the corners 3532 and 3534 are identified as belonging to a same card, and consequently, the corner 3534 is dismissed from further analysis.
- corner 3536 is identified as an index corner and a corresponding region of interest is projected and extracted.
- the corners 3538 and 3540 are the last ones to be considered. Due to their coordinates, the corners 3530 , 3538 and 3540 are identified as belonging to a same card, and consequently, the corners 3538 and 3540 are dismissed from further analysis.
- the extracted line segments 3510 , 3512 , 3514 , 3516 , 3518 , 3520 , 3522 , and 3524 forming the contour of the card hand 3500 are utilized according to a method provided by the present invention.
- In FIG. 12 , a flowchart describing the preferred method for extracting a region of interest from a card edge segment is provided. It should be noted that a partial card edge segment may suffice for employing this method.
- in step 3600 , two scan line segments are determined.
- the scan line segments are of the same length as the analyzed line segment. Furthermore, the scan line segments are parallel to the analyzed line segment. Finally, a first of the scan line segments is offset according to a predetermined offset of the region of interest from a corresponding card edge. The second of the scan line segments is offset from the first scan line segment according to the predetermined width of the rank and suit symbols.
- In step 3602, pixel rows delimited by the scan line segments are scanned, and for each of the rows a most contrasting color or brightness value is recorded.
- In step 3604, the resulting sequence of most contrasting color or brightness values, referred to as a contrasting value scan line segment, is analyzed to identify regions that may correspond to a card rank and suit.
- the analysis may be performed according to pattern matching or pattern recognition algorithms.
- the sequence of contrasting color values is convolved with a mask of properties expected from rank characters and suit symbols.
- the mask may consist of a stream of darker pixels corresponding to the height of rank characters, a stream of brighter pixels corresponding to the height of spaces separating rank characters and suit symbols, and a final stream of darker pixels corresponding to the height of suit symbols.
- the result of the convolution will give rise to peaks where a sequence of the set of contrasting color values corresponds to the expected properties described by the mask.
- In step 3606, the resulting peaks are detected, and the corresponding regions of interest are extracted.
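The scan-and-convolve procedure of steps 3602 through 3606 can be sketched in Python. This is an illustrative reading of the method, not the patented implementation; the mask heights and the 8-bit dark/bright threshold of 128 are assumptions.

```python
import numpy as np

def find_symbol_regions(contrast_values, rank_h, gap_h, suit_h, threshold=0.8):
    """Locate rank/suit candidates along a contrasting-value scan line
    by correlating it with a mask of expected symbol properties."""
    # Mask: darker pixels (rank character height), brighter pixels
    # (separating space), darker pixels (suit symbol height).
    # Dark is encoded as +1 and bright as -1, so a matching
    # sub-sequence produces a strong positive response.
    mask = np.concatenate([np.ones(rank_h), -np.ones(gap_h), np.ones(suit_h)])
    # Encode the 8-bit scan line the same way.
    signal = np.where(np.asarray(contrast_values) < 128, 1.0, -1.0)
    response = np.correlate(signal, mask, mode="valid")
    # Peaks arise where a sub-sequence matches the expected properties.
    return [i for i, r in enumerate(response) if r >= threshold * mask.size]
```

A perfect match yields a response equal to the mask length, so the threshold is expressed as a fraction of that maximum.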
- FIG. 13 illustrates an analysis of the line segment 3510 according to the preferred embodiment of the invention.
- the scan line segments 3700 and 3702 are of the same length as the line segment 3510 . Furthermore, the scan line segments are parallel to the line segment 3510 . Finally, the scan line segment 3700 is offset from the line segment 3510 according to a predetermined offset of the region of interest from a corresponding card edge. The scan line segment 3702 is offset from the scan line segment 3700 according to the predetermined width of the rank characters and suit symbols.
- for each pixel row delimited by the scan line segments 3700 and 3702, a most contrasting color or brightness value is recorded to form a sequence of contrasting color or brightness values 3704, also referred to as a contrasting value scan line segment.
- once the sequence 3704 is obtained, it is convolved with a mask 3706 of properties expected from rank characters and suit symbols.
- the mask 3706 consists of a stream of darker pixels corresponding to the height of rank characters, a stream of brighter pixels corresponding to the height of spaces separating rank characters and suit symbols, and a final stream of darker pixels corresponding to the height of suit symbols.
- a result 3708 of the convolution gives rise to a peak 3710 where a sub-sequence of sequence 3704 corresponds to the expected properties described by the mask 3706 . Finally, a region of interest 3714 corresponding to the card 3502 is extracted.
- Referring to FIG. 14, a flowchart describing another embodiment of the method for extracting a region of interest from a line segment is provided.
- In step 3800, several scan line segments are determined.
- the scan line segments are of the same length as the analyzed line segment. Furthermore, the scan line segments are parallel to the analyzed line segment. Finally, a first of the scan line segments is offset from the analyzed line segment according to a predetermined offset of the region of interest from a corresponding card edge. The other scan line segments are offset from the first scan line segment according to the predetermined width of the rank and suit symbols. The scan line segments are positioned in that manner to ensure that at least some of them would intersect any characters and symbols located along the analyzed line segment.
- each scan line segment is scanned and points of contrasting color or brightness values are recorded to assemble a set of contrasting points, referred to as seed points.
- the set of contrasting points is analyzed to identify clusters that appear to be defining, at least partially, rank characters and suit symbols.
- the clusters can be extracted by grouping the seed points or by further analyzing the vicinity of one or more of the seed points using a region growing algorithm.
- In step 3806, regions of interest are extracted from the identified clusters of contrasting points.
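The grouping of seed points into clusters can be sketched with a simple gap-based grouping, a stand-in for the region growing algorithm mentioned above. One-dimensional positions along the analyzed segment and the gap tolerance are assumptions for illustration.

```python
def cluster_seed_points(seeds, max_gap=3):
    """Group seed-point positions (recorded along the scan line
    segments) into clusters likely to delimit rank characters and
    suit symbols; a position within max_gap of a cluster joins it."""
    clusters = []
    for p in sorted(seeds):
        if clusters and p - clusters[-1][-1] <= max_gap:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    # A region of interest spans each cluster's extent.
    return [(c[0], c[-1]) for c in clusters]
```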
- FIG. 15 illustrates an analysis of the line segment 3510 according to this other embodiment of the invention.
- the scan line segments 3900 and 3902 are determined.
- the scan line segments 3900 and 3902 are of the same length as the line segment 3510 .
- the scan line segments 3900 and 3902 are parallel to the line segment 3510 .
- the scan line segment 3900 is offset from the line segment 3510 according to a predetermined offset of the region of interest from a corresponding card edge segment.
- the scan line segment 3902 is offset from the scan line segment 3900 according to the predetermined width of rank characters and suit symbols.
- the scan line segments 3900 and 3902 are positioned in that manner to ensure that at least one of them would intersect any characters and symbols located along the line segment 3510 .
- the scan line segments 3900 and 3902 are scanned and points of contrasting color and brightness values are recorded to assemble a sequence of contrasting points. Subsequently, the sequence is analyzed and clusters of seed points 3910 , 3912 and 3914 are identified as likely to define, at least partially, rank characters and suit symbols.
- regions of interest 3920, 3922, and 3924 are extracted respectively from the clusters of seed points 3910, 3912, and 3914. Therefore, the method has succeeded in extracting a region of interest of a card having no detectable corners.
- the same method is applied to the line segments 3512, 3514, 3516, 3518, 3520, 3522, and 3524 as well, in order to identify any desirable regions of interest that are yet to be extracted.
- while the invention has been described as extracting a region of interest from a card edge, it may do so from any detected card feature, provided that the feature constitutes a valid reference point for locating a region of interest.
- the method may be applied to extract regions of interest from detected corners, or detected pips, instead of line segments.
- Such versatility is a sizeable asset within the context of table games, where some playing cards may present a very limited number of detectable features.
- a recognition method may be applied to identify the value of the card.
- the ROI 118 is rotated upright and a statistical classifier, also referred to as a machine learning model, can be applied to recognize the symbol.
- the ROI 118 may be pre-processed by thresholding the image in the ROI 118 and/or narrowing the ROI 118 to encompass the card identifying symbols.
- Examples of statistical classifiers that may be utilized with this invention include Neural Networks, Support Vector Machines, Hidden Markov Models and Bayesian Networks.
- a Feed-forward Neural Network is one example of a statistical classifier that may be used with this system. Training of the statistical classifier may happen in a supervised or unsupervised manner.
- a method that does not rely on a statistical classifier, such as template matching, may be utilized.
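Template matching can be sketched as a normalized correlation of the region of interest against stored symbol templates. The template labels and shapes below are hypothetical; this is a sketch of the general technique, not the patented recognizer.

```python
import numpy as np

def match_template(roi, templates):
    """Recognize a card symbol by normalized correlation against a
    dictionary of stored templates of the same shape as the ROI."""
    # Zero-mean, unit-variance normalization makes the score
    # insensitive to overall brightness and contrast.
    a = (roi - roi.mean()) / (roi.std() + 1e-9)
    best, best_score = None, -np.inf
    for label, tmpl in templates.items():
        b = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
        score = float((a * b).mean())  # 1.0 for a perfect match
        if score > best_score:
            best, best_score = label, score
    return best
```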
- the pattern of pips on the cards may be utilized to recognize the cards, provided a sufficient portion of the pattern is visible in a card hand.
- a combination of recognition algorithms may be used to improve accuracy of recognition.
- the present invention provides a system for identifying a gaming object on a gaming table in an efficient and seamless manner.
- the system comprises at least one overhead camera for capturing a plurality of images of the table; a detection module for detecting a feature of the object on an image of the plurality; a search module for extracting, from the feature, a region of interest of the image that describes the object; a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest; a dimensionality reduction module for reducing the transformed region into a reduced representation according to dimensionality reduction algorithms; and an identity module trained to recognize the object from the reduced representation.
- the overhead camera corresponds to the Imager 32 .
- the detection module, the search module, the feature space module, the dimensionality reduction module, and the identity module are components of the IP module 80.
- FIG. 16 is a block diagram of the preferred system for identifying a gaming object on a gaming table.
- the Imager 32 provides an overhead image of the game table to a Detection module 4000 .
- the Detection Module 4000 detects features of potential gaming objects placed on the game table. Such detection may be performed according to any of the aforementioned methods; for instance, it may consist of the steps 142 , 144 , 146 , 148 , 150 , and 152 , as illustrated in FIG. 8 .
- the Detection Module 4000 comprises a cascade of classifiers trained to recognize specific features of interest such as corners and edges.
- the system further comprises a Booster Module, and the Detection Module 4000 comprises a cascade of classifiers.
- the Booster module serves the purpose of combining weak classifiers of the cascade into a stronger classifier as illustrated in FIG. 25 . It may operate according to one of several boosting algorithms including Discrete Adaboost, Real Adaboost, LogitBoost, and Gentle Adaboost.
- the Detection Module 4000 provides the image along with the detected features to a Search Module 4002 .
- the latter extracts regions of interest within the image from the detected features.
- Such extraction may be performed according to any of the aforementioned methods; for instance, it may consist of the steps 3600 , 3602 , 3604 , and 3606 , illustrated in FIG. 12 .
- the extracted regions of interest may be further processed by applying image thresholding, rotating or by refining the region of interest.
- the Search Module 4002 provides the extracted regions of interest to the Feature Space (FS) Module 4004 .
- the FS Module 4004 transforms a provided representation into a feature space, or a set of feature spaces that is more appropriate for recognition purposes.
- each region of interest provided to the FS Module 4004 is represented as a grid of pixels, wherein each pixel is assigned a color or brightness value.
- prior to performing a transformation, the FS Module 4004 must select a desirable feature space according to the required type, speed, and robustness of recognition. The selection may be performed in a supervised manner, an unsupervised manner, or both.
- FIG. 17 illustrates an example of a feature space that may be used for recognition purposes.
- the feature space consists in a histogram of the grayscale values stored in each column of a pixel grid.
- the FS Module 4004 applies a corresponding feature space transformation on a corresponding image.
- a geometrical transformation of an image consists in reassigning the positions of pixels within a corresponding grid. While such a transformation does modify an image, it does not modify the underlying semantics; the means by which the original image and its transformed version are represented is the same. On the other hand, feature space transformations modify the underlying semantics.
- One example of a feature space transformation consists in modifying the representation of colours within a pixel grid from RGB (Red, Green, and Blue) to HSV (Hue, Saturation, and Value or Brightness).
- the data is not modified, but its representation is.
- Such a transformation is advantageous in cases where it is desirable for the brightness of a pixel to be readily available.
- the HSV space is less sensitive to a certain type of noise than its RGB counterpart.
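The RGB-to-HSV re-representation can be sketched with the Python standard library; the data is unchanged, but the brightness of a pixel becomes directly readable as the V component.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Re-represent an 8-bit RGB pixel in HSV: hue in degrees,
    saturation and value (brightness) in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v
```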
- the Hough Line Transform is another example of a feature space transformation. It consists in transforming a binary image from a set of pixels to a set of lines. In the new feature space, each vector represents a line whereas in the original space, each vector represents the coordinates of a pixel. Consequently, such a transformation is particularly advantageous for applications where lines are to be analyzed.
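A minimal accumulator-based sketch of the Hough Line Transform: each foreground pixel of a binary image votes for every (rho, theta) line passing through it, so that peaks in the accumulator represent lines. The discretization choices below are assumptions for illustration.

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Transform a binary image from a set of pixels to votes in
    (rho, theta) line space; each accumulator peak is one line."""
    diag = int(np.ceil(np.hypot(*binary.shape)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # Each foreground pixel votes for every line through it:
        # rho = x*cos(theta) + y*sin(theta).
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag
```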
- Other feature space transformations include various filtering operations such as Laplace and Sobel. Pixels resulting from such transformations store image derivative information rather than image intensity.
- the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT), and Wavelet transforms are other examples of feature space transformations.
- Images resulting from FFT and DCT are no longer represented spatially (by a pixel grid), but rather in a frequency domain, wherein each point represents a particular frequency contained in the real-domain image.
- Such transformations are practical because the resulting feature space is invariant with respect to some transformations, and robust with respect to others. For instance, discarding the higher frequency components of an image resulting from a DCT makes it more resilient to noise, which is generally present in high frequencies. As a result, recognition is more reliable.
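The frequency-domain principle can be illustrated with the FFT (the same idea applies to the DCT): discarding the high-frequency components, where noise generally resides, yields a more robust representation. The fraction of frequencies kept is an assumption.

```python
import numpy as np

def discard_high_frequencies(image, keep=0.25):
    """Represent an image in the frequency domain and zero out the
    high-frequency components before transforming back."""
    f = np.fft.fftshift(np.fft.fft2(image))  # low frequencies at center
    h, w = image.shape
    kh, kw = int(h * keep / 2), int(w * keep / 2)
    mask = np.zeros((h, w))
    mask[h // 2 - kh:h // 2 + kh + 1, w // 2 - kw:w // 2 + kw + 1] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```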
- the use of different feature spaces provides for additional robustness with respect to parameters such as lighting variations, brightness, image noise, image resolutions, ambient smoke, as well as geometrical transformations such as rotations and translations.
- the system of the present invention provides for greater training and recognition accuracy.
- Principal Component Analysis (PCA) is the main feature space transformation in the arsenal of the FS Module 4004. It is a linear transform that selects a new coordinate system for a given data set, such that the greatest variance by any projection of the data set lies along a first axis, known as the principal component, the second greatest variance along the second axis, and so on.
- the first step of the PCA consists in constructing a 2D matrix A of size n×wh where each row is an image vector, given n images of w×h pixels. Each image vector is formed by concatenating all the pixel rows of a corresponding image into a vector.
- the second step consists in computing an average image from the matrix A by summing up all the rows and dividing by n. The resulting vector of size (wh) is called u.
- the third step consists in subtracting u from all the rows of A to get a mean-subtracted matrix B of size (n×wh).
- the fourth step consists in computing the dot products of all possible pairs of mean-subtracted image vectors, assembling the results into an n×n matrix C; C is the covariance matrix.
- the penultimate step consists in computing the n eigenvalues and corresponding eigenvectors of C. Finally, all eigenvalues of C are sorted from the highest eigenvalue to the lowest eigenvalue.
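The PCA steps described above can be sketched as follows, assuming the images are supplied as equally sized NumPy arrays; the n×n dot-product covariance matrix follows the description above.

```python
import numpy as np

def pca_eigen(images):
    """PCA over n images of w x h pixels, following the listed steps."""
    # Step 1: matrix A (n x wh), one flattened image vector per row.
    A = np.stack([img.ravel() for img in images]).astype(float)
    n = A.shape[0]
    # Step 2: average image u (sum of the rows divided by n).
    u = A.sum(axis=0) / n
    # Step 3: mean-subtracted matrix B.
    B = A - u
    # Step 4: covariance matrix C from dot products of image pairs.
    C = B @ B.T
    # Step 5: eigenvalues/eigenvectors of C, sorted highest first.
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order], u
```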
- the FS Module 4004 applies predominantly one or more of the DCT, FFT, Log Polar Domains, or other techniques resulting in edge images.
- the FS Module 4004 provides the transformed representation, or set of representations, to a Dimensionality Reduction (DR) Module 4006.
- the DR Module 4006 reduces the dimensionality of the provided representations by applying feature selection techniques, feature extraction techniques, or a combination of both.
- the representations provided by the FS Module 4004 result from the application of a PCA, and the DR Module 4006 reduces their dimensionality by applying a feature selection technique that consists in selecting a subset of the PCA coefficients that contain the most information.
- the representations provided by the FS Module 4004 result from the application of a DCT, and the DR Module 4006 reduces their dimensionality by applying a feature selection technique that consists in selecting a subset of the DCT coefficients that contain the most information.
- the DR Module 4006 reduces the dimensionality of the provided representations by applying a feature extraction technique that consists in projecting them into a feature space of fewer dimensions.
- the representations provided by the FS Module 4004 result from the application of a DCT, and the DR Module applies a combination of feature selection and feature extraction techniques that consists in selecting a subset of the DCT coefficients that contain the most information, and applying PCA on the selected coefficients.
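The feature selection step can be sketched generically; variance across samples stands in for "most information" here, an assumption since the criterion is not fixed above.

```python
import numpy as np

def select_informative_coefficients(features, k):
    """Feature selection: keep the k coefficients carrying the most
    information, measured here by their variance across samples.
    `features` is an (n_samples x n_coefficients) array."""
    idx = np.sort(np.argsort(features.var(axis=0))[::-1][:k])
    return features[:, idx], idx
```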
- the application of dimensionality reduction techniques reduces computational overhead, thereby accelerating the training and recognition procedures performed by the Identity Module 4008. Furthermore, dimensionality reduction tends to eliminate, or at the very least reduce, noise, and therefore increases recognition and training efficiency.
- the FS Module 4004 provides the transformed representation or set of transformed representations to an Identity Module 4008 trained to recognize gaming objects from dimensionality reduced representations of regions of interest.
- the DR Module 4006 provides the dimensionality reduced representations to an Identity Module 4008 , which identifies a corresponding gaming object.
- the Identity Module 4008 comprises a statistical classifier trained to recognize gaming objects from dimensionality reduced representations.
- the Identity Module 4008 comprises a Feed-forward Neural Network such as the one illustrated in FIG. 22 that consists of input nodes, multiple hidden layers, and output nodes.
- the hidden layers can be partially connected, as those shown in FIG. 22 , or fully connected.
- a back propagation learning method is utilized in conjunction with an error function to allow the Neural Network to adjust its internal weights according to the inputs and outputs.
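A minimal sketch of such a network trained by back-propagation of a squared-error function; the layer sizes, learning rate, and epoch count are illustrative assumptions, not the patented configuration.

```python
import numpy as np

def train_net(X, y, hidden=4, epochs=2000, lr=0.5, seed=0):
    """Feed-forward network with one sigmoid hidden layer; back
    propagation adjusts the internal weights from inputs/outputs."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1));          b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        h = sig(X @ W1 + b1)        # hidden-layer activations
        out = sig(h @ W2 + b2)      # network output
        losses.append(float(((out - y) ** 2).mean()))
        # Propagate the error backwards through the layers.
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    predict = lambda x: sig(sig(x @ W1 + b1) @ W2 + b2)
    return predict, losses
```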
- the Identification Module comprises a cascade of classifiers.
- the system further comprises a Booster Module
- the Identity Module 4008 comprises a cascade of classifiers.
- the Booster module serves the purpose of combining weak classifiers of the cascade into a stronger classifier. It may operate according to one of several boosting algorithms including Discrete Adaboost, Real Adaboost, LogitBoost, and Gentle Adaboost.
- the system is used to perform deck verification.
- the dealer presents the corresponding cards on the table, in response to which the Identity Module 4008 is automatically triggered to provide the rank and suit of each identified card to a Deck Verification Module 4010 .
- the latter module analyzes the provided data to ensure that the deck of cards adheres to a provided set of standards.
- the Detection Module 4000 recognizes a configuration of playing cards suitable for a deck verification procedure and triggers the Identity Module 4008 to provide the rank and suit of each identified card to a Deck Verification Module 4010 .
- the Identity Module 4008 is manually triggered to provide the rank and suit of each identified card to the Deck Verification Module 4010 .
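The deck verification analysis can be sketched as a completeness check, assuming the Identity Module reports (rank, suit) pairs and the provided standard is one complete 52-card deck.

```python
from itertools import product

def verify_deck(cards, ranks="A23456789TJQK", suits="SHDC"):
    """Check that recognized (rank, suit) pairs form one standard
    52-card deck; returns the sets of missing and duplicate cards."""
    expected = set(product(ranks, suits))
    seen, duplicates = set(), set()
    for card in cards:
        (duplicates if card in seen else seen).add(card)
    return expected - seen, duplicates
```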
- the data can be output to other modules at step 158 .
- Examples of data output at step 158 may include the number of card hands, the Cartesian coordinates of each corner of a card in a hand (or other positional information such as line segments), and the identity of the card as a rank and/or suit.
- in step 160, the process waits for a new image, and when one is received, processing returns to step 144.
- IP module 80 may utilize proximity detection sensors 170 .
- Card shoe 24 is a card shoe reader, which dispenses playing cards and generates signals indicative of card identity.
- An example of a card shoe reader 24 may include those disclosed in U.S. Pat. No. 5,374,061 to Albrecht, U.S. Pat. No. 5,941,769 to Order, U.S. Pat. No. 6,039,650 to Hill, or U.S. Pat. No. 6,126,166 to Lorson.
- Commercial card shoe readers such as for example the MP21 card reader unit sold by Bally Gaming or the Intelligent Shoe sold by Shuffle Master Inc. may be utilized.
- a card deck reader such as the readers commercially sold by Bally Gaming and Shuffle Master can be utilized to determine the identity of cards prior to their introduction into the game. Such a card deck reader would pre-determine a sequence of cards to be dealt into the game.
- An array of proximity detection sensors 170 can be positioned under the gaming table 12 parallel to the table surface, such that periodic sampling of the proximity detection sensors 170 produces a sequence of frames, where each frame contains the readings from the proximity detection sensors. Examples of proximity detection sensors 170 include optical sensors, infra red position detectors, photodiodes, capacitance position detectors and ultrasound position detectors.
- Proximity detection sensors 170 can detect the presence or absence of playing cards (or other gaming objects) on the surface of gaming table 12 . Output from the array of proximity detection sensors can be analog or digital and can be further processed in order to obtain data that represents objects on the table surface as blobs and thus replace step 142 of FIG. 8 . In this embodiment a shoe 24 would provide information on the card dealt and sensors 170 would provide positioning data. The density of the sensor array (resolution) will determine what types of object positioning features may be obtained. To assist in obtaining positioning features further processing may be performed such as shown in FIG. 10 which is a plan view of a card position relative to proximity detection sensors 170 . Sensors 170 provide signal strength information, where the value one represents an object detected and the value zero represents no object detected. Straight lines may be fitted to the readings of sensors 170 using a line fitting method. In this manner proximity detection sensors 170 may be utilized to determine position features such as line segments 114 or corners 116 .
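The line-fitting step can be illustrated with a least-squares sketch, assuming the sensor frame is a binary grid of one/zero readings and the card edge is not vertical in sensor coordinates.

```python
import numpy as np

def fit_edge_line(frame):
    """Fit a straight line y = m*x + c through the grid positions of
    proximity sensors reading one (object detected)."""
    ys, xs = np.nonzero(frame)
    m, c = np.polyfit(xs, ys, 1)  # least-squares line fit
    return m, c
```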
- identity data generated from the card shoe reader 24 and positioning data generated from proximity detection sensors 170 may be grouped and output to other modules. Associating positional data to cards may be performed by the IPAT module 84 .
- card reading may have an RFID based implementation.
- RFID chips embedded inside playing cards may be wirelessly interrogated by RFID antennae or scanners in order to determine the identity of the cards.
- Multiple antennae may be used to wirelessly interrogate and triangulate the position of the RFID chips embedded inside the cards.
- Card positioning data may be obtained either by wireless interrogation and triangulation, a matrix of RFID sensors, or via an array of proximity sensors as explained herein.
- the IPAT module 84 performs analysis of the identity and position data of cards/card hands and interprets them “intelligently” for the purpose of tracking game events, game states and general game progression.
- the IPAT module may perform one or more of the following tasks:
- the IPAT module 84 in combination with the Imager 32 , the IP module 80 , and the card shoe 24 , may also detect inconsistencies that occur on a game table as a result of an illegal or erroneous manipulation of playing cards.
- the system for detecting inconsistencies that occur on a game table as a result of an illegal or erroneous manipulation of playing cards comprises a card shoe for storing playing cards to be dealt on the table; a card reader for determining an identity and a dealing order of each playing card as it is being dealt on the table from the shoe; an overhead camera for capturing images of the table, a recognition module for determining an identity and a position of each card positioned on the table from the images; and a tracking module for comparing the dealing order and identity determined by the card reader with the identity and the position determined by the recognition module, and detecting the inconsistency.
- the card shoe and card reader correspond to the card shoe 24 , which comprises an embedded card reader.
- the overhead camera corresponds to the Imager 32 .
- the recognition module corresponds to the IP module 80 .
- the tracking module corresponds to the IPAT module 84 .
- Referring to FIG. 18, a flowchart describing the interaction between the IPAT module 84, IP module 80, and card shoe 24 for detecting such inconsistencies is provided.
- the IPAT module 84 is calibrated and its global variables are initialized.
- the IPAT module 84 receives data from the card shoe 24 .
- the data is received immediately following each removal of a card from the card shoe 24 .
- the data is received following each removal of a predetermined number of cards from the card shoe 24 .
- the data is received periodically.
- the data consist of a rank and suit of a last card to be removed from the card shoe 24 .
- the data consist of a rank of a last card to be removed from the card shoe 24 .
- the IPAT module 84 receives data from the IP module 80 .
- the data is received periodically. In another embodiment, the data is received in response to the realization of step 4202 .
- the data consist of a rank, suit, and position of each card placed on the game table.
- the data consist of a rank and suit of each card placed on the game table.
- the data consist of a rank of each card placed on the game table.
- the data consist of a suit of each card placed on the game table.
- the data consist of a likely rank, and suit, as well as a position of each card placed on the game table.
- the data consist of a likely rank and a position of each card placed on the game table.
- the data consist of a likely suit and a position of each card placed on the game table.
- in step 4206, the IPAT module 84 compares the data provided by the card shoe 24 with those provided by the IP module 80.
- the IPAT module 84 verifies whether the rank and suit of cards removed from the card shoe 24 as well as the order in which they were removed correspond to the rank, suit, and position of cards placed on the game table according to a set of rules of the game being played.
- the IPAT module 84 verifies whether the rank and suit of cards removed from the card shoe 24 correspond to the rank and suit of those that are placed on the game table.
- if an inconsistency is detected, the IPAT module 84 informs the surveillance module 92 according to step 4208. Otherwise, the IPAT module 84 returns to step 4202 as soon as subsequent data is provided by the card shoe 24.
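The comparison performed in step 4206 can be sketched as follows. The data format is hypothetical (lists of (rank, suit) tuples, with the table list ordered by dealt position), and the rule set is reduced to identity and order checks.

```python
def detect_inconsistency(shoe_cards, table_cards):
    """Compare the dealing order read at the shoe against the ordered
    card identities recognized on the table; return a description of
    any inconsistency, or None."""
    if len(shoe_cards) != len(table_cards):
        return "card count mismatch"
    if sorted(shoe_cards) != sorted(table_cards):
        return "card switched"   # identities differ
    if shoe_cards != table_cards:
        return "order error"     # same cards, wrong positions
    return None                  # no inconsistency
```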
- a dealer withdraws four cards from a card shoe and deals two hands of two cards, face down; one for the player, and one for the bank.
- the player is required to flip the dealt cards and return them back to the dealer.
- the latter organizes the returned cards on the table and determines the outcome of the game.
- One known form of cheating consists in switching cards. More specifically, a player may hide cards of desirable value, switch a dealt card with one of the hidden cards, flip the illegally introduced card and return it back to the dealer.
- the present invention provides an efficient and seamless means to detect such illegal procedures.
- the dealer must withdraw four cards from the card shoe.
- the dealer withdraws in order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace of Diamonds.
- the rank and suit of each of the four cards is read by the card shoe 24 , and provided to the IPAT module 84 .
- the player flips the dealt cards and returns them to the dealer.
- the latter organizes the four cards on the table as illustrated in FIG. 19 .
- the Five of Spades 4300 and the Six of Hearts 4302 are placed in a region dedicated to the player's hand, and the Queen of Clubs 4304 and Ace of Diamonds 4306 are placed in a region dedicated to the bank's hand.
- the Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing.
- the IP module 80 determines the position, suit, and rank of cards 4300 , 4302 , 4304 , and 4306 , and provides the information to the IPAT module 84 .
- the latter compares the data received from the card shoe reader and the IP module, and finds no inconsistency. Consequently, it waits for a new set of data from the card shoe reader.
- the dealer withdraws in order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace of Diamonds.
- the rank and suit of each of the four cards is read by the card shoe 24 , and provided to the IPAT module 84 .
- the player switches one of the dealt cards with one of his hidden cards to form a new hand, flips the cards of the new hand, and returns them to the dealer.
- the latter arranges the four cards returned by the player as illustrated in FIG. 20 .
- the Five of Spades 4300 and Four of Hearts 4400 are placed in a region dedicated to the player's hand, and the Queen of Clubs 4304 and Ace of Diamonds 4306 are placed in a region dedicated to the bank's hand.
- the Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing.
- the IP module 80 determines the position, suit, and rank of cards 4300 , 4400 , 4304 and 4306 , and provides the information to the IPAT module 84 .
- the latter compares the data received from the card shoe 24 and the IP module, and finds an inconsistency; the ranks of the cards 4300, 4302, 4304, and 4306 removed from the card shoe do not correspond to the ranks of the cards 4300, 4400, 4304, and 4306 placed on the table. More specifically, the card 4302 has been replaced by the card 4400, which likely results from a card switching procedure. Consequently, the IPAT module 84 provides a detailed description of the detected inconsistency to the surveillance module 92.
- the dealer withdraws in order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace of Diamonds.
- the rank and suit of each of the four cards is read by the card shoe 24 , and provided to the IPAT module 84 .
- the player flips the dealt cards and returns them to the dealer.
- the latter organizes the four cards on the table in an erroneous manner, as illustrated in FIG. 21 .
- the Five of Spades 4300 and the Queen of Clubs 4304 are placed in a region dedicated to the player's hand, and the Six of Hearts 4302 and Ace of Diamonds 4306 are placed in a region dedicated to the bank's hand.
- the Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing.
- the IP module 80 determines the position, suit, and rank of cards 4300 , 4302 , 4304 and 4306 , and provides the information to the IPAT module 84 .
- the latter compares the data received from the card shoe reader 24 and the IP module, and finds an inconsistency; while the rank and suit of the cards removed from the card shoe correspond to the rank and suit of the cards positioned on the table, the order in which the cards were removed from the card shoe does not correspond to the order in which the cards were organized on the table. More specifically, the card 4302 has been permuted with the card 4304. Consequently, the IPAT module 84 provides a detailed description of the detected inconsistency to the surveillance module 92.
- the GT module 86 processes input relating to card identities and positions to determine game events according to a set of rules of the game being played. It also keeps track of the state of the game, which it updates according to the determined game events. It may also store and maintain previous game states in memory, to which it may refer for determining a next game state.
- Bet recognition module 88 can determine the value of wagers placed by players at the gaming table.
- an RFID based bet recognition system can be implemented, as shown in FIG. 5 .
- Different embodiments of RFID based bet recognition can be used in conjunction with gaming chips containing RFID transmitters.
- the RFID bet recognition system sold by Progressive Gaming International or by Chipco International can be utilized.
- the bet recognition module 88 can interact with the other modules to provide more comprehensive game tracking.
- the game tracking module 86 can send a capture trigger to the bet recognition module 88 at the start of a game to automatically capture bets at a table game.
- Player tracking module 90 can obtain input from the IP module 80 relating to player identity cards.
- the player tracking module 90 can also obtain input from the game tracking module 86 relating to game events such as the beginning and end of each game.
- the system can recognize special player identity cards with machine readable indicia printed or affixed to them (via stickers for example).
- the machine readable indicia can include matrix codes, barcodes or other identification indicia.
- specialty identity cards may also be utilized for identifying and registering a dealer at a table.
- specialty identity cards may be utilized to indicate game events such as a deck being shuffled or a dispute being resolved at the table.
- biometrics technologies such as face recognition can be utilized to assist with identification of players.
- surveillance module 92 obtains input relating to automatically detected game events from one or more of the other modules and associates the game events to specific points in recorded video.
- the surveillance module 92 can include means for recording images or video of a gaming table.
- the recording means can include the imagers 32 .
- the recording means can be computer or software activated, and the recordings can be stored in a digital medium such as a computer hard drive. Less preferred recording means such as analog cameras, or analog media such as video cassettes, may also be utilized.
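Linking automatically detected game events to points in recorded video can be as simple as keeping a sorted index of event timestamps against the video timeline; the sketch below is a minimal illustration (the class and method names are assumptions, not from the patent):

```python
import bisect

class EventIndex:
    """Associates detected game events with timestamps in recorded video
    so that surveillance review can seek directly to them."""
    def __init__(self):
        self._times = []   # capture timestamps (seconds), kept sorted
        self._events = []  # event descriptions, parallel to _times

    def record(self, timestamp, description):
        i = bisect.bisect(self._times, timestamp)
        self._times.insert(i, timestamp)
        self._events.insert(i, description)

    def events_between(self, start, end):
        """Return (timestamp, description) pairs within [start, end]."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return list(zip(self._times[lo:hi], self._events[lo:hi]))
```

With such an index, a reviewer can jump straight to the video surrounding a flagged inconsistency instead of scanning footage linearly.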
- Analysis and reporting module 94 can mine data in the database 102 to provide reports to casino employees.
- the module can be configured to perform automated player tracking functions, including exact handle, duration of play, decisions per hour, player skill level, player proficiency and true house advantage.
- the module 94 can be configured to automatically track operational efficiency measures such as hands dealt per hour reports, procedure violations, employee efficiency ranks, actual handle for each table and actual house advantage for each table.
- the module 94 can be configured to provide card counter alerts by examining player playing patterns. It can be configured to automatically detect fraudulent or undesired activities such as shuffle tracking, inconsistent deck penetration by dealers and procedure violations.
- the module 94 can be configured to provide any combination or type of statistical data by performing data mining on the recorded data in the database.
- Output, including alerts and player compensation notifications, can be provided through output devices such as monitors, LCD displays, or PDAs.
- An output device can be of any type; it is not limited to visual displays and can include auditory or other sensory means.
- the software can potentially be configured to generate any type of report with respect to casino operations.
- Module 94 can be configured to accept input from a user interface running on input devices. These inputs can include, without limitation, training parameters, configuration commands, dealer identity, table status, and other inputs required to operate the system.
- a chip tray recognition module may be provided to determine the contents of the dealer's chip bank.
- an RFID based chip tray recognition system can be implemented.
- a vision based chip tray recognition system can be implemented.
- the chip tray recognition module can send data relating to the value of chips in the dealer's chip tray to other modules.
- a dealer identity module may be employed to track the identity of a dealer.
- the dealer can optionally key in her unique identity code at the game table, or she can use an identity card and associated reader to register her identity.
- a biometrics system may be used to facilitate dealer or employee identification.
- the terms imagers and imaging devices have been used interchangeably in this document.
- the imagers can have any combination of sensor, lens and/or interface. Possible interfaces include, without limitation, 10/100 Ethernet, Gigabit Ethernet, USB, USB 2, FireWire, Optical Fiber, PAL or NTSC interfaces.
- For analog interfaces such as NTSC and PAL, a processor having a capture card in combination with a frame grabber can be utilized to get digital images or digital video.
- the image processing and computer vision algorithms in the software can utilize any type or combination of color spaces or digital file formats. Possible color spaces include, without limitation, RGB, HSL, CMYK, grayscale and binary color spaces.
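For example, a grayscale and binary conversion of an RGB pixel, two of the color spaces mentioned, can be sketched as follows (the BT.601 luma weights and the threshold of 128 are standard illustrative choices, not values specified by the text):

```python
def rgb_to_grayscale(pixel):
    """ITU-R BT.601 luma approximation for an (R, G, B) pixel in 0-255."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def to_binary(gray, threshold=128):
    """Binary color space: 1 for pixels at or above the threshold."""
    return 1 if gray >= threshold else 0
```

A bright card pixel maps to 1 and a dark felt pixel to 0, which is the kind of separation the later thresholding step relies on.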
- the overhead imaging system may be associated with one or more display signs.
- Display sign(s) can be non-electronic, electronic or digital.
- a display sign can be an electronic display displaying game related events happening at the table in real time.
- a display and the housing unit for the overhead imaging devices may be integrated into a large unit.
- the overhead imaging system may be located on or near the ceiling above the gaming region.
Abstract
The present invention relates to a system and method for identifying and tracking gaming objects. The system comprises an overhead camera for capturing an image of the table, a detection module for detecting a feature of the object on the image, a search module for extracting a region of interest of the image that describes the object from the feature, a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest, and an identity module comprising a statistical classifier trained to recognize the object from the transformed region. The search module is able to extract a region of interest of an image from any detected feature indicative of its position. The system may be operated in conjunction with a card reader to provide two different sets of playing card data to a tracking module, which may reconcile the provided data in order to detect inconsistencies with respect to playing cards dealt on the table.
Description
- The present application claims priority from U.S. provisional patent applications No. 60/676,936, filed May 3, 2005; 60/693,406, filed Jun. 24, 2005; 60/723,481, filed Oct. 5, 2005; 60/723,452, filed Oct. 5, 2005; 60/736,334, filed Nov. 15, 2005; 60/760,365, filed Jan. 20, 2005; and 60/771,058, filed Feb. 8, 2006.
- Casinos offer a wide variety of gambling activities to accommodate players and their preferences. Some of those activities reward strategic thinking while others are games of pure chance, but each one of them obeys a strict set of rules that favours the casino over its clients.
- The success of a casino relies partially on the efficiency and consistency with which those rules are applied by the dealer. A pair of slow dealing hands or an undeserved payout may have substantial consequences on profitability.
- Another critical factor is the consistency with which those rules are respected by the player. Large sums of money travel through the casino, tempting players to bend the rules. Again, an undetected card switch or complicity between a dealer and a player may be highly detrimental to profitability.
- For those reasons among others, casinos have traditionally invested tremendous efforts in monitoring gambling activities. Initially, the task was performed manually, a solution that was both expensive and inefficient. However, technological innovations have been offering advantageous alternatives that reduce costs while increasing efficiency.
- One of the most important aspects of table game monitoring consists in recognizing playing cards, or at the very least, their value with respect to the game being played. Such recognition is particularly challenging when the card corner or the central region of a playing card is undetectable within an overhead image of a card hand, or more generally, within that of an amalgam of overlapping objects. Current solutions for achieving such recognition bear various weaknesses, especially when confronted with those particular situations.
- U.S. patent application Ser. No. 11/052,941, titled “Automated Game Monitoring”, by Tran, discloses a method of recognizing a playing card positioned on a table within an overhead image. The method consists in detecting the contour of the card, validating the card from its contour, detecting adjacent corners of the card, projecting the boundary of the card based on the adjacent corners, binarizing pixels within the boundary, and counting the number of pips to identify the value of the card. While such a method is practical for recognizing a solitary playing card, or at least one that is not significantly overlapped by other objects, it may not be applicable in cases where the corner or central region of the card is undetectable due to the presence of overlapping objects. It also does not provide a method of distinguishing face cards. Furthermore, it does not provide a method of extracting a region of interest encompassing a card identifying symbol when only a partial card edge is available or when card corners are not available.
- A paper titled “Introducing Computers to Blackjack: Implementation of a Card Recognition System Using Computer Vision Techniques”, written by G. Hollinger and N. Ward, proposes the use of neural networks to distinguish face cards. The method proposes determining a central moment of individual playing cards to determine a rotation angle. This approach of determining a rotation angle is not appropriate for overlapping cards forming a card hand. They propose counting the number of pips in the central region of the card to identify number cards. This approach of pip counting will not be feasible when a card is significantly overlapped by another object. They propose training three neural networks to recognize face card symbols extracted from an upper left region of a face card, where each of the networks would be dedicated to a distinct face card symbol. The neural network is trained using a scaled image of the card symbol. A possible disadvantage of trying to directly recognize images of a symbol using a neural network is that it may have insufficient recognition accuracy especially under conditions of stress such as image rotation, noise, insufficient resolution and lighting variations.
- Several references propose to achieve such recognition by endowing each playing card with detectable and identifiable sensors. For instance, U.S. patent application Ser. No. 10/823,051, titled “Wireless monitoring of playing cards and/or wagers in gaming”, by SOLTYS, discloses playing cards bearing a conductive material that may be wirelessly interrogated to achieve recognition in any plausible situation, regardless of visual obstructions. One disadvantage of their implementation is that such cards are more expensive than normal playing cards. Furthermore, adhering casinos would be restricted to dealing such special playing cards instead of those of their liking.
- Card recognition is particularly instrumental in detecting inconsistencies on a game table, particularly those resulting from illegal procedures. However, such detection has yet to be made entirely automated and seamless, as it still requires some form of human intervention.
- MP Bacc, a product marketed by Bally Gaming for detecting an inconsistency within a game of Baccarat, consists of a card shoe reader for reading bar-coded cards as they are being dealt, a barcode reader built into a special table for reading cards that were dealt, as well as a software module for comparing data provided by the card reader and discard rack.
- The software module verifies that the cards that have been removed from the shoe correspond to those that have been inserted into the barcode reader on the table. It also verifies that the order in which the cards have been removed from the shoe corresponds to the order in which they were placed in the barcode reader. One disadvantage of this system is that it requires the use of bar-coded cards and barcode readers to be present in the playing area. The presence of such devices in the playing area may be intrusive to players. Furthermore, dealers may need to be trained to use the special devices and therefore the system does not appear to be seamless or natural to the existing playing environment.
- It would be desirable to be provided with a system for recognizing playing cards positioned on a game table in an accurate and efficient manner.
- It would be desirable to be provided with a method of recognizing standard playing cards positioned on a game table without having to detect their corner.
- It would also be desirable to be provided with a seamless, automated, and reliable system for detecting inconsistencies on a game table and providing an accurate description of the context in which detected inconsistencies occurred.
- An exemplary embodiment is directed to a system for identifying a gaming object on a gaming table comprising at least one overhead camera for capturing an image of the table; a detection module for detecting a feature of the object on the image; a search module for extracting a region of interest of the image that describes the object from the feature; a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest; and an identity module trained to recognize the object from the transformed region.
- According to another embodiment, at least one factor attributable to casino and table game environments and gaming objects impedes reliable recognition of said object by said statistical classifier when trained to recognize said object from said region of interest without transformation by said feature space module.
- Another embodiment is directed to a method of identifying a value of a playing card placed on a game table comprising: capturing an image of the table; detecting at least one feature of the playing card on the image; delimiting a target region of the image according to the feature, wherein the target region overlaps a region of interest, and the region of interest describes the value; scanning the target region for a pattern of contrasting points; detecting the pattern; delimiting the region of interest of the image according to a position of the pattern; and analyzing the region of interest to identify the value.
- Another embodiment is directed to a system for detecting an inconsistency with respect to playing cards dealt on a game table comprising: a card reader for determining an identity of each playing card as it is being dealt on the table from the shoe; an overhead camera for capturing images of the table; a recognition module for determining an identity of each card positioned on the table from the images; and a tracking module for comparing the identity determined by the card reader with the identity determined by the recognition module, and detecting the inconsistency.
- For a better understanding of embodiments of the present invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings which aid in understanding and in which:
-
FIG. 1 is an overhead view of a card game; -
FIG. 2 is a side plan view of an imaging system; -
FIG. 3 is a side plan view of an overhead imaging system; -
FIG. 4 is a top plan view of a lateral imaging system; -
FIG. 5 is an overhead view of a gaming table containing RFID detectors; -
FIG. 6 is a block diagram of the components of an exemplary embodiment of a system for tracking gaming objects; -
FIG. 7 is a plan view of card hand representations; -
FIG. 8 is a flowchart of a first embodiment of an IP module; -
FIG. 9 is an overhead view of a gaming table with proximity detection sensors; -
FIG. 10 is a plan view of a card position relative to proximity detection sensors; -
FIG. 11 illustrates an overhead image of a card hand where the corners of a card are undetectable; -
FIG. 12 is a flowchart describing the preferred method of extracting a region of interest from a card edge; -
FIG. 13 illustrates an application of the preferred method of extracting a region of interest from a card edge; -
FIG. 14 is a flowchart describing another method for extracting a region of interest from a card edge; -
FIG. 15 illustrates an application of another method of extracting a region of interest from a card edge; -
FIG. 16 is a block diagram of the preferred system for identifying a gaming object on a gaming table; -
FIG. 17 illustrates an example of a feature space that may be used for recognition purposes; -
FIG. 18 is a flowchart describing a method of detecting inconsistencies with respect to playing cards dealt on a game table; -
FIG. 19 illustrates a first application of the method of detecting inconsistencies with respect to playing cards dealt on a game table; -
FIG. 20 illustrates a second application of the method of detecting inconsistencies with respect to playing cards dealt on a game table; -
FIG. 21 illustrates a third application of the method of detecting inconsistencies with respect to playing cards dealt on a game table; -
FIG. 22 illustrates a Feed Forward Neural Network; -
FIG. 23 illustrates Haar feature classifiers; -
FIG. 24 is a flowchart describing a method of calibrating an imaging system within the context of table game tracking; and -
FIG. 25 illustrates a combination of weak classifiers into one strong classifier as achieved through a boosting module.
- In the following description of exemplary embodiments, we will use the card game of blackjack as an example to illustrate how the embodiments may be utilized.
- Referring now to
FIG. 1, an overhead view of a card game is shown generally as 10. More specifically, FIG. 1 is an example of a blackjack game in progress. A gaming table is shown as feature 12. Feature 14 is a single player and feature 16 is the dealer. Player 14 has three cards 18 dealt by dealer 16 within dealing area 20. The dealer's cards are shown as feature 22. In this example dealer 16 utilizes a card shoe 24 to deal cards within dealing area 20. Within gaming table 12 there are a plurality of betting regions 26 in which a player 14 may place a bet. A bet is placed through the use of chips 28. Chips 28 are wagering chips used in a game, examples of which are plaques, jetons, wheelchecks, Radio Frequency Identification Device (RFID) embedded wagering chips and optically encoded wagering chips. - An example of a bet being placed by
player 14 is shown as chips 28 a within betting region 26 a. Dealer 16 utilizes chip tray 30 to receive and provide chips 28. Feature 32 is an imaging system, which is utilized by the present invention to provide overhead imaging and optional lateral imaging of game 10. An optional feature is a player identity card 34, which may be utilized by the present invention to identify a player 14. - At the beginning of every
game, players 14 that wish to play place their wager, usually in the form of gaming chips 28, in a betting region 26 (also known as a betting circle or wagering area). Chips 28 can be added to a betting region 26 during the course of the game as per the rules of the game being played. The dealer 16 then initiates the game by dealing the playing cards from shoe 24. The shoe 24 can take different embodiments, including non-electromechanical types and electromechanical types. The shoe 24 can be coupled to an apparatus (not shown) to read, scan or image cards being dealt from the shoe 24. The dealer 16 can deal the playing cards within dealing area 20. The dealing area 20 may have a different shape or a different size than shown in FIG. 1. The dealing area 20, under normal circumstances, is clear of foreign objects and usually only contains playing cards, a player identity card 34 and dice. A player identity card 34 is an identity card that a player 14 may possess, which is used by the player to provide identity data and assist in obtaining complimentary (“comps”) points from a casino. A player identity card 34 may be used to collect comp points, which in turn may be redeemed later on for comps. Dealers may have dealer identity cards (not shown) similar to player identity cards that dealers use to register themselves at the table. - During the progression of the game,
playing cards are dealt within dealing area 20 by the dealer 16. The dealing area 20 may have specific regions outlined on the table 12 where the cards are placed.
- Referring now to
FIG. 2, a side plan view of an imaging system is shown. This is imaging system 32 of FIG. 1. Imaging system 32 comprises overhead imaging system 40 and optional lateral imaging system 42. Imaging system 32 can be located on or beside the gaming table 12 to image a gaming region from a top view and/or from a lateral view. Overhead imaging system 40 can periodically image a gaming region from a planar overhead perspective. The overhead imaging system 40 can be coupled to the ceiling or to a wall or any location that would allow an approximate top view of the table 12. The optional lateral imaging system 42 can image a gaming region from a lateral perspective. The imaging systems are connected to wiring 44 which runs through tower 46. - The
imaging system 32 utilizes periodic imaging to capture a video stream at a specific number of frames over a specific period of time, such as, for example, thirty frames per second. Periodic imaging can also be used by an imaging system 32 when triggered via software or hardware means to capture an image upon the occurrence of a specific event. An example of a specific event would be if a stack of chips were placed in a betting region 26. An optical chip stack or chip detection method utilizing overhead imaging system 40 can detect this event and can send a trigger to lateral imaging system 42 to capture an image of the betting region 26. In an alternative embodiment, overhead imaging system 40 can trigger an RFID reader to identify the chips. Should there be a discrepancy between the two means of identifying chips, the discrepancy will be flagged. - Referring now to
FIG. 3, a side plan view of an overhead imaging system is shown. Overhead imaging system 40 comprises one or more imaging devices 50 and, optionally, one or more lighting sources 52 (if required), which are each connected to wiring 44. Each imaging device 50 can periodically produce images of a gaming region. Charge-Coupled Device (CCD) sensors, Complementary Metal Oxide Semiconductor (CMOS) sensors, line scan imagers, area-scan imagers and progressive scan imagers are examples of imaging devices 50. Imaging devices 50 may be selective to any frequency of light in the electromagnetic spectrum, including ultra violet, infra red and wavelength selective. Imaging devices 50 may be color or grayscale. Lighting sources 52 may be utilized to improve lighting conditions for imaging. Incandescent, fluorescent, halogen, infra red and ultra violet light sources are examples of lighting sources 52. - An
optional case 54 encloses overhead imaging system 40 and, if so provided, includes a transparent portion 56, as shown by the dotted line, so that imaging devices 50 may view a gaming region. - Referring now to
FIG. 4, a top plan view of a lateral imaging system is shown. Lateral imaging system 42 comprises one or more imaging devices 50 and optional lighting sources 52 as described with reference to FIG. 3. - An
optional case 60 encloses lateral imaging system 42 and, if so provided, includes a transparent portion 62, as shown by the dotted line, so that imaging devices 50 may view a gaming region. - The examples of
overhead imaging system 40 and lateral imaging system 42 are not meant by the inventors to restrict the configuration of the devices to the examples shown. Any number of imaging devices 50 may be utilized and, if a case is used to house the imaging devices 50, the transparent portions
FIG. 24 is a flowchart describing the operation of the calibration module as applied to the overhead imaging system. The calibration process can be: manual, with human assistance; fully automatic; or semi automatic. - Referring back to
FIG. 24, a first step 4800 consists in waiting for an image of the gaming region from the overhead imager(s). The next step 4802 consists in displaying the image to allow the user to select the area of interest where gaming activities occur. For instance, within the context of blackjack gaming, the area of interest can be a box encompassing the betting boxes, the dealing arc, and the dealer's chip tray. - In
step 4804, coefficients for perspective correction are calculated. Such correction consists in an image processing technique whereby an image can be warped to any desired view point. Its application is particularly useful if the overhead imagers are located in the signage and the view of the gaming region is slightly warped. A perfectly overhead view point would be best for further image analysis. A checkerboard or markers on the table may be utilized to assist with calculating the perspective correction coefficients. - Subsequently, in
step 4806, the resulting image is displayed to allow the user to select specific points or regions of interest within the gaming area. For instance, the user may select the position of betting spots and the region encompassing the dealer's chip tray. Other specific regions or points within the gaming area may be selected. - In the
next step 4808, camera parameters such as shutter value and gain value(s) are calculated, and white balancing operations are performed. Numerous algorithms are publicly available to one skilled in the art for performing camera calibration. - In
step 4810, additional camera calibration is performed to adjust the lens focus and aperture. - Once the camera calibration is complete and according to
step 4812, an image of the table layout, clear of any objects on its surface, is captured and saved as a background image. Such an image may be used for detecting objects on the table. The background image may be re-captured at various points during system operation in order to have the most recent background image. - In
step 4814, while the table surface is still clear of objects, additional points of interest such as predetermined markers are captured. - In the
final step 4816, the calibration parameters are stored in memory. - It must be noted that the calibration concepts may be applied for the lateral imaging system as well as other imaging systems.
- In an optional embodiment, continuous calibration checks may be utilized to ensure that the initially calibrated environment remains relevant. For instance a continuous brightness check may be performed periodically, and if it fails, an alert may be asserted through a feedback device indicating the need for re-calibration. Similar periodic, automatic checks may be performed for white balancing, perspective correction, and region of interest definition.
- As an example, if lighting in the gaming region changes calibration may need to be performed again. A continuous brightness check may be applied periodically and if the brightness check fails, an alert may be asserted through one of the feedback devices indicating the need for re-calibration. Similar periodic, automatic checks may be performed for white balancing, perspective correction and the regions of interest.
- In an optional embodiment, a white sheet similar in shade to a playing card surface may be placed on the table during calibration in order to determine the value of the white sheet at various points on the gaming table and consequently the lighting conditions at these various points. The recorded values may be subsequently utilized to determine threshold parameters for detecting positions of objects on the table.
- It must be noted that not all steps of calibration need human input. Certain steps such as white balancing may be performed automatically.
- In addition to the imaging systems described above, exemplary embodiments may also make use of RFID detectors for gambling chips containing an RFID.
FIG. 5 is an overhead view of a gaming table containingRFID detectors 70. When one ormore chips 28 containing an RFID are placed on anRFID detector 70 situated below a bettingregion 26 the values of thechips 28 can be detected by theRFID detector 70. The same technology may be utilized to detect the values of RFID chips within thechip tray 30. - Referring now to
FIG. 6, a block diagram of the components of an exemplary embodiment is shown. Identity and Positioning module (IP module) 80 identifies the value and position of cards on the gaming table 12. Intelligent Position Analysis and Tracking module (IPAT module) 84 performs analysis of the identity and position data of cards and interprets them intelligently for the purpose of tracking game events, game states and general game progression. The Game Tracking module (GT module) 86 processes data from the IPAT module 84 and keeps track of game events and game status. The GT module 86 can optionally obtain input from Bet Recognition module 88. Bet Recognition module 88 identifies the value of wagers placed at the game. Player Tracking module 90 keeps track of patrons and players that are participating at the games. An optional dealer tracking module can keep track of the dealer dealing at the table. Surveillance module 92 records video data from imaging system 32 and links game event data to recorded video. Surveillance module 92 provides efficient search and replay capability by way of linking game event time stamps to the recorded video. Analysis and Reporting module 94 analyzes the gathered data in order to generate reports on players, tables and casino personnel. Example reports include statistics on game related activities such as profitability, employee efficiency and player playing patterns. Events occurring during the course of a game can be analyzed and appropriate actions can be taken, such as player profiling, procedure violation alerts or fraud alerts. -
Modules 80 to 94 communicate with one another through a network 96. A 100 Mbps Ethernet Local Area Network or Wireless Network can be used as a digital network. The digital network is not limited to the specified implementations and can be of any other type, including local area network (LAN), Wide Area Network (WAN), wired or wireless Internet, or the World Wide Web, and can take the form of a proprietary extranet. -
Controller 98, such as a processor or multiple processors, can be employed to execute modules 80 to 94 and to coordinate their interaction amongst themselves, with the imaging system 32 and with input/output devices 100, optional shoe 24 and optional RFID detectors 70. Further, controller 98 utilizes data stored in database 102 for providing operating parameters to any of the modules 80 to 94. Modules 80 to 94 may write data to database 102 or collect stored data from database 102. Input/Output devices 100, such as a laptop computer, may be used to input operational parameters into database 102. Examples of operational parameters are the position coordinates of the betting regions 26 on the gaming table 12, position coordinates of the dealer chip tray 30, game type and game rules. - Before describing how the present invention may be implemented we first provide some preliminary definitions. Referring now to
FIG. 7, a plan view of card representations is shown. A card or card hand is first identified by an image from the imaging system 32 as a blob 110. A blob may be any object in the image of a gaming area, but for the purposes of this introduction we will refer to blobs 110 that are cards and card hands. The outer boundary of blob 110 is then traced to determine a contour 112, which is a sequence of boundary points forming the outer boundary of a card or a card hand. In determining a contour, digital image thresholding is used to establish thresholds of grey. In the case of a card or card hand, the blob 110 would be white and bright on a table. From the blob 110, a path is traced around its boundary until the contour 112 is established. A contour 112 is then examined for regions of interest (ROI) 118, which identify a specific card. Although in FIG. 7 the ROI 118 has been shown to be the rank and suit of a card, an alternative ROI could be used to identify the pip pattern in the centre of a card. From the information obtained from ROIs 118 it is possible to identify cards in a card hand 120. -
IP module 80 may be implemented in a number of different ways. In a first embodiment, overhead imaging system 32 (see FIG. 2) located above the surface of the gaming table provides overhead images. An overhead image need not be at precisely ninety degrees above the gaming table 12. In one embodiment it has been found that seventy degrees works well to generate an overhead view. An overhead view enables the use of two dimensional Cartesian coordinates of a gaming region. One or more image processing algorithms process these overhead images of a gaming region to determine the identity and position of playing cards on the gaming table 12. - Referring now to
FIG. 8, a flowchart of an embodiment of an IP module 80 is shown. Beginning at step 140, initialization and calibration of global variables occurs. Examples of calibration are manual or automated setting of camera properties for an imager 32, such as shutter value, gain levels and threshold levels. In the case of thresholds, a different threshold may be stored for each pixel in the image, or different thresholds may be stored for different regions of the image. Alternatively, the threshold values may be dynamically calculated from each image. Dynamic determination of a threshold would calculate the threshold level to be used for filtering out playing cards from a darker table background. - Moving to step 142, the process waits to receive an overhead image of a gaming region from
overhead imaging system 40. At step 144, a thresholding algorithm is applied to the overhead image in order to differentiate playing cards from the background to create a threshold image. A background subtraction algorithm may be combined with the thresholding algorithm for improved performance. Contrast information of the playing card against the background of the gaming table 12 can be utilized to determine static or adaptive threshold parameters. Static thresholds are fixed, while dynamic thresholds may vary based upon input such as the lighting on a table. The threshold operation can be performed on a gray level image or on a color image. Step 144 requires that the surface of game table 12 be visually contrasted against the card. For instance, if the surface of game table 12 is predominantly white, then a threshold may not be effective for obtaining the outlines of playing cards. The output of the thresholded image will ideally show the playing cards as independent blobs 110. This may not always be the case due to issues of motion or occlusion. Other bright objects such as a dealer's hand may also be visible as blobs 110 in the thresholded image. Filtering operations such as erosion, dilation and smoothing may optionally be performed on the thresholded image in order to eliminate noise or to smooth the boundaries of a blob 110. - In the
next step 146, the contour 112 corresponding to each blob 110 is detected. A contour 112 can be a sequence of boundary points of the blob 110 that more or less define the shape of the blob 110. The contour 112 of a blob 110 can be extracted by traversing along the boundary points of the blob 110 using a boundary following algorithm. Alternatively, a connected components algorithm may also be utilized to obtain the contour 112. - Once the
contours 112 have been obtained, processing moves to step 148, where shape analysis is performed in order to identify contours that are likely not cards or card hands and eliminate these from further analysis. By examining the area of a contour 112 and the external boundaries, a match may be made to the known size and/or dimensions of cards. If a contour 112 does not match the expected dimensions of a card or card hand, it can be discarded. - Moving next to step 150,
line segments 114 forming the card and card hand boundaries are extracted. One way to extract line segments is to traverse along the boundary points of the contour 112 and test the traversed points with a line fitting algorithm. Another line detection algorithm that may be utilized is the Hough Transform. At the end of step 150, line segments 114 forming the card or card hand boundaries are obtained. It is to be noted that, in alternate embodiments, straight line segments 114 of the card and card hand boundaries may be obtained in other ways. For instance, straight line segments 114 can be obtained directly from an edge detected image. For example, an edge detector such as the Laplace edge detector can be applied to the source image to obtain an edge map of the image from which straight line segments 114 can be detected. These algorithms are non-limiting examples of methods to extract positioning features, and one skilled in the art might use alternate methods to extract these card and card hand positioning features. - Moving to step 152, one or
more corners 116 of cards can be obtained from the detected straight line segments 114. Card corners 116 may be detected directly from the original image or the thresholded image by applying a corner detector algorithm, for example a template matching method using templates of corner points. Alternatively, a corner 116 may be detected by traversing points along contour 112 and fitting the points to a corner shape. Corner points 116 and line segments 114 are then utilized to create a position profile for cards and card hands, i.e. where they reside in the gaming region. - Moving to step 154,
card corners 116 are utilized to obtain a Region of Interest (ROI) 118 encompassing a card identifying symbol, such as the number of the card and the suit. A card identifying symbol can also include features located in the card, such as the arrangement of pips on the card, or can be some other machine readable code.
- Corners of a card are highly indicative of the position of a region of interest. For this reason, they constitute the preferred reference points for extracting regions of interest. Occasionally, corners of a card may be undetectable within an amalgam of overlapping gaming objects, such as a card hand. The present invention provides a method of identifying such cards by extracting a region of interest from any detected card feature that may constitute a valid reference point.
-
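A minimal sketch of the line fitting and corner derivation of steps 150 and 152 (plain Python; the least-squares residual test, its tolerance, and the toy coordinates are illustrative assumptions, not the patented implementation):

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b (assumes a non-vertical run of points);
    returns slope, intercept and the worst residual over the points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    worst = max(abs(y - (a * x + b)) for x, y in points)
    return a, b, worst

def is_straight(points, tol=0.5):
    """Traversed contour points form a line segment 114 if residuals stay small."""
    return fit_line(points)[2] <= tol

def corner_of(seg1, seg2):
    """Corner 116 as the intersection of the lines through two boundary segments."""
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel edges: no corner
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py
```

Two adjacent card edges fed to `corner_of` yield a candidate corner point for the position profile.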
FIG. 11 illustrates an overhead image of a card hand 3500 comprised of cards 3502, 3504 and 3506. The card 3504 overlaps the card 3502 and is overlapped by the card 3506 such that corners of the card 3504 are not detectable. - According to a preferred embodiment of the invention, the overhead image is analyzed to obtain the contour of the
card hand 3500. Subsequently, line segments of the card hand 3500 are extracted. The detected line segments are thereafter utilized to detect convex corners.
- As mentioned herein above, corners constitute the preferred reference points for extracting Regions of Interest. In the following description, the term “index corner” refers to a corner of a card in the vicinity of which a region of interest is located. The term “blank corner” refers to a corner of a card that is not an index corner.
- The
corner 3530 is the first one to be considered. A sample of pixels drawn within the contour, in the vicinity of the corner 3530, is analyzed in order to determine whether the corner 3530 is an index corner. A sufficient number of contrasting pixels are detected and the corner 3530 is identified as an index corner. Consequently, a region of interest is projected and extracted according to the position of the corner 3530, as well as the width, height, and offset of regions of interest from index corners. - Similarly, the
corner 3532 is identified as an index corner and a corresponding region of interest is projected and extracted. - The
corner 3534 is the third to be considered. Corner 3534 is identified as a blank corner and, due to its coordinates, is dismissed from further analysis. - Similarly to
corners 3530 and 3532, the corner 3536 is identified as an index corner and a corresponding region of interest is projected and extracted. - The
corners that remain are likewise identified as blank corners and are dismissed from further analysis. - As a result of the corner analysis, the regions of interest of the
cards 3502 and 3506 of the card hand 3500 have been extracted. However, none of the corners of the card 3504 has been detected and consequently, no corresponding region of interest has been extracted. - In order to extract any remaining regions of interest, the extracted
line segments of the card hand 3500 are utilized according to a method provided by the present invention. - In
FIG. 12, a flowchart describing the preferred method for extracting a region of interest from a card edge segment is provided. It must be noted that a partial card edge segment may suffice for employing this method. - In
step 3600, two scan line segments are determined. The scan line segments are of the same length as the analyzed line segment. Furthermore, the scan line segments are parallel to the analyzed line segment. Finally, a first of the scan line segments is offset according to a predetermined offset of the region of interest from a corresponding card edge. The second of the scan line segments is offset from the first scan line segment according to the predetermined width of the rank and suit symbols. - In
step 3602, pixel rows delimited by the scan line segments are scanned, and for each of the rows a most contrasting color or brightness value is recorded. - Subsequently, in step 3604, the resulting sequence of most contrasting color or brightness values, referred to as a contrasting value scan line segment, is analyzed to identify regions that may correspond to a card rank and suit. The analysis may be performed according to pattern matching or pattern recognition algorithms.
- According to the preferred embodiment, the sequence of contrasting color values is convolved with a mask of properties expected from rank characters and suit symbols. For instance, in the context of a white card having darker coloured rank characters and suit symbols, the mask may consist of a stream of darker pixels corresponding to the height of rank characters, a stream of brighter pixels corresponding to the height of spaces separating rank characters and suit symbols, and a final stream of darker pixels corresponding to the height of suit symbols. The result of the convolution will give rise to peaks where a sequence of the set of contrasting color values corresponds to the expected properties described by the mask.
- Several methods are available for performing such convolution, including but not limited to cross-correlation, squared difference, correlation coefficient, as well as their normalized versions.
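A toy sketch of steps 3600 to 3606 using the squared-difference variant listed above (the brightness values, the three-element dark/bright/dark mask, and the strip layout are invented for illustration):

```python
def contrast_scan(rows):
    """Step 3602: keep the most contrasting (here: darkest) value of each pixel
    row delimited by the two scan line segments."""
    return [min(row) for row in rows]

def match_scores(seq, mask):
    """Step 3604 via squared differences: slide the expected dark/bright/dark
    brightness mask along the sequence; a lower score is a better match."""
    return [sum((s - m) ** 2 for s, m in zip(seq[i:i + len(mask)], mask))
            for i in range(len(seq) - len(mask) + 1)]

# Toy strip: bright card background, a dark rank stroke, a bright gap, a dark suit stroke.
rows = [[250, 252], [10, 14], [245, 246], [12, 15], [250, 251], [249, 250]]
seq = contrast_scan(rows)           # the contrasting value scan line segment
mask = [0, 255, 0]                  # dark rank, bright space, dark suit
scores = match_scores(seq, mask)
best = min(range(len(scores)), key=scores.__getitem__)  # step 3606: best offset
```

The offset `best` would anchor the region of interest along the analyzed card edge.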
- In
step 3606, the resulting peaks are detected, and the corresponding regions of interest are extracted.
-
FIG. 13 illustrates an analysis of the line segment 3510 according to the preferred embodiment of the invention. - First, two scan line segments, 3700 and 3702, are determined. The
scan line segments 3700 and 3702 are of the same length as the line segment 3510. Furthermore, the scan line segments are parallel to the line segment 3510. Finally, the scan line segment 3700 is offset from the line segment 3510 according to a predetermined offset of the region of interest from a corresponding card edge. The scan line segment 3702 is offset from the scan line segment 3700 according to the predetermined width of the rank characters and suit symbols. - Subsequently, rows delimited by the
scan line segments 3700 and 3702 are scanned, and for each row a most contrasting color or brightness value is recorded, resulting in a sequence of contrasting brightness values 3704, also referred to as a contrasting value scan line segment. - Once the
sequence 3704 is obtained, it is convolved with a mask 3706 of properties expected from rank characters and suit symbols. The mask 3706 consists of a stream of darker pixels corresponding to the height of rank characters, a stream of brighter pixels corresponding to the height of spaces separating rank characters and suit symbols, and a final stream of darker pixels corresponding to the height of suit symbols. - A
result 3708 of the convolution gives rise to a peak 3710 where a sub-sequence of sequence 3704 corresponds to the expected properties described by the mask 3706. Finally, a region of interest 3714 corresponding to the card 3502 is extracted. - In
FIG. 14, a flowchart describing another embodiment of the method for extracting a region of interest from a line segment is provided. - In
step 3800, several scan line segments are determined. The scan line segments are of the same length as the analyzed line segment. Furthermore, the scan line segments are parallel to the analyzed line segment. Finally, a first of the scan line segments is offset from the analyzed line segment according to a predetermined offset of the region of interest from a corresponding card edge. The other scan line segments are offset from the first scan line segment according to the predetermined width of the rank and suit symbols. The scan line segments are positioned in that manner to ensure that at least some of them would intersect any characters and symbols located along the analyzed line segment. - In
step 3802, each scan line segment is scanned and points of contrasting color or brightness values are recorded to assemble a set of contrasting points, which we will refer to as seed points. - Subsequently, in
step 3804, the set of contrasting points is analyzed to identify clusters that appear to be defining, at least partially, rank characters and suit symbols. The clusters can be extracted by grouping the seed points or by further analyzing the vicinity of one or more of the seed points using a region growing algorithm. - Finally, in
step 3806, regions of interest are extracted from the identified clusters of contrasting points. -
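The seed-point flow of steps 3800 to 3806 might be sketched as follows (plain Python; the scan rows, darkness threshold and clustering gap are illustrative assumptions, and the clustering shown is a simple grouping by x-coordinate rather than a full region growing algorithm):

```python
def seed_points(image, scan_rows, thresh=128):
    """Step 3802: record coordinates of contrasting (dark) pixels on the scan rows."""
    return [(y, x) for y in scan_rows for x, px in enumerate(image[y]) if px < thresh]

def cluster_by_x(points, gap=2):
    """Step 3804 (simplified): group seed points whose x-coordinates lie within
    `gap`; each cluster approximates one rank character or suit symbol."""
    clusters = []
    for p in sorted(points, key=lambda q: q[1]):
        if clusters and p[1] - clusters[-1][-1][1] <= gap:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

# Toy card strip: two dark glyphs, a rank around x=1..2 and a suit around x=7..8.
strip = [[250, 20, 25, 250, 250, 250, 250, 30, 22, 250],
         [250, 21, 24, 250, 250, 250, 250, 28, 26, 250]]
seeds = seed_points(strip, scan_rows=[0, 1])
glyphs = cluster_by_x(seeds)       # step 3806 would extract one ROI per cluster
```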
FIG. 15 illustrates an analysis of the line segment 3510 according to this embodiment of the invention. - First, two
scan line segments 3900 and 3902 are determined. The scan line segments 3900 and 3902 are of the same length as the line segment 3510. Furthermore, the scan line segments 3900 and 3902 are parallel to the line segment 3510. Finally, the scan line segment 3900 is offset from the line segment 3510 according to a predetermined offset of the region of interest from a corresponding card edge segment. The scan line segment 3902 is offset from the scan line segment 3900 according to the predetermined width of rank characters and suit symbols. The scan line segments 3900 and 3902 are positioned in that manner to ensure that at least some of them would intersect any characters and symbols located along the line segment 3510. - The
scan line segments 3900 and 3902 are scanned, and points of contrasting color or brightness values are recorded as seed points. - Finally, regions of
interest are extracted from the clusters identified around the seed points. - Referring back to
FIG. 11, the same method is applied to the line segments of the card hand 3500, and the remaining regions of interest are thereby extracted.
- Although the invention has been described within the context of a hand of cards, it may be applied within the context of a single gaming object, or an amalgam of overlapping gaming objects.
- Although the invention has been described as preceded by a corner analysis, it may be applied without any previous corner analysis. However, it is usually preferable to start with a corner analysis since corners are preferred over line segments as reference points.
- Although the invention has been described as a method of extracting a region of interest from a card edge, it may do so from any detected card feature, provided that the feature constitutes a valid reference point for locating a region of interest. For instance, the method may be applied to extract regions of interest from detected corners, or detected pips, instead of line segments. Such versatility is a sizeable asset within the context of table games, where some playing cards may present a very limited number of detectable features.
- It is important to note that the preceding corner analysis could have been performed according to the invention.
- Referring back to
FIG. 8, at step 156, a recognition method may be applied to identify the value of the card. In one embodiment, the ROI 118 is rotated upright and a statistical classifier, also referred to as a machine learning model, can be applied to recognize the symbol. Prior to recognition, the ROI 118 may be pre-processed by thresholding the image in the ROI 118 and/or narrowing the ROI 118 to encompass the card identifying symbols. Examples of statistical classifiers that may be utilized with this invention include Neural Networks, Support Vector Machines, Hidden Markov Models and Bayesian Networks. A Feed-forward Neural Network is one example of a statistical classifier that may be used with this system. Training of the statistical classifier may happen in a supervised or unsupervised manner. In an alternate embodiment, a method that does not rely on a statistical classifier, such as template matching, may be utilized. In yet another embodiment, the pattern of pips on the cards may be utilized to recognize the cards, provided a sufficient portion of the pattern is visible in a card hand. A combination of recognition algorithms may be used to improve accuracy of recognition.
- The present invention provides a system for identifying a gaming object on a gaming table in an efficient and seamless manner. The system comprises at least one overhead camera for capturing a plurality of images of the table; a detection module for detecting a feature of the object on an image of the plurality; a search module for extracting a region of interest of the image that describes the object from the feature; a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest; a dimensionality reduction module for reducing the transformed region into a reduced representation according to dimensionality reduction algorithms; and an identity module trained to recognize the object from the transformed region.
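As an illustration of the template matching alternative mentioned above (not the patented implementation), a binarized ROI can be compared against stored symbol templates by counting agreeing pixels; the 3x3 "templates" and their labels below are invented for the sketch:

```python
def similarity(roi, template):
    """Fraction of pixels on which a binarized ROI agrees with a template."""
    cells = [(r, t) for r_row, t_row in zip(roi, template) for r, t in zip(r_row, t_row)]
    return sum(r == t for r, t in cells) / len(cells)

def recognize(roi, templates):
    """Return the label of the best matching template (toy template matcher)."""
    return max(templates, key=lambda label: similarity(roi, templates[label]))

# Invented 3x3 binary patterns standing in for rank/suit symbol images.
templates = {
    "A": [[0, 1, 0], [1, 1, 1], [1, 0, 1]],
    "4": [[1, 0, 1], [1, 1, 1], [0, 0, 1]],
}
roi = [[0, 1, 0], [1, 1, 1], [1, 0, 1]]   # a captured ROI after thresholding
label = recognize(roi, templates)
```

A real system would use full-resolution templates and, as the text notes, could combine this with statistical classifiers for accuracy.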
- Within the context of the system illustrated in
FIG. 6, the overhead camera corresponds to the Imager 32. As for the detection module, the search module, the feature space module, the dimensionality reduction module, and the identification module, they are components of the IP module 80. -
FIG. 16 is a block diagram of the preferred system for identifying a gaming object on a gaming table. - The
Imager 32 provides an overhead image of the game table to a Detection module 4000. Subsequently, the Detection Module 4000 detects features of potential gaming objects placed on the game table. Such detection may be performed according to any of the aforementioned methods; for instance, it may consist of the steps illustrated in the flowchart of FIG. 8.
- According to one embodiment of the present invention, the
Detection Module 4000 comprises a cascade of classifiers trained to recognize specific features of interest such as corners and edges. - According to another embodiment of the present invention, the system further comprises a Booster Module, and the
Detection Module 4000 comprises a cascade of classifiers. The Booster module serves the purpose of combining weak classifiers of the cascade into a stronger classifier, as illustrated in FIG. 25. It may operate according to one of several boosting algorithms including Discrete Adaboost, Real Adaboost, LogitBoost, and Gentle Adaboost. - Referring back to
FIG. 16, the Detection Module 4000 provides the image along with the detected features to a Search Module 4002. The latter extracts regions of interest within the image from the detected features. Such extraction may be performed according to any of the aforementioned methods; for instance, it may consist of the steps illustrated in the flowchart of FIG. 12. The extracted regions of interest may be further processed by applying image thresholding, by rotating, or by refining the region of interest. - The
Search Module 4002 provides the extracted regions of interest to the Feature Space (FS) Module 4004. For each region of interest, the FS Module 4004 transforms a provided representation into a feature space, or a set of feature spaces, that is more appropriate for recognition purposes. - According to one embodiment, each region of interest provided to the
FS Module 4004 is represented as a grid of pixels, wherein each pixel is assigned a color or brightness value. - Prior to performing a transformation, the
FS Module 4004 must select a desirable feature space according to a required type, speed, and robustness of recognition. The selection may be performed in a supervised manner, an unsupervised manner, or both. -
FIG. 17 illustrates an example of a feature space that may be used for recognition purposes. The feature space consists in a histogram of the grayscale values stored in each column of a pixel grid. - Once a feature space is selected, the
FS Module 4004 applies a corresponding feature space transformation on a corresponding image.
- It is important to distinguish feature space transformations from geometrical transformations. The geometrical transformation of an image consists in reassigning pixel positions within a corresponding grid. While such a transformation does modify an image, it does not modify underlying semantics; the means by which the original image and its transformed version are represented is the same. On the other hand, feature space transformations modify underlying semantics.
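A sketch of a feature space of the kind FIG. 17 describes, a histogram of the grayscale values in each column of a pixel grid, concatenated here into one feature vector (the bin count is an arbitrary choice for the sketch):

```python
def column_histograms(grid, bins=4):
    """Build a per-column histogram of grayscale values (0..255) and concatenate
    the histograms into a single feature vector."""
    features = []
    for column in zip(*grid):
        hist = [0] * bins
        for px in column:
            hist[min(px * bins // 256, bins - 1)] += 1
        features.extend(hist)
    return features

# 2x2 toy grid: a dark column next to a bright one.
vec = column_histograms([[0, 255], [10, 240]], bins=2)
```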
- One example of a feature space transformation consists in modifying the representation of colours within a pixel grid from RGB (Red, Green, and Blue) to HSV (Hue, Saturation, and Value or Brightness). In this particular case, the data is not modified, but its representation is. Such a transformation is advantageous in cases where it is desirable for the brightness of a pixel to be readily available. Furthermore, the HSV space is less sensitive to a certain type of noise than its RGB counterpart.
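The RGB-to-HSV change of representation can be sketched with the standard library's colorsys module (a generic conversion, not code from the patent):

```python
import colorsys

def rgb_grid_to_hsv(rgb_grid):
    """Same pixels, HSV semantics: brightness (V) becomes directly available."""
    return [[colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for (r, g, b) in row]
            for row in rgb_grid]

pixels = [[(255, 0, 0), (128, 128, 128)]]   # pure red, mid grey
hsv = rgb_grid_to_hsv(pixels)
```

Note that a neutral grey pixel comes out with zero saturation, so brightness can be read off directly from the V component.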
- The Hough Line Transform is another example of a feature space transformation. It consists in transforming a binary image from a set of pixels to a set of lines. In the new feature space, each vector represents a line whereas in the original space, each vector represents the coordinates of a pixel. Consequently, such a transformation is particularly advantageous for applications where lines are to be analyzed.
- Other feature space transformations include various filtering operations such as Laplace and Sobel. Pixels resulting from such transformations store image derivative information rather than image intensity.
-
- Canny edge detection, the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT), and Wavelet transforms are other examples of feature space transformations. Images resulting from FFT and DCT are no longer represented spatially (by a pixel grid), but rather in a frequency domain, wherein each point represents a particular frequency contained in the real-domain image. Such transformations are practical because the resulting feature space is invariant with respect to some transformations, and robust with respect to others. For instance, discarding the higher frequency components of an image resulting from a DCT makes it more resilient to noise, which is generally present in high frequencies. As a result, recognition is more reliable.
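A sketch of a DCT feature space with the high-frequency discarding described above (a pure-Python orthonormal DCT-II; the block size and the number of coefficients kept are arbitrary choices for illustration):

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of a square pixel block (pure-Python sketch)."""
    n = len(block)
    M = [[math.sqrt((1 if k == 0 else 2) / n) * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
          for i in range(n)] for k in range(n)]
    tmp = [[sum(M[k][i] * block[i][j] for i in range(n)) for j in range(n)] for k in range(n)]
    return [[sum(tmp[k][j] * M[l][j] for j in range(n)) for l in range(n)] for k in range(n)]

def low_freq_features(coeffs, keep=2):
    """Keep only the top-left (low-frequency) coefficients, where most of the
    information of a smooth image concentrates; higher frequencies are discarded."""
    return [coeffs[u][v] for u in range(keep) for v in range(keep)]

flat = [[8.0] * 4 for _ in range(4)]   # a perfectly smooth 4x4 block
coeffs = dct2(flat)
features = low_freq_features(coeffs)
```

For a smooth block, all energy lands in the DC coefficient, illustrating why discarding high-frequency components mostly removes noise.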
- Within the context of the present invention, the use of different feature spaces provides for additional robustness with respect to parameters such as lighting variations, brightness, image noise, image resolutions, ambient smoke, as well as geometrical transformations such as rotations and translations. As a result, the system of the present invention provides for greater training and recognition accuracy.
- According to a preferred embodiment of the present invention, Principal Component Analysis (PCA) is the main feature space transformation in the arsenal of the
FS Module 4004. It is a linear transform that selects a new coordinate system for a given data set, such that the greatest variance by any projection of the data set relates to a first axis, known as the principal component, the second greatest variance, on the second axis, and so on. - The first step of the PCA consists in constructing a 2D matrix A of size n×wh where each column is an image vector, given n images of w×h pixels. Each image vector is formed by concatenating all the pixel rows of a corresponding image into vector. The second step consists in computing an average image from the matrix A by summing up all the rows and dividing by n. The resulting clement vector of size (wh) is called
u . The third step consists in subtractingu from all the columns of A to get a mean subtracted matrix B of size (n×wh). The fourth step consists in computing the dot products of all possible image pairs. Let C be the new (n×n) matrix where C[i][j]=dot product of B[i] and B[j]. C is the covariance matrix. The penultimate step consists in Compute the n Eigen values and corresponding Eigen vectors of C. Finally, all Eigen values of C are sorted from the highest Eigen value to the lowest Eigen value. - According to another embodiment, the
FS Module 4004 applies predominantly one or more of the DCT, FFT, Log Polar Domains, or other techniques resulting in edge images. - Referring back to
FIG. 16, and according to the preferred embodiment of the invention, the FS Module 4004 provides the transformed representation, or set of representations, to a Dimensionality Reduction (DR) Module 4006. The DR Module 4006 reduces the dimensionality of the provided representations by applying feature selection techniques, feature extraction techniques, or a combination of both. - According to the preferred embodiment of the present invention, the representations provided by the
FS Module 4004 result from the application of a PCA, and the DR Module 4006 reduces their dimensionality by applying a feature selection technique that consists in selecting a subset of the PCA coefficients that contain the most information. - According to one embodiment of the present invention, the representations provided by the
FS Module 4004 result from the application of a DCT, and the DR Module 4006 reduces their dimensionality by applying a feature selection technique that consists in selecting a subset of the DCT coefficients that contain the most information. - According to another embodiment of the present invention, the
DR Module 4006 reduces the dimensionality of the provided representations by applying a feature extraction technique that consists in projecting them into a feature space of fewer dimensions. - According to another embodiment of the present invention, the representations provided by the
FS Module 4004 result from the application of a DCT, and the DR Module applies a combination of feature selection and feature extraction techniques that consists in selecting a subset of the DCT coefficients that contain the most information, and applying PCA on the selected coefficients. - Within the context of the present invention, the application of dimensionality reduction techniques reduces computational overhead, thereby speeding up the training and recognition procedures performed by the
Identity Module 4008. Furthermore, dimensionality reduction tends to eliminate, or at the very least reduce noise, and therefore, increase recognition and training efficiency. - According to another embodiment of the invention, the
FS Module 4004 provides the transformed representation or set of transformed representations to an Identity Module 4008 trained to recognize gaming objects from dimensionality reduced representations of regions of interest. - Referring back to
FIG. 16 and according to the preferred embodiment of the present invention, the DR Module 4006 provides the dimensionality reduced representations to an Identity Module 4008, which identifies a corresponding gaming object. - Still according to the preferred embodiment of the present invention, the
Identity Module 4008 comprises a statistical classifier trained to recognize gaming objects from dimensionality reduced representations. - According to one embodiment of the present invention, the
Identity Module 4008 comprises a Feed-forward Neural Network such as the one illustrated in FIG. 22, which consists of input nodes, multiple hidden layers, and output nodes. The hidden layers can be partially connected, as those shown in FIG. 22, or fully connected. During the initial supervised training mode, a back propagation learning method is utilized in conjunction with an error function to allow the Neural Network to adjust its internal weights according to the inputs and outputs.
- According to another embodiment of the present invention, the Identification Module comprises a cascade of classifiers.
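A toy feed-forward network with one hidden layer, trained by back propagation against a squared error function, might look as follows (pure Python; the 2-2-1 layer sizes, random seed and learning rate are arbitrary assumptions, and a real classifier here would consume ROI feature vectors rather than a two-element input):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """2-2-1 feed-forward network with sigmoid units and one back propagation
    step per call against a squared error function (illustrative only)."""
    def __init__(self, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.w2 = [rnd.uniform(-1, 1) for _ in range(2)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        self.o = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)))
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        delta_o = (o - target) * o * (1 - o)            # output-layer error term
        for j in range(2):
            grad_h = delta_o * self.w2[j] * self.h[j] * (1 - self.h[j])
            self.w2[j] -= lr * delta_o * self.h[j]      # hidden-to-output weights
            for i in range(2):
                self.w1[j][i] -= lr * grad_h * x[i]     # input-to-hidden weights
        return 0.5 * (o - target) ** 2                  # squared error before update
```

Repeated calls to `train_step` drive the error down, which is the essence of the supervised training mode described.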
- According to another embodiment of the present invention, the system further comprises a Booster Module, and the
Identity Module 4008 comprises a cascade of classifiers. The Booster module serves the purpose of combining weak classifiers of the cascade into a stronger classifier. It may operate according to one of several boosting algorithms including Discrete Adaboost, Real Adaboost, LogitBoost, and Gentle Adaboost. - Referring back to
FIG. 16, and according to one embodiment of the present invention, the system is used to perform deck verification. When such verification is required, the dealer presents the corresponding cards on the table, in response to which the Identity Module 4008 is automatically triggered to provide the rank and suit of each identified card to a Deck Verification Module 4010. The latter module analyzes the provided data to ensure that the deck of cards adheres to a provided set of standards. - According to one embodiment of the present invention, the
Detection Module 4000 recognizes a configuration of playing cards suitable for a deck verification procedure and triggers the Identity Module 4008 to provide the rank and suit of each identified card to a Deck Verification Module 4010. - According to another embodiment of the present invention, the
Identity Module 4008 is manually triggered to provide the rank and suit of each identified card to the Deck Verification Module 4010. - Referring back to
FIG. 8, once the identity and position profile of each visible card in the gaming region has been obtained, the data can be output to other modules at step 158. Examples of data output at step 158 may include the number of card hands, the Cartesian coordinates of each corner of a card in a hand (or other positional information such as line segments), and the identity of the card as a rank and/or suit. - At
step 160, the process waits for a new image and, when one is received, processing returns to step 144. - Referring now to
FIG. 9, an overhead view of a gaming table with proximity detection sensors is shown. In an alternative embodiment, IP module 80 may utilize proximity detection sensors 170. Card shoe 24 is a card shoe reader, which dispenses playing cards and generates signals indicative of card identity. Examples of a card shoe reader 24 include those disclosed in U.S. Pat. No. 5,374,061 to Albrecht, U.S. Pat. No. 5,941,769 to Order, U.S. Pat. No. 6,039,650 to Hill, or U.S. Pat. No. 6,126,166 to Lorson. Commercial card shoe readers, such as the MP21 card reader unit sold by Bally Gaming or the Intelligent Shoe sold by Shuffle Master Inc., may be utilized. In an alternate embodiment of the card shoe reader, a card deck reader such as the readers commercially sold by Bally Gaming and Shuffle Master can be utilized to determine the identity of cards prior to their introduction into the game. Such a card deck reader would pre-determine a sequence of cards to be dealt into the game. An array of proximity detection sensors 170 can be positioned under the gaming table 12 parallel to the table surface, such that periodic sampling of the proximity detection sensors 170 produces a sequence of frames, where each frame contains the readings from the proximity detection sensors. Examples of proximity detection sensors 170 include optical sensors, infra red position detectors, photodiodes, capacitance position detectors and ultrasound position detectors. Proximity detection sensors 170 can detect the presence or absence of playing cards (or other gaming objects) on the surface of gaming table 12. Output from the array of proximity detection sensors can be analog or digital and can be further processed in order to obtain data that represents objects on the table surface as blobs, and thus replace step 142 of FIG. 8. In this embodiment, a shoe 24 would provide information on the card dealt and sensors 170 would provide positioning data.
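The line fitting over sensor readings discussed in connection with FIG. 10 can be sketched with a covariance-based orientation estimate (which, unlike a y = ax + b fit, also handles vertically oriented card edges; the toy sensor frame below is an assumption):

```python
import math

def sensor_orientation(frame):
    """Centroid and principal-axis angle of active proximity sensors (value 1)."""
    pts = [(x, y) for y, row in enumerate(frame) for x, v in enumerate(row) if v]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)   # orientation of the point cloud
    return (mx, my), angle

# A card lying horizontally: one row of sensors reports detection.
frame = [[0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1],
         [0, 0, 0, 0, 0]]
centroid, angle = sensor_orientation(frame)
```

The centroid and angle stand in for the line segments 114 or corners 116 that denser arrays could resolve.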
The density of the sensor array (resolution) will determine what types of object positioning features may be obtained. To assist in obtaining positioning features, further processing may be performed, such as shown in FIG. 10, which is a plan view of a card position relative to proximity detection sensors 170. Sensors 170 provide signal strength information, where the value one represents an object detected and the value zero represents no object detected. Straight lines may be fitted to the readings of sensors 170 using a line fitting method. In this manner proximity detection sensors 170 may be utilized to determine position features such as line segments 114 or corners 116. - In this embodiment, identity data generated from the
card shoe reader 24 and positioning data generated from proximity detection sensors 170 may be grouped and output to other modules. Associating positional data to cards may be performed by the IPAT module 84. - In another alternate embodiment of the
IP module 80, card reading may have an RFID-based implementation. For example, RFID chips embedded inside playing cards may be wirelessly interrogated by RFID antennae or scanners in order to determine the identity of the cards. Multiple antennae may be used to wirelessly interrogate and triangulate the position of the RFID chips embedded inside the cards. Card positioning data may be obtained by wireless interrogation and triangulation, by a matrix of RFID sensors, or via an array of proximity sensors as explained herein. - We shall now describe the function of the Intelligent Position Analysis and Tracking module (IPAT module) 84 (see
FIG. 6). The IPAT module 84 performs analysis of the identity and position data of cards/card hands and interprets them "intelligently" for the purpose of tracking game events, game states and general game progression. The IPAT module may perform one or more of the following tasks: - a) Object modeling;
- b) Object motion tracking;
- c) Points in contour test;
- d) Detect occlusion of cards;
- e) Set status flags for card positional features; and
- f) Separate overlapping card hands into individual card hands.
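The points-in-contour test of task c) is not tied to any particular algorithm in this description; a common choice is the ray-casting test. The sketch below is a minimal illustration under that assumption (function and variable names are not from the source):

```python
def point_in_contour(point, contour):
    """Ray-casting test: count crossings of a horizontal ray cast
    rightward from `point` against the contour's edges; an odd
    count means the point lies inside. `contour` is a list of
    (x, y) vertices in order."""
    x, y = point
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A card-shaped quadrilateral and two test points
card = [(0, 0), (50, 0), (50, 70), (0, 70)]
print(point_in_contour((25, 35), card))   # True (inside)
print(point_in_contour((60, 35), card))   # False (outside)
```

The same test applies unchanged to the irregular contours of overlapping card hands, which is what makes it useful for tasks d) and f) as well.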
- According to the present invention, the
IPAT module 84, in combination with the Imager 32, the IP module 80, and the card shoe 24, may also detect inconsistencies that occur on a game table as a result of an illegal or erroneous manipulation of playing cards. - According to a preferred embodiment of the present invention, the system for detecting inconsistencies that occur on a game table as a result of an illegal or erroneous manipulation of playing cards comprises a card shoe for storing playing cards to be dealt on the table; a card reader for determining an identity and a dealing order of each playing card as it is being dealt on the table from the shoe; an overhead camera for capturing images of the table; a recognition module for determining an identity and a position of each card positioned on the table from the images; and a tracking module for comparing the dealing order and identity determined by the card reader with the identity and the position determined by the recognition module, and detecting the inconsistency.
- Within the context of the system illustrated in
FIG. 6, the card shoe and card reader correspond to the card shoe 24, which comprises an embedded card reader. The overhead camera corresponds to the Imager 32. The recognition module corresponds to the IP module 80. Finally, the tracking module corresponds to the IPAT module 84. - In
FIG. 18, a flowchart describing the interaction between the IPAT module 84, IP module 80, and card shoe 24 for detecting such inconsistencies is provided. In step 4200, the IPAT module 84 is calibrated and its global variables are initialized. In step 4202, the IPAT module 84 receives data from the card shoe 24. - In the preferred embodiment of the present invention, the data is received immediately following each removal of a card from the
card shoe 24. In another embodiment, the data is received following each removal of a predetermined number of cards from the card shoe 24. In yet another embodiment, the data is received periodically. - In the preferred embodiment of the present invention, the data consist of a rank and suit of a last card to be removed from the
card shoe 24. In another embodiment, the data consist of a rank of a last card to be removed from the card shoe 24. - In step 4204, the
IPAT module 84 receives data from theIP module 80. - In the preferred embodiment of the present invention, the data is received periodically. In another embodiment, the data is received in response to the realization of step 4202.
- In the preferred embodiment of the present invention, the data consist of a rank, suit, and position of each card placed on the game table.
- In another embodiment, the data consist of a rank and suit of each card placed on the game table.
- In yet another embodiment, the data consist of a rank of each card placed on the game table.
- In yet another embodiment, the data consist of a suit of each card placed on the game table.
- In yet another embodiment of the present invention, the data consist of a likely rank and suit, as well as a position of each card placed on the game table.
- In yet another embodiment of the present invention, the data consist of a likely rank and a position of each card placed on the game table.
- In yet another embodiment of the present invention, the data consist of a likely suit and a position of each card placed on the game table.
- In step 4206, the
IPAT module 84 compares the data provided by the card shoe 24 with those provided by the IP module 80. - In the preferred embodiment of the present invention, the
IPAT module 84 verifies whether the rank and suit of cards removed from the card shoe 24, as well as the order in which they were removed, correspond to the rank, suit, and position of cards placed on the game table according to a set of rules of the game being played. - In another embodiment, the
IPAT module 84 verifies whether the rank and suit of cards removed from the card shoe 24 correspond to the rank and suit of the cards placed on the game table. - If an inconsistency is detected, the
IPAT module 84 informs the surveillance module 92 according to step 4208. Otherwise, the IPAT module 84 returns to step 4202 as soon as subsequent data is provided by the card shoe 24. - The invention will now be described within the context of monitoring a game of Baccarat. According to the rules of the game, a dealer withdraws four cards from a card shoe and deals two hands of two cards, face down; one for the player, and one for the bank. The player is required to flip the dealt cards and return them to the dealer. The latter organizes the returned cards on the table and determines the outcome of the game. One known form of cheating consists of switching cards. More specifically, a player may hide cards of desirable value, switch a dealt card with one of the hidden cards, flip the illegally introduced card and return it to the dealer. The present invention provides an efficient and seamless means to detect such illegal procedures.
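The comparison of step 4206 can be illustrated as a set comparison between the shoe's dealt sequence and the cards the recognition module sees on the table. The Python sketch below is illustrative only; the (rank, suit) tuple representation and the function name are assumptions, not prescribed by this description:

```python
def find_inconsistencies(dealt, on_table):
    """Illustrative step-4206 check. `dealt` is the (rank, suit)
    sequence read by the card shoe reader; `on_table` is what the
    recognition module identified. Returns cards that appear on
    the table but were never dealt, and dealt cards not yet seen."""
    dealt_set = set(dealt)
    table_set = set(on_table)
    switched_in = sorted(table_set - dealt_set)  # e.g. a hidden card switched in
    missing = sorted(dealt_set - table_set)      # dealt but absent from the table
    return switched_in, missing

# A card-switching situation: a 6H dealt from the shoe was
# replaced on the table by a 4H that was never dealt.
dealt = [("5", "S"), ("6", "H"), ("Q", "C"), ("A", "D")]
table = [("5", "S"), ("4", "H"), ("Q", "C"), ("A", "D")]
print(find_inconsistencies(dealt, table))
# → ([('4', 'H')], [('6', 'H')])
```

A non-empty first list is the signature of a card introduced from outside the shoe, which would trigger the report to the surveillance module in step 4208.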
- As mentioned hereinabove, according to the rules of Baccarat, the dealer must withdraw four cards from the card shoe. According to a first exemplary scenario, the dealer withdraws, in order, the Five of Spades, Six of Hearts, Queen of Clubs, and Ace of Diamonds. The rank and suit of each of the four cards is read by the
card shoe 24, and provided to the IPAT module 84. - The player flips the dealt cards and returns them to the dealer. The latter organizes the four cards on the table as illustrated in
FIG. 19. The Five of Spades 4300 and the Six of Hearts 4302 are placed in a region dedicated to the player's hand, and the Queen of Clubs 4304 and Ace of Diamonds 4306 are placed in a region dedicated to the bank's hand. - The
Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing. The IP module 80 determines the position, suit, and rank of cards 4300, 4302, 4304, and 4306, and sends the data to the IPAT module 84. The latter compares the data received from the card shoe reader and the IP module, and finds no inconsistency. Consequently, it waits for a new set of data from the card shoe reader. - According to a second exemplary scenario, the dealer withdraws, in order, the Five of Spades, Six of Hearts, Queen of Clubs, and Ace of Diamonds. The rank and suit of each of the four cards is read by the
card shoe 24, and provided to the IPAT module 84. - The player switches one of the dealt cards with one of his hidden cards to form a new hand, flips the cards of the new hand, and returns them to the dealer. The latter arranges the four cards returned by the player as illustrated in
FIG. 20. The Five of Spades 4300 and Four of Hearts 4400 are placed in a region dedicated to the player's hand, and the Queen of Clubs 4304 and Ace of Diamonds 4306 are placed in a region dedicated to the bank's hand. - The
Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing. The IP module 80 determines the position, suit, and rank of cards 4300, 4400, 4304, and 4306, and sends the data to the IPAT module 84. The latter compares the data received from the card shoe 24 and the IP module, and finds an inconsistency; the rank of the cards removed from the card shoe does not correspond to the rank of the cards positioned on the table. More specifically, the card 4302 has been replaced by the card 4400, which likely results from a card switching procedure. Consequently, the IPAT module 84 provides a detailed description of the detected inconsistency to the surveillance module 92. - According to a third exemplary scenario, the dealer withdraws, in order, the Five of Spades, Six of Hearts, Queen of Clubs, and Ace of Diamonds. The rank and suit of each of the four cards is read by the
card shoe 24, and provided to the IPAT module 84. - The player flips the dealt cards and returns them to the dealer. The latter organizes the four cards on the table in an erroneous manner, as illustrated in
FIG. 21. The Five of Spades 4300 and the Queen of Clubs 4304 are placed in a region dedicated to the player's hand, and the Six of Hearts 4302 and Ace of Diamonds 4306 are placed in a region dedicated to the bank's hand. - The
Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing. The IP module 80 determines the position, suit, and rank of cards 4300, 4302, 4304, and 4306, and sends the data to the IPAT module 84. The latter compares the data received from the card shoe reader 24 and the IP module, and finds an inconsistency; while the rank and suit of the cards removed from the card shoe correspond to the rank and suit of the cards positioned on the table, the order in which the cards were removed from the card shoe does not correspond to the order in which the cards were organized on the table. More specifically, the card 4302 has been permuted with the card 4304. Consequently, the IPAT module 84 provides a detailed description of the detected inconsistency to the surveillance module 92. - While the invention has been described within the context of monitoring a game of Baccarat, it is applicable to any table game involving playing cards dealt from a card shoe.
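The third scenario turns on dealing order rather than identity. A sketch of that order check follows; the convention that the first two cards from the shoe form the player's hand and the next two the bank's is an assumption made for illustration (the description only requires that order be checked against the procedures of the game being played):

```python
def check_dealing_order(dealt, player_hand, bank_hand):
    """Verify the table arrangement against dealing order, assuming
    cards dealt[0:2] belong in the player region and dealt[2:4] in
    the bank region. Returns a list of (card, region) misplacements."""
    expected_player, expected_bank = dealt[:2], dealt[2:4]
    errors = []
    for card in player_hand:
        if card not in expected_player:
            errors.append((card, "player region"))
    for card in bank_hand:
        if card not in expected_bank:
            errors.append((card, "bank region"))
    return errors

dealt = [("5", "S"), ("6", "H"), ("Q", "C"), ("A", "D")]
# The 6H and QC were permuted between regions by the dealer
errors = check_dealing_order(dealt,
                             player_hand=[("5", "S"), ("Q", "C")],
                             bank_hand=[("6", "H"), ("A", "D")])
print(errors)   # [(('Q', 'C'), 'player region'), (('6', 'H'), 'bank region')]
```

All four cards pass the identity comparison here, so only this positional check exposes the erroneous arrangement.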
- We shall now discuss the functionality of the game tracking (GT) module 86 (see
FIG. 6). The GT module 86 processes input relating to card identities and positions to determine game events according to a set of rules of the game being played. It also keeps track of the state of the game, which it updates according to the determined game events. It may also store and maintain previous game states in memory, to which it may refer for determining a next game state. - Returning to
FIG. 6 we will now discuss bet recognition module 88. Bet recognition module 88 can determine the value of wagers placed by players at the gaming table. In one embodiment, an RFID-based bet recognition system can be implemented, as shown in FIG. 5. Different embodiments of RFID-based bet recognition can be used in conjunction with gaming chips containing RFID transmitters. As an example, the RFID bet recognition system sold by Progressive Gaming International or by Chipco International can be utilized. - The
bet recognition module 88 can interact with the other modules to provide more comprehensive game tracking. As an example, the game tracking module 86 can send a capture trigger to the bet recognition module 88 at the start of a game to automatically capture bets at a table game. - Referring to
FIG. 6 we will now discuss player tracking module 90. Player tracking module 90 can obtain input from the IP module 80 relating to player identity cards. The player tracking module 90 can also obtain input from the game tracking module 86 relating to game events such as the beginning and end of each game. Each recognized player identity card can be associated with the wager located closest to the card in an overhead image of the gaming region. In this manner, comp points can be automatically accumulated to specific player identity cards.
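The nearest-wager association just described amounts to a minimum-distance assignment in image coordinates. This Python sketch assumes a simple (x, y) pixel representation for card and wager positions, which is an illustrative data layout rather than anything specified here:

```python
import math

def associate_wagers(player_cards, wagers):
    """Associate each recognized player identity card with the wager
    closest to it in the overhead image. `player_cards` maps a card
    identifier to its (x, y) position; each wager is a dict with a
    'pos' and a 'value' (assumed layout, for illustration only)."""
    assignments = {}
    for card_id, card_pos in player_cards.items():
        nearest = min(
            wagers,
            key=lambda w: math.dist(card_pos, w["pos"]))  # Euclidean distance
        assignments[card_id] = nearest["value"]
    return assignments

cards = {"player-17": (100, 420), "player-42": (360, 430)}
wagers = [{"pos": (110, 380), "value": 25},
          {"pos": (350, 390), "value": 100}]
print(associate_wagers(cards, wagers))   # {'player-17': 25, 'player-42': 100}
```

With each wager tied to an identity card, per-player totals can be accumulated across games for comp-point purposes.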
- Optionally, biometrics technologies such as face recognition can be utilized to assist with identification of players.
- We will now discuss the functionality of
surveillance module 92. Surveillance module 92 obtains input relating to automatically detected game events from one or more of the other modules and associates the game events to specific points in recorded video. The surveillance module 92 can include means for recording images or video of a gaming table. The recording means can include the imagers 32. The recording means can be computer or software activated, and the recordings can be stored in a digital medium such as a computer hard drive. Less preferred recording means, such as analog cameras or analog media such as video cassettes, may also be utilized. - We shall now discuss the analysis and reporting
module 94 of FIG. 6. Analysis and reporting module 94 can mine data in the database 102 to provide reports to casino employees. The module can be configured to perform automated player tracking functions, including exact handle, duration of play, decisions per hour, player skill level, player proficiency and true house advantage. The module 94 can be configured to automatically track operational efficiency measures such as hands dealt per hour, procedure violations, employee efficiency ranks, actual handle for each table and actual house advantage for each table. The module 94 can be configured to provide card counter alerts by examining player playing patterns. It can be configured to automatically detect fraudulent or undesired activities such as shuffle tracking, inconsistent deck penetration by dealers and procedure violations. The module 94 can be configured to provide any combination or type of statistical data by performing data mining on the recorded data in the database.
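As one concrete illustration of the mining described above, a hands-dealt-per-hour figure can be derived from game-start timestamps logged in the database. The event schema below is an assumption for illustration; the description does not fix a storage format:

```python
from datetime import datetime

def hands_per_hour(game_events):
    """Compute hands dealt per hour from a list of ISO-8601
    timestamps marking the start of each game at a table
    (assumed schema, for illustration only)."""
    if len(game_events) < 2:
        return 0.0
    times = [datetime.fromisoformat(t) for t in game_events]
    span_hours = (max(times) - min(times)).total_seconds() / 3600.0
    return (len(times) - 1) / span_hours   # completed hands over the span

events = ["2006-05-03T20:00:00", "2006-05-03T20:02:00",
          "2006-05-03T20:04:00", "2006-05-03T20:06:00"]
print(round(hands_per_hour(events), 1))   # 30.0
```

Measures such as duration of play or decisions per hour follow the same pattern, differing only in which logged events are counted.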
-
Module 94 can be configured to accept input from a user interface running on input devices. These inputs can include, without limitation, training parameters, configuration commands, dealer identity, table status, and other inputs required to operate the system. - Although not shown in
FIG. 6, a chip tray recognition module may be provided to determine the contents of the dealer's chip bank. In one embodiment, an RFID-based chip tray recognition system can be implemented. In another embodiment, a vision-based chip tray recognition system can be implemented. The chip tray recognition module can send data relating to the value of chips in the dealer's chip tray to other modules. - Although not shown in
FIG. 6, a dealer identity module may be employed to track the identity of a dealer. The dealer can either key in her unique identity code at the game table or use an identity card and associated reader to register her identity. A biometrics system may be used to facilitate dealer or employee identification. - The terms imagers and imaging devices have been used interchangeably in this document. The imagers can have any combination of sensor, lens and/or interface. Possible interfaces include, without limitation, 10/100 Ethernet, Gigabit Ethernet, USB,
USB 2, FireWire, Optical Fiber, PAL or NTSC interfaces. For analog interfaces such as NTSC and PAL, a processor having a capture card in combination with a frame grabber can be utilized to obtain digital images or digital video.
- The overhead imaging system may be associated with one or more display signs. Display sign(s) can be non-electronic, electronic or digital. A display sign can be an electronic display displaying game related events happening at the table in real time. A display and the housing unit for the overhead imaging devices may be integrated into a large unit. The overhead imaging system may be located on or near the ceiling above the gaming region.
- Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (21)
1. A system for identifying a gaming object on a gaming table comprising:
at least one overhead camera for capturing an image of said table;
a detection module for detecting a feature of said object on said image;
a search module for extracting a region of interest of said image that describes said object from said feature;
a feature space module for transforming a feature space of said region of interest to obtain a transformed region of interest; and
an identity module comprising a statistical classifier trained to recognize said object from said transformed region.
2. The system of claim 1, wherein said feature space module comprises a Principal Component Analysis module for transforming said feature space according to principal component analysis algorithms.
3. The system of claim 1, further comprising a dimensionality reduction module for reducing said transformed region into a reduced representation according to dimensionality reduction algorithms, wherein said statistical classifier is trained to recognize said object from said reduced representation.
4. The system of claim 1, wherein said identity module comprises a cascade of classifiers.
5. The system of claim 1, wherein said detection module comprises a cascade of classifiers.
6. The system of claim 4, further comprising a boosting module for combining weak ones of said cascade of classifiers.
7. The system of claim 5, further comprising a boosting module for combining weak ones of said cascade of classifiers.
8. The system of claim 4, wherein said detection module comprises a cascade of classifiers, further comprising a boosting module for combining weak classifiers of said cascades of classifiers.
9. The system of claim 1, wherein said object is a card belonging to a deck of cards, and further comprising a deck verification module for receiving a suit and a rank of said card from said statistical classifier, and verifying that said deck of cards adheres to a provided set of standards.
10. The system of claim 1, wherein said object is a playing card, and said region of interest is a region of said image occupied by an index of said card.
11. The system of claim 10, wherein said region of interest is a region of said image occupied by a suit of said card.
12. A method of identifying a value of a playing card placed on a game table comprising:
capturing an image of said table;
detecting at least one feature of said playing card on said image;
delimiting a target region of said image according to said feature, wherein said target region overlaps a region of interest, and said region of interest describes said value;
scanning said target region for a pattern of contrasting points;
detecting said pattern;
delimiting said region of interest of said image according to a position of said pattern; and
analyzing said region of interest to identify said value.
13. The method of claim 12, wherein said feature is a segment of an edge of said card.
14. The method of claim 13, further comprising determining at least two scan lines parallel to said edge within said target region, wherein said scanning is performed along said lines, and whereby said scanning is more efficient.
15. The method of claim 12, wherein said scanning is performed along lines perpendicular to said edge, and said detecting comprises recording a most contrasting point for each of said lines to obtain a series of points, and applying a pattern recognition algorithm to said series to identify a pattern characteristic of a card identifying symbol.
16. The method of claim 15, wherein said applying a pattern recognition algorithm comprises convolving said pattern with a mask of properties expected from a card identifying symbol.
17. The method of claim 12, wherein said feature is a corner of said card.
18. A system for detecting an inconsistency with respect to playing cards dealt on a game table comprising:
a card reader for determining an identity of each playing card as it is being dealt on said table;
an overhead camera for capturing images of said table;
a recognition module for determining an identity of each card positioned on said table from said images; and
a tracking module for comparing said identity determined by said card reader with said identity determined by said recognition module, and detecting said inconsistency.
19. The system of claim 18, wherein said card reader determines a dealing order of said each playing card as it is being dealt on said table, said recognition module determines a position of said each card positioned on said table, and said tracking module compares said identity and said order determined by said card reader with said identity and said position determined by said recognition module and detects said inconsistency according to procedures of a game.
20. The system of claim 18, wherein said recognition module determines an approximate identity of said each card positioned on said table, and said tracking module compares said approximate identity with said identity determined by said card reader, and detects said inconsistency.
21. The system of claim 18, wherein said card reader is comprised in a card shoe for storing playing cards to be dealt on said table.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/381,473 US20070077987A1 (en) | 2005-05-03 | 2006-05-03 | Gaming object recognition |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US67693605P | 2005-05-03 | 2005-05-03 | |
US69340605P | 2005-06-24 | 2005-06-24 | |
US72348105P | 2005-10-05 | 2005-10-05 | |
US72345205P | 2005-10-05 | 2005-10-05 | |
US73633405P | 2005-11-15 | 2005-11-15 | |
US76036506P | 2006-01-20 | 2006-01-20 | |
US77105806P | 2006-02-08 | 2006-02-08 | |
US11/381,473 US20070077987A1 (en) | 2005-05-03 | 2006-05-03 | Gaming object recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070077987A1 true US20070077987A1 (en) | 2007-04-05 |
Family
ID=37461021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/381,473 Abandoned US20070077987A1 (en) | 2005-05-03 | 2006-05-03 | Gaming object recognition |
Country Status (2)
Country | Link |
---|---|
US (1) | US20070077987A1 (en) |
AU (1) | AU2006201849A1 (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5712922A (en) * | 1992-04-14 | 1998-01-27 | Eastman Kodak Company | Neural network optical character recognition system and method for classifying characters in a moving web |
US6126166A (en) * | 1996-10-28 | 2000-10-03 | Advanced Casino Technologies, Inc. | Card-recognition and gaming-control device |
US6134344A (en) * | 1997-06-26 | 2000-10-17 | Lucent Technologies Inc. | Method and apparatus for improving the efficiency of support vector machines |
US20020147042A1 (en) * | 2001-02-14 | 2002-10-10 | Vt Tech Corp. | System and method for detecting the result of a game of chance |
US20030062675A1 (en) * | 2001-09-28 | 2003-04-03 | Canon Kabushiki Kaisha | Image experiencing system and information processing method |
US20030072487A1 (en) * | 2001-10-12 | 2003-04-17 | Xerox Corporation | Background-based image segmentation |
US20030123721A1 (en) * | 2001-12-28 | 2003-07-03 | International Business Machines Corporation | System and method for gathering, indexing, and supplying publicly available data charts |
US20030236113A1 (en) * | 2002-05-30 | 2003-12-25 | Prime Table Games Llc | Game playing apparatus |
US20040023722A1 (en) * | 2002-08-03 | 2004-02-05 | Vt Tech Corp. | Virtual video stream manager |
US20050134935A1 (en) * | 2003-12-19 | 2005-06-23 | Schmidtler Mauritius A.R. | Automatic document separation |
US20050137005A1 (en) * | 2003-09-05 | 2005-06-23 | Bally Gaming International, Inc. | Systems, methods, and devices for monitoring card games, such as Baccarat |
US20050259866A1 (en) * | 2004-05-20 | 2005-11-24 | Microsoft Corporation | Low resolution OCR for camera acquired documents |
US20050272501A1 (en) * | 2004-05-07 | 2005-12-08 | Louis Tran | Automated game monitoring |
US20060027970A1 (en) * | 2002-11-26 | 2006-02-09 | Kyrychenko Olexandr I | Gaming equipment for table games using playing cards and tokens, in particular for black jack |
US20060177109A1 (en) * | 2001-12-21 | 2006-08-10 | Leonard Storch | Combination casino table game imaging system for automatically recognizing the faces of players--as well as terrorists and other undesirables-- and for recognizing wagered gaming chips |
US20060205508A1 (en) * | 2005-03-14 | 2006-09-14 | Original Deal, Inc. | On-line table gaming with physical game objects |
US20070004499A1 (en) * | 2005-07-01 | 2007-01-04 | Online Poker Technologies, Llc | Online gaming system |
US20070060260A1 (en) * | 2005-09-12 | 2007-03-15 | Bally Gaming, Inc. | Systems, methods and articles to facilitate playing card games with multi-compartment playing card receivers |
US20080113783A1 (en) * | 2006-11-10 | 2008-05-15 | Zbigniew Czyzewski | Casino table game monitoring system |
US20090124379A1 (en) * | 2007-11-09 | 2009-05-14 | Igt | Transparent Card Display |
- 2006
- 2006-05-03 AU AU2006201849A patent/AU2006201849A1/en not_active Abandoned
- 2006-05-03 US US11/381,473 patent/US20070077987A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
Eatinger et al., The Playing Card Image Recognition Project, accessed prior to 15 March 2005. *
Cited By (185)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050272501A1 (en) * | 2004-05-07 | 2005-12-08 | Louis Tran | Automated game monitoring |
US7901285B2 (en) | 2004-05-07 | 2011-03-08 | Image Fidelity, LLC | Automated game monitoring |
US20070015583A1 (en) * | 2005-05-19 | 2007-01-18 | Louis Tran | Remote gaming with live table games |
US9524606B1 (en) | 2005-05-23 | 2016-12-20 | Visualimits, Llc | Method and system for providing dynamic casino game signage with selectable messaging timed to play of a table game |
US20070143072A1 (en) * | 2005-12-20 | 2007-06-21 | Pitney Bowes Inc. | RFID systems and methods for probabilistic location determination |
US7388494B2 (en) * | 2005-12-20 | 2008-06-17 | Pitney Bowes Inc. | RFID systems and methods for probabilistic location determination |
US20070173318A1 (en) * | 2006-01-20 | 2007-07-26 | Abbott Eric L | Player ranking for tournament play |
US7704144B2 (en) | 2006-01-20 | 2010-04-27 | Igt | Player ranking for tournament play |
US7690996B2 (en) | 2006-11-06 | 2010-04-06 | Igt | Server based gaming system and method for providing one or more tournaments at gaming tables |
US9378605B2 (en) | 2007-09-13 | 2016-06-28 | Universal Entertainment Corporation | Gaming machine and gaming system using chips |
WO2009061614A1 (en) * | 2007-11-09 | 2009-05-14 | Igt | Transparent card display |
US8905834B2 (en) | 2007-11-09 | 2014-12-09 | Igt | Transparent card display |
US20090124379A1 (en) * | 2007-11-09 | 2009-05-14 | Igt | Transparent Card Display |
US9174114B1 (en) * | 2007-11-13 | 2015-11-03 | Genesis Gaming Solutions, Inc. | System and method for generating reports associated with casino table operation |
US9511275B1 (en) | 2007-11-13 | 2016-12-06 | Genesis Gaming Solutions, Inc. | Bet spot indicator on a gaming table |
US9889371B1 (en) | 2007-11-13 | 2018-02-13 | Genesis Gaming Solutions, Inc. | Bet spot indicator on a gaming table |
US10825288B1 (en) | 2007-11-13 | 2020-11-03 | Genesis Gaming Solutions, Inc. | System and method for casino table operation |
US9165420B1 (en) | 2007-11-13 | 2015-10-20 | Genesis Gaming Solutions, Inc. | Bet spot indicator on a gaming table |
US11538304B1 (en) | 2007-11-13 | 2022-12-27 | Genesis Gaming Solutions, Inc | System and method for casino table operation |
US10242525B1 (en) | 2007-11-13 | 2019-03-26 | Genesis Gaming Solutions, Inc. | System and method for casino table operation |
US9361755B2 (en) * | 2008-07-11 | 2016-06-07 | Bally Gaming, Inc. | Methods of receiving electronic wagers in a wagering game via a handheld electronic wager input device |
US20160027253A1 (en) * | 2008-07-11 | 2016-01-28 | Bally Gaming, Inc. | Methods of receiving electronic wagers in a wagering game via a handheld electronic wager input device |
US10410471B2 (en) | 2008-07-11 | 2019-09-10 | Bally Gaming, Inc. | Methods of receiving electronic wagers in a wagering game via a handheld electronic wager input device |
US9842468B2 (en) | 2008-07-11 | 2017-12-12 | Bally Gaming, Inc. | Methods of receiving electronic wagers in a wagering game via a handheld electronic wager input device |
US9619968B2 (en) | 2008-07-11 | 2017-04-11 | Bally Gaming, Inc. | Methods of receiving electronic wagers in a wagering game via a handheld electronic wager input device |
US10249131B2 (en) | 2008-10-02 | 2019-04-02 | Igt | Gaming system including a gaming table and a plurality of user input devices |
US9129473B2 (en) | 2008-10-02 | 2015-09-08 | Igt | Gaming system including a gaming table and a plurality of user input devices |
US20100087241A1 (en) * | 2008-10-02 | 2010-04-08 | Igt | Gaming System with Mobile User Input Device |
US9640027B2 (en) | 2008-10-02 | 2017-05-02 | Igt | Gaming system including a gaming table and a plurality of user input devices |
US11410490B2 (en) | 2008-10-02 | 2022-08-09 | Igt | Gaming system including a gaming table and a plurality of user input devices |
US8529345B2 (en) | 2008-10-02 | 2013-09-10 | Igt | Gaming system including a gaming table with mobile user input devices |
US8306819B2 (en) | 2009-03-09 | 2012-11-06 | Microsoft Corporation | Enhanced automatic speech recognition using mapping between unsupervised and supervised speech model parameters trained on same acoustic training data |
US20100228548A1 (en) * | 2009-03-09 | 2010-09-09 | Microsoft Corporation | Techniques for enhanced automatic speech recognition |
US8843424B2 (en) * | 2009-04-01 | 2014-09-23 | Sony Corporation | Device and method for multiclass object detection |
US20120089545A1 (en) * | 2009-04-01 | 2012-04-12 | Sony Corporation | Device and method for multiclass object detection |
US20100273547A1 (en) * | 2009-04-28 | 2010-10-28 | Stasi Perry B | Method and system for capturing live table game data |
US20110065496A1 (en) * | 2009-09-11 | 2011-03-17 | Wms Gaming, Inc. | Augmented reality mechanism for wagering game systems |
US8666115B2 (en) | 2009-10-13 | 2014-03-04 | Pointgrab Ltd. | Computer vision gesture based control of a device |
US8693732B2 (en) | 2009-10-13 | 2014-04-08 | Pointgrab Ltd. | Computer vision gesture based control of a device |
TWI506459B (en) * | 2009-11-02 | 2015-11-01 | Microsoft Technology Licensing Llc | Content-based image search |
US20110106798A1 (en) * | 2009-11-02 | 2011-05-05 | Microsoft Corporation | Search Result Enhancement Through Image Duplicate Detection |
US20110106782A1 (en) * | 2009-11-02 | 2011-05-05 | Microsoft Corporation | Content-based image search |
US20110103699A1 (en) * | 2009-11-02 | 2011-05-05 | Microsoft Corporation | Image metadata propagation |
US9710491B2 (en) * | 2009-11-02 | 2017-07-18 | Microsoft Technology Licensing, Llc | Content-based image search |
US8433140B2 (en) | 2009-11-02 | 2013-04-30 | Microsoft Corporation | Image metadata propagation |
CN102456128A (en) * | 2010-10-27 | 2012-05-16 | 徐继圣 | Stereoscopic vision dice point identification system and method in uncontrolled environments |
WO2012081012A1 (en) * | 2010-12-16 | 2012-06-21 | Pointgrab Ltd. | Computer vision based hand identification |
US20130279756A1 (en) * | 2010-12-16 | 2013-10-24 | Ovadya Menadeva | Computer vision based hand identification |
FR2982057A1 (en) * | 2011-10-28 | 2013-05-03 | Peoleo | Method for recognizing a playing-card image acquired by a video camera in a scene, in which the image is identified with a reference image when a reference vector close to its signature vector is found in a database in memory |
US8938124B2 (en) | 2012-05-10 | 2015-01-20 | Pointgrab Ltd. | Computer vision based tracking of a hand |
US10046230B1 (en) | 2012-10-01 | 2018-08-14 | Genesis Gaming Solutions, Inc. | Tabletop insert for gaming table |
US10471337B2 (en) | 2012-10-01 | 2019-11-12 | Genesis Gaming Solutions, Inc. | Tabletop insert for gaming table |
US11210908B2 (en) * | 2013-08-08 | 2021-12-28 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US20220351586A1 (en) * | 2013-08-08 | 2022-11-03 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US20210343123A1 (en) * | 2013-08-08 | 2021-11-04 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US20230169832A1 (en) * | 2013-08-08 | 2023-06-01 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US11615679B2 (en) * | 2013-08-08 | 2023-03-28 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US11557181B2 (en) * | 2013-08-08 | 2023-01-17 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US11810431B2 (en) * | 2013-08-08 | 2023-11-07 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US20220122427A1 (en) * | 2013-08-08 | 2022-04-21 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US20220351585A1 (en) * | 2013-08-08 | 2022-11-03 | Angel Group Co., Ltd. | Method for administering a package of shuffled playing cards |
US9704246B2 (en) * | 2013-09-20 | 2017-07-11 | Fujitsu Limited | Image processing apparatus, image processing method, and storage medium |
US20160180536A1 (en) * | 2013-09-20 | 2016-06-23 | Fujitsu Limited | Image processing apparatus, image processing method, and storage medium |
AU2016244306B2 (en) * | 2013-09-23 | 2018-07-12 | Konami Gaming, Incorporated | System and methods for operating gaming environments |
US20150087417A1 (en) * | 2013-09-23 | 2015-03-26 | Konami Gaming, Inc. | System and methods for operating gaming environments |
AU2016244308B2 (en) * | 2013-09-23 | 2018-10-18 | Konami Gaming, Incorporated | System and methods for operating gaming environments |
US20150199872A1 (en) * | 2013-09-23 | 2015-07-16 | Konami Gaming, Inc. | System and methods for operating gaming environments |
US20170173459A1 (en) * | 2014-03-19 | 2017-06-22 | Maurice Mills | Online Remote Game System |
US11068712B2 (en) | 2014-09-30 | 2021-07-20 | Qualcomm Incorporated | Low-power iris scan initialization |
US10515284B2 (en) | 2014-09-30 | 2019-12-24 | Qualcomm Incorporated | Single-processor computer vision hardware control and application execution |
US11087141B2 (en) | 2015-05-29 | 2021-08-10 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
US11749053B2 (en) | 2015-05-29 | 2023-09-05 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
US11636731B2 (en) * | 2015-05-29 | 2023-04-25 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
US10410066B2 (en) * | 2015-05-29 | 2019-09-10 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
US20190333321A1 (en) * | 2015-05-29 | 2019-10-31 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
US20230260360A1 (en) * | 2015-05-29 | 2023-08-17 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
US10832517B2 (en) * | 2015-05-29 | 2020-11-10 | Arb Labs Inc. | Systems, methods and devices for monitoring betting activities |
KR20220119576A (en) * | 2015-08-03 | 2022-08-30 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
US11386748B2 (en) | 2015-08-03 | 2022-07-12 | Angel Playing Cards Co., Ltd. | Fraud detection system in a casino |
KR20200043350A (en) * | 2015-08-03 | 2020-04-27 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
US11961363B2 (en) | 2015-08-03 | 2024-04-16 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11961364B2 (en) | 2015-08-03 | 2024-04-16 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11816957B2 (en) | 2015-08-03 | 2023-11-14 | Angel Group Co., Ltd. | Management system of substitute currency for gaming |
KR102148675B1 (en) | 2015-08-03 | 2020-08-27 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
US11810426B2 (en) | 2015-08-03 | 2023-11-07 | Angel Group Co., Ltd. | Management system of substitute currency for gaming |
US11810422B2 (en) | 2015-08-03 | 2023-11-07 | Angel Group Co., Ltd. | Management system of substitute currency for gaming |
US11741780B2 (en) | 2015-08-03 | 2023-08-29 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11727750B2 (en) | 2015-08-03 | 2023-08-15 | Angel Group Co., Ltd. | Fraud detection system in a casino |
EP4220536A1 (en) * | 2015-08-03 | 2023-08-02 | Angel Playing Cards Co., Ltd. | Fraud detection system at game parlor |
KR102540218B1 (en) | 2015-08-03 | 2023-06-05 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
CN108055828A (en) * | 2015-08-03 | 2018-05-18 | 天使游戏纸牌股份有限公司 | Management system for table games, substitute currency for gaming, inspection device, and management system for substitute currency for gaming |
US11657673B2 (en) | 2015-08-03 | 2023-05-23 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11657674B2 (en) | 2015-08-03 | 2023-05-23 | Angel Group Co., Ltd. | Fraud detection system in casino |
KR102533788B1 (en) | 2015-08-03 | 2023-05-17 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
US10970962B2 (en) | 2015-08-03 | 2021-04-06 | Angel Playing Cards Co., Ltd. | Management system of substitute currency for gaming |
KR102238964B1 (en) | 2015-08-03 | 2021-04-09 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
KR102528634B1 (en) | 2015-08-03 | 2023-05-03 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR20210043748A (en) * | 2015-08-03 | 2021-04-21 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
AU2019226277B2 (en) * | 2015-08-03 | 2021-05-06 | Angel Group Co., Ltd. | Fraud detection system in casino |
US11620872B2 (en) | 2015-08-03 | 2023-04-04 | Angel Group Co., Ltd. | Fraud detection system in a casino |
AU2021236449B2 (en) * | 2015-08-03 | 2023-03-16 | Angel Group Co., Ltd. | Fraud detection system in casino |
US11587398B2 (en) | 2015-08-03 | 2023-02-21 | Angel Group Co., Ltd. | Fraud detection system in a casino |
KR102261995B1 (en) | 2015-08-03 | 2021-06-08 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR20210068364A (en) * | 2015-08-03 | 2021-06-09 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR20210068365A (en) * | 2015-08-03 | 2021-06-09 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR102262808B1 (en) | 2015-08-03 | 2021-06-09 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR20230023686A (en) * | 2015-08-03 | 2023-02-17 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR20180122998A (en) * | 2015-08-03 | 2018-11-14 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
US11527131B2 (en) | 2015-08-03 | 2022-12-13 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11074780B2 (en) | 2015-08-03 | 2021-07-27 | Angel Playing Cards Co., Ltd. | Management system of substitute currency for gaming |
US11527130B2 (en) | 2015-08-03 | 2022-12-13 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11514751B2 (en) | 2015-08-03 | 2022-11-29 | Angel Group Co., Ltd. | Management system for table games, substitute currency for gaming, inspection device, and management system for substitute currency for gaming |
KR20180123460A (en) * | 2015-08-03 | 2018-11-16 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
KR20190039907A (en) * | 2015-08-03 | 2019-04-16 | 엔제루 프레잉구 카도 가부시키가이샤 | Fraud detection system in casino |
KR102448691B1 (en) | 2015-08-03 | 2022-09-28 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR102335366B1 (en) | 2015-08-03 | 2021-12-03 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR102448700B1 (en) | 2015-08-03 | 2022-09-28 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
EP3329975B1 (en) * | 2015-08-03 | 2021-12-29 | Angel Playing Cards Co., Ltd. | Inspection device for inspecting substitute currency for gaming |
US11393286B2 (en) | 2015-08-03 | 2022-07-19 | Angel Group Co., Ltd. | Fraud detection system in a casino |
KR20220002836A (en) * | 2015-08-03 | 2022-01-07 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
US11393285B2 (en) | 2015-08-03 | 2022-07-19 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11232674B2 (en) | 2015-08-03 | 2022-01-25 | Angel Group Co., Ltd. | Inspection device for detecting fraud |
US11270554B2 (en) | 2015-08-03 | 2022-03-08 | Angel Group Co., Ltd. | Substitute currency for gaming, inspection device, and manufacturing method of substitute currency for gaming, and management system for table games |
KR102372948B1 (en) | 2015-08-03 | 2022-03-10 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
KR20220037425A (en) * | 2015-08-03 | 2022-03-24 | 엔제루 구루푸 가부시키가이샤 | Fraud detection system in casino |
US11393284B2 (en) | 2015-08-03 | 2022-07-19 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11386749B2 (en) | 2015-08-03 | 2022-07-12 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US11380161B2 (en) | 2015-08-03 | 2022-07-05 | Angel Group Co., Ltd. | Fraud detection system in a casino |
US20170069159A1 (en) * | 2015-09-04 | 2017-03-09 | Musigma Business Solutions Pvt. Ltd. | Analytics system and method |
WO2017037730A3 (en) * | 2015-09-04 | 2017-04-06 | Mu Sigma Business Solutions Pvt Ltd | Analytics system and method |
US20170193755A1 (en) * | 2016-01-05 | 2017-07-06 | Ags Llc | Electronic gaming devices for playing a card game having multiple wagering opportunities |
US10872505B2 (en) * | 2016-01-05 | 2020-12-22 | Ags Llc | Electronic gaming devices for playing a card game having multiple wagering opportunities |
US11024119B2 (en) * | 2016-01-27 | 2021-06-01 | Evolution Malta Ltd | Method and system for card shuffle integrity tracking |
US20190035212A1 (en) * | 2016-01-27 | 2019-01-31 | Evolution Malta Ltd | Method and system for card shuffle integrity tracking |
US10217312B1 (en) * | 2016-03-30 | 2019-02-26 | Visualimits, Llc | Automatic region of interest detection for casino tables |
US10650550B1 (en) * | 2016-03-30 | 2020-05-12 | Visualimits, Llc | Automatic region of interest detection for casino tables |
US20210090380A1 (en) * | 2016-05-09 | 2021-03-25 | Ags Llc | Methods, devices and systems for processing wagers associated with games having multiple wagers |
US10956750B2 (en) | 2016-05-16 | 2021-03-23 | Sensen Networks Group Pty Ltd | System and method for automated table game activity recognition |
KR102462409B1 (en) * | 2016-05-16 | 2022-11-02 | 센센 네트웍스 그룹 피티와이 엘티디 | Systems and Methods for Automated Table Game Activity Recognition |
US11580746B2 (en) | 2016-05-16 | 2023-02-14 | Sensen Networks Group Pty Ltd | System and method for automated table game activity recognition |
EP3459047A4 (en) * | 2016-05-16 | 2020-01-01 | Sensen Networks Group Pty Ltd | System and method for automated table game activity recognition |
KR20190021238A (en) * | 2016-05-16 | 2019-03-05 | 센센 네트웍스 그룹 피티와이 엘티디 | System and method for automated table game activity recognition |
US10878656B2 (en) | 2016-08-02 | 2020-12-29 | Angel Playing Cards Co., Ltd. | Inspection system and management system |
US20210407253A1 (en) * | 2016-08-02 | 2021-12-30 | Angel Group Co., Ltd. | Inspection system and management system |
US10916089B2 (en) | 2016-08-02 | 2021-02-09 | Angel Playing Cards Co., Ltd. | Inspection system and management system |
US20230162566A1 (en) * | 2016-08-02 | 2023-05-25 | Angel Group Co., Ltd. | Inspection system and management system |
US11631299B2 (en) * | 2016-08-02 | 2023-04-18 | Angel Group Co., Ltd. | Inspection system and management system |
US20210158654A1 (en) * | 2016-08-02 | 2021-05-27 | Angel Playing Cards Co., Ltd. | Inspection system and management system |
US11842606B2 (en) * | 2016-08-02 | 2023-12-12 | Angel Group Co., Ltd. | Inspection system and management system |
US10061984B2 (en) * | 2016-10-24 | 2018-08-28 | Accenture Global Solutions Limited | Processing an image to identify a metric associated with the image and/or to determine a value for the metric |
US10713492B2 (en) | 2016-10-24 | 2020-07-14 | Accenture Global Solutions Limited | Processing an image to identify a metric associated with the image and/or to determine a value for the metric |
US20180365497A1 (en) * | 2016-10-24 | 2018-12-20 | Accenture Global Solutions Limited | Processing an image to identify a metric associated with the image and/or to determine a value for the metric |
US20180173948A1 (en) * | 2016-12-16 | 2018-06-21 | Qualcomm Incorporated | Low power data generation for iris-related detection and authentication |
US10614332B2 (en) | 2016-12-16 | 2020-04-07 | Qualcomm Incorporated | Light source modulation for iris size adjustment |
US10984235B2 (en) * | 2016-12-16 | 2021-04-20 | Qualcomm Incorporated | Low power data generation for iris-related detection and authentication |
AT519722A1 (en) * | 2017-02-27 | 2018-09-15 | Inunum High Quality Systems Anstalt | Method for detecting at least one token object |
US11170605B2 (en) | 2017-02-27 | 2021-11-09 | Revolutionary Technology Systems Ag | Method for detecting at least one gambling chip object |
AT519722B1 (en) * | 2017-02-27 | 2021-09-15 | Revolutionary Tech Systems Ag | Method for the detection of at least one token object |
US11861866B2 (en) * | 2017-03-30 | 2024-01-02 | Visualimits, Llc | Automatic region of interest detection for casino tables |
US11308642B2 (en) * | 2017-03-30 | 2022-04-19 | Visualimits Llc | Automatic region of interest detection for casino tables |
US20220230355A1 (en) * | 2017-03-30 | 2022-07-21 | Visualimits, Llc | Automatic region of interest detection for casino tables |
US11288508B2 (en) | 2017-10-02 | 2022-03-29 | Sensen Networks Group Pty Ltd | System and method for machine learning-driven object detection |
WO2019068141A1 (en) * | 2017-10-02 | 2019-04-11 | Sensen Networks Group Pty Ltd | System and method for machine learning-driven object detection |
US11694336B2 (en) | 2017-10-02 | 2023-07-04 | Sensen Networks Group Pty Ltd | System and method for machine learning-driven object detection |
JP7246382B2 (en) | 2017-10-02 | 2023-03-27 | センセン ネットワークス グループ ピーティーワイ リミテッド | Systems and methods for machine learning driven object detection |
JP2020536324A (en) * | 2017-10-02 | 2020-12-10 | センセン ネットワークス グループ ピーティーワイ リミテッド | Systems and methods for machine learning driven object detection |
US11335166B2 (en) | 2017-10-03 | 2022-05-17 | Arb Labs Inc. | Progressive betting systems |
US11823532B2 (en) | 2017-10-03 | 2023-11-21 | Arb Labs Inc. | Progressive betting systems |
US10841458B2 (en) | 2018-03-02 | 2020-11-17 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
EP3534344A1 (en) * | 2018-03-02 | 2019-09-04 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and program |
CN110225318A (en) * | 2018-03-02 | 2019-09-10 | 佳能株式会社 | Image processing equipment and image processing method |
WO2020072664A1 (en) * | 2018-10-02 | 2020-04-09 | Gaming Partners International Usa, Inc. | Vision based recognition of gaming chips |
US20220020342A1 (en) * | 2019-03-29 | 2022-01-20 | Rohm Co., Ltd. | Semiconductor device |
CN112541564A (en) * | 2019-09-20 | 2021-03-23 | 腾讯科技(深圳)有限公司 | Method and device for reducing Bayes deep neural network computation complexity |
EP4046064A4 (en) * | 2019-10-15 | 2024-03-06 | Arb Labs Inc | Systems and methods for tracking playing chips |
KR102524969B1 (en) * | 2019-12-24 | 2023-04-21 | 센스타임 인터내셔널 피티이. 리미티드. | Method and apparatus, storage medium and electronic device for detecting dispensing order |
US11420107B2 (en) * | 2019-12-24 | 2022-08-23 | Sensetime International Pte. Ltd. | Method and apparatus for detecting a dealing sequence, storage medium and electronic device |
JP2022521560A (en) * | 2019-12-24 | 2022-04-11 | SenseTime International Pte. Ltd. | Card distribution order detection method, device, storage medium and electronic device |
JP7191109B2 (en) | 2019-12-24 | 2022-12-16 | SenseTime International Pte. Ltd. | Card distribution order detection method, device, storage medium and electronic device |
WO2021130555A1 (en) * | 2019-12-24 | 2021-07-01 | Sensetime International Pte. Ltd. | Method and apparatus for detecting a dealing sequence, storage medium and electronic device |
KR20210084338A (en) * | 2019-12-24 | 2021-07-07 | 센스타임 인터내셔널 피티이. 리미티드. | Method and apparatus, storage medium and electronic device for detecting dispensing sequence |
CN113424236A (en) * | 2019-12-24 | 2021-09-21 | 商汤国际私人有限公司 | Dealing sequence detection method and device, storage medium and electronic equipment |
CN111445504A (en) * | 2020-03-25 | 2020-07-24 | 哈尔滨工程大学 | Water-to-air distortion correction algorithm based on image sequence |
CN112767227A (en) * | 2021-03-12 | 2021-05-07 | 中山大学 | Image watermarking method capable of resisting screen shooting |
US20230082837A1 (en) * | 2021-09-14 | 2023-03-16 | Sensetime International Pte. Ltd. | Status switching method and apparatus, edge computing device and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
AU2006201849A1 (en) | 2006-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20070077987A1 (en) | Gaming object recognition | |
US8016665B2 (en) | Table game tracking | |
US20230154279A1 (en) | Fraud detection system in a casino | |
AU2021102736A4 (en) | System for game object detection | |
US20070111773A1 (en) | Automated tracking of playing cards | |
US11798353B2 (en) | System and method for synthetic image training of a neural network associated with a casino table game monitoring system | |
US20050026680A1 (en) | System, apparatus and method for automatically tracking a table game | |
US20060252554A1 (en) | Gaming object position analysis and tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TANGAM GAMING TECHNOLOGY INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GURURAJAN, PREM;GANDHI, MAULIN;REEL/FRAME:017920/0074 Effective date: 20060629 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |