US10614313B2 - Recognition and valuation of products within video content

Recognition and valuation of products within video content

Info

Publication number
US10614313B2
US10614313B2 (granted from application US15/838,736)
Authority
US
United States
Prior art keywords
video content
products
brands
green screen
product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US15/838,736
Other versions
US20190180108A1
Inventor
Pasquale A. Catalano
Andrew G. Crimmins
Arkadiy O. Tsfasman
John S. Werner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US15/838,736
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignment of assignors interest (see document for details). Assignors: CATALANO, PASQUALE A., CRIMMINS, ANDREW G., TSFASMAN, ARKADIY O., WERNER, JOHN S.
Publication of US20190180108A1
Application granted
Publication of US10614313B2
Legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06K9/00744
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 - Advertisements
    • G06Q30/0273 - Determination of fees for advertising
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 - Processing of audio elementary streams
    • H04N21/4394 - Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 - Monomedia components thereof
    • H04N21/812 - Monomedia components thereof involving advertisement data

Definitions

  • FIG. 4 is a diagram illustrating an exemplary scene 400 of video content 305 including a plurality of products and green screen objects according to one or more embodiments of the present invention.
  • an actor 405 may be conversing with actor 410.
  • Actor 405 may be wearing a t-shirt including a logo of Product 1.
  • Actor 410 may be holding Product 2.
  • Product 3 may be located between actor 405 and actor 410.
  • a portion of Product 4 may be located outside of scene 400.
  • Scene 400 may also include green screen object 1 and green screen object 2.
  • the post-production product/brand analyzer 310 can indicate via a product placement score that the logo of Product 1 located on the t-shirt of actor 405 is a candidate for post-production replacement. While advertising Product 1 may be appropriate for a given market or country, advertising Product 1 in other markets or countries may not be beneficial to advertisers. Accordingly, the video content creator may create a valuation for advertising Product 1 in one or more agreed markets and/or countries, and advertise other products in other markets and/or countries.
  • the product placement score for Product 1 can take into consideration the fact that the eyes of the viewers will be drawn to the actor.
  • the product placement score may be lowered if the actor moves a lot, making viewing of Product 1 difficult.
  • the eyes of the viewers may be drawn to Product 2, especially if the brand is easily readable. Accordingly, the product placement score would increase due to the location of Product 2 within scene 400. If Product 2 is mentioned by actor 410, the product placement score may be reduced due to the difficulty of replacing the product, because the actor's speech within the video content would need to be altered to remove the mention of Product 2.
  • the eyes of the viewers may be drawn to Product 3 because Product 3 is located between actor 405 and actor 410. Accordingly, how Product 3 is presented will affect the product placement score.
  • the product placement score of Product 4 may be affected by the fact that a portion of Product 4 is not viewable within scene 400.
  • the post-production product/brand analyzer 310 may analyze green screen object 1 and green screen object 2 to determine whether green screen object 1 and green screen object 2 would be useful as targeted advertising space within a final version of video content 305.
  • green screen object 1 may be useful as targeted advertising space due to green screen object 1 being located between actor 405 and actor 410.
  • the product placement score for green screen object 1 will take into consideration the location of green screen object 1 in relation to actor 405 and actor 410.
  • the product placement score for green screen object 2 will be affected by the fact that green screen object 2 is located away from actor 405 and actor 410.
  • FIG. 5 is a flow diagram illustrating a method for identifying one or more products, brands and/or green screen objects within video content and valuations thereof according to one or more embodiments of the present invention.
  • video content that has been recorded is uploaded into a post-production product/brand analyzer to determine relevant products and/or green screen objects that may be useful in advertising products with the video content.
  • the video content is analyzed using, for example, video recognition software.
  • products, brands and/or green screen objects within the video content are identified.
  • additional information for each of the identified products, brands and/or green screen objects within the video content may be extracted. For example, additional information may be related to location information, duration on screen, whether the identified product was created using computer-generated imagery (CGI), color, shading, size, or the like.
  • audio associated with the video content is analyzed using, for example, speech analysis software.
  • the post-production product/brand analyzer can determine whether any of the identified products and/or brands have been referred to within audio associated with the video content. If identified products and/or brands have been referred to within audio associated with the video content, the method proceeds to block 540, where a time stamp is associated with the products and/or brands, and any characters referring to the products and/or brands are identified.
  • the post-production product/brand analyzer can determine whether automated dialogue replacement (ADR) will be needed to replace the products and/or brands and adjust a product placement score for the associated products and/or brands accordingly (e.g., the product placement score will be reduced when ADR is required). The method would then proceed to block 550.
  • each of the identified products, brands and/or green screen objects within the video content is assigned a product placement score.
  • a variety of factors may affect the product placement score for a given product, brand and/or green screen object.
  • the product placement scores of the products, brands and/or green screen objects within the video content are aggregated into a dataset.
  • the output dataset may be used by the video content creator and/or one or more advertisers to assign a value/cost to an advertising space within the video content, ensure that agreed-upon terms for a product advertisement within the video content are complied with, determine whether a scene re-shoot is required, determine which products and/or brands can be replaced based on locale, or the like (a minimal, hypothetical sketch of this end-to-end flow follows this list).
  • the present invention may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
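
As referenced at the end of the flow description above, the following is a minimal, hypothetical sketch of the FIG. 5 flow from upload through dataset aggregation. Every stage function is a placeholder supplied by the caller; only blocks 540 and 550 are named in the text above, and the score adjustment applied when ADR is required is an illustrative assumption rather than the claimed method.

```python
# Illustrative sketch only: an outline of the FIG. 5 flow (upload, visual analysis,
# identification, audio analysis, mention handling, scoring, aggregation).
from dataclasses import dataclass, field

@dataclass
class IdentifiedObject:
    name: str
    kind: str                                   # "product", "brand", or "green screen"
    info: dict = field(default_factory=dict)    # location, screen time, CGI flag, color, shading, size, ...
    mention_timestamps: list = field(default_factory=list)
    needs_adr: bool = False
    placement_score: float = 0.0

def analyze_video_content(video_path, identify, analyze_audio, base_score):
    """identify, analyze_audio, and base_score are hypothetical stage functions."""
    objects = identify(video_path)          # analyze video, identify products/brands/green screen objects
    mentions = analyze_audio(video_path)    # analyze associated audio with speech analysis software
    for obj in objects:
        hits = [m for m in mentions if m["name"] == obj.name]
        if hits:
            # block 540: time stamp the mention and note who referred to the product/brand
            obj.mention_timestamps = [m["start"] for m in hits]
            obj.needs_adr = any(m.get("speaker_on_screen") for m in hits)
        obj.placement_score = base_score(obj)   # block 550: assign a product placement score
        if obj.needs_adr:
            obj.placement_score *= 0.8          # assumed reduction when ADR would be required
    return objects                              # aggregated into the output dataset

# Toy usage with stub stage functions:
dataset = analyze_video_content(
    "final_cut.mp4",
    identify=lambda path: [IdentifiedObject("Acme Cola", "product")],
    analyze_audio=lambda path: [{"name": "Acme Cola", "start": 62.0, "speaker_on_screen": True}],
    base_score=lambda obj: 50.0,
)
print([(o.name, o.placement_score) for o in dataset])
```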

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the invention include methods, systems, and computer program products for identifying one or more products, brands and/or green screen objects within video content and valuations thereof. The computer-implemented method includes receiving, by a processor, video content. The processor analyzes the video content to identify one or more products, brands and/or green screen objects within the video content. The processor further assigns a product placement score to each of the identified one or more products, brands and/or green screen objects. The processor further outputs a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score.

Description

BACKGROUND
The present invention relates in general to generating video content, and more specifically, to managing and editing portions of video content.
Product placement is an advertising technique used by companies to subtly promote their products through a non-traditional advertising technique, usually through appearances in film, television, or other media. Targeted advertising is a form of advertising in which online advertisers use sophisticated methods directed towards audiences with certain traits to focus on consumers who are likely to have a strong preference for a product (i.e., customers that may have more interest in a product will receive the message instead of those who have no interest and whose preferences do not match a product's attributes).
Negotiations regarding product placement and/or targeted advertising traditionally occur in pre-production before a film, television, or other media is produced. Valuations for product placement and/or targeted advertising during pre-production can be problematic due to re-shoots and editing during production.
SUMMARY
Embodiments of the invention are directed to a method for identifying one or more products, brands and/or green screen objects within video content and valuations thereof. A non-limiting example of the computer-implemented method includes receiving, by a processor, video content. The processor analyzes the video content to identify one or more products, brands and/or green screen objects within the video content. The processor further assigns a product placement score to each of the identified one or more products, brands and/or green screen objects. The processor further outputs a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score.
Embodiments of the invention are directed to a computer program product that can include a storage medium readable by a processing circuit that can store instructions for execution by the processing circuit for performing a method for identifying one or more products, brands and/or green screen objects within video content and valuations thereof. The method includes receiving video content. The processor analyzes the video content to identify one or more products, brands and/or green screen objects within the video content. The processor further assigns a product placement score to each of the identified one or more products, brands and/or green screen objects. The processor further outputs a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score.
Embodiments of the invention are directed to a system. The system can include a processor in communication with one or more types of memory. The processor can be configured to receive video content. The processor can be further configured to analyze the video content to identify one or more products, brands and/or green screen objects within the video content. The processor can be further configured to assign a product placement score to each of the identified one or more products, brands and/or green screen objects. The processor can be further configured to output a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score.
Additional technical features and benefits are realized through the techniques of the present invention. Embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed subject matter. For a better understanding, refer to the detailed description and to the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a diagram illustrating an exemplary operating environment according to one or more embodiments of the present invention;
FIG. 2 is a block diagram illustrating one example of a portion of the processing system of one or more of the computing devices described in FIG. 1 for practice of the teachings herein;
FIG. 3 is a block diagram illustrating a computing system according to one or more embodiments of the present invention;
FIG. 4 is a diagram illustrating an exemplary scene having products and green screen objects according to one or more embodiments of the present invention; and
FIG. 5 is a flow diagram illustrating a method for identifying one or more products, brands and/or green screen objects within video content and valuations thereof in accordance with one or more embodiments of the present invention.
The diagrams depicted herein are illustrative. There can be many variations to the diagram or the operations described therein without departing from the spirit of the invention. For instance, the actions can be performed in a differing order or actions can be added, deleted or modified. In addition, the term “coupled” and variations thereof describes having a communications path between two elements and does not imply a direct connection between the elements with no intervening elements/connections between them. All of these variations are considered a part of the specification.
In the accompanying figures and following detailed description of the disclosed embodiments of the invention, the various elements illustrated in the figures are provided with two or three digit reference numbers. With minor exceptions, the leftmost digit(s) of each reference number correspond to the figure in which its element is first illustrated.
DETAILED DESCRIPTION
Various embodiments of the invention are described herein with reference to the related drawings. Alternative embodiments of the invention can be devised without departing from the scope of this invention. Various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the present invention is not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship. Moreover, the various tasks and process steps described herein can be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein.
The following definitions and abbreviations are to be used for the interpretation of the claims and the specification. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” “contains” or “containing,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a composition, a mixture, process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but can include other elements not expressly listed or inherent to such composition, mixture, process, method, article, or apparatus.
Additionally, the term “exemplary” is used herein to mean “serving as an example, instance or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms “at least one” and “one or more” may be understood to include any integer number greater than or equal to one, i.e. one, two, three, four, etc. The terms “a plurality” may be understood to include any integer number greater than or equal to two, i.e. two, three, four, five, etc. The term “connection” may include both an indirect “connection” and a direct “connection.”
The terms “about,” “substantially,” “approximately,” and variations thereof, are intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, “about” can include a range of ±8% or 5%, or 2% of a given value.
For the sake of brevity, conventional techniques related to making and using aspects of the invention may or may not be described in detail herein. In particular, various aspects of computing systems and specific computer programs to implement the various technical features described herein are well known. Accordingly, in the interest of brevity, many conventional implementation details are only mentioned briefly herein or are omitted entirely without providing the well-known system and/or process details.
Turning now to an overview of technologies that are more specifically relevant to aspects of the invention, the aspects described herein relate to advertising within video content. Product placement and/or targeted marketing in video content can drive revenue for companies by subtly advertising their products within the video content.
Agreements regarding advertising (i.e., product placement and/or targeted marketing) within video content typically occur in pre-production; however, such agreements can be problematic because how and where products have been placed, as well as how products are portrayed in a final product of the video content can differ from the terms agreed due to editing or other production considerations. Unfortunately, few changes to the video content can occur once the video content is shot (post-production) to address discrepancies with the agreed terms. Moreover, valuations for selling product placement in video content are difficult, especially in instances where changes are desired in post-production.
Accordingly, locating products that exist in video content after production occurs, and determining where products can be inserted into video content (e.g., television, movies, streaming services, etc.) in post-production, is needed. In addition, determining a product placement score for each location based on criteria such as screen time, lighting, and visual focal points, which can be used to sell advertisements in the video content, determine whether additional editing or re-shoots are needed, or determine which products can be replaced in post-production, would be beneficial. Hence, a solution that allows video content creators and advertisement companies to more fully understand the value of where and how products are represented in video content, and that provides an ability to make changes to the video content in post-production to address issues between video content creators and advertisement companies, would be useful for creating a final version of the video content.
The above-described aspects of the invention address the shortcomings of the prior art by creating and utilizing a product placement score output dataset which can be used by video content creators to sell targeted advertising space within recorded video content. In addition, areas within the video content not being utilized can be identified for possible product placement in post-production. Accordingly, products in video content can be better valued for discussions regarding advertising.
FIG. 1 is a block diagram illustrating an operating environment 100 according to one or more embodiments of the present invention. The environment 100 can include one or more computing devices, for example, personal digital assistant (PDA) or cellular telephone (mobile device) 54A, server 54B, computer 54C, and/or storage device 54D which are connected via network 150. The one or more computing devices may communicate with one another using network 150. Storage device 54D can include network attached and/or remote storage.
Network 150 can be, for example, a local area network (LAN), a wide area network (WAN), such as the Internet, a dedicated short-range communications network, or any combination thereof, and may include wired, wireless, fiber optic, or any other connection. Network 150 can be any combination of connections and protocols that will support communication between mobile device 54A, server 54B, computer 54C, and/or storage device 54D respectively.
Referring to FIG. 2, there is shown an embodiment of a processing system 200 for implementing the teachings herein. The processing system 200 can form at least a portion of one or more computing devices, mobile device 54A, server 54B, computer 54C and storage device 54D. In this embodiment, the processing system 200 has one or more central processing units (processors) 201 a, 201 b, 201 c, etc. (collectively or generically referred to as processor(s) 201). In one embodiment, each processor 201 may include a reduced instruction set computer (RISC) microprocessor. Processors 201 are coupled to system memory 214 and various other components via a system bus 213. Read only memory (ROM) 202 is coupled to the system bus 213 and may include a basic input/output system (BIOS), which controls certain basic functions of the processing system 200.
FIG. 2 further depicts an input/output (I/O) adapter 207 and a network adapter 206 coupled to the system bus 213. I/O adapter 207 may be a small computer system interface (SCSI) adapter that communicates with a hard disk 203 and/or tape storage drive 205 or any other similar component. I/O adapter 207, hard disk 203, and tape storage device 205 are collectively referred to herein as mass storage 204. Operating system 220 for execution on the processing system 200 may be stored in mass storage 204. A network adapter 206 interconnects bus 213 with an outside network 216 enabling data processing system 200 to communicate with other such systems. A screen (e.g., a display monitor) 215 can be connected to system bus 213 by display adaptor 212, which may include a graphics adapter to improve the performance of graphics intensive applications and a video controller. In one embodiment, adapters 207, 206, and 212 may be connected to one or more I/O busses that are connected to system bus 213 via an intermediate bus bridge (not shown). Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI). Additional input/output devices are shown as connected to system bus 213 via user interface adapter 208 and display adapter 212. A keyboard 209, mouse 210, and speaker 211 can all be interconnected to bus 213 via user interface adapter 208, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit.
In exemplary embodiments, the processing system 200 includes a graphics-processing unit 230. Graphics processing unit 230 is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. In general, graphics-processing unit 230 is very efficient at manipulating computer graphics and image processing, and has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel.
Thus, as configured in FIG. 2, the processing system 200 includes processing capability in the form of processors 201, storage capability including system memory 214 and mass storage 204, input means such as keyboard 209 and mouse 210, and output capability including speaker 211 and display 215. In one embodiment, a portion of system memory 214 and mass storage 204 collectively store an operating system to coordinate the functions of the various components shown in FIG. 2.
Referring now to FIG. 3, there is illustrated a computing system 300 in accordance with one or more embodiments of the invention. As illustrated, the computing system 300 can include, but is not limited to, a post-production product/brand analyzer 310. The post-production product/brand analyzer 310 can be used to analyze video content 305 received by the post-production product/brand analyzer 310 in order to determine products, product locations and useful locations within the video content 305 which can be used to generate a product placement scoring dataset for the video content 305.
The post-production product/brand analyzer 310 can also include a product/brand image database 315. The product/brand image database 315 can store a plurality of images, 3D models, generic models, etc., of shapes or objects that may be associated with products or brands within the video content 305. The stored images can be used for comparison with products within the video content 305.
The post-production product/brand analyzer 310 can also include a product identification system 320, which can be made up of, for example, a visual recognition software analyzer 325 and speech software analyzer 330. The product identification system 320 can use the visual recognition software analyzer 325 to identify products within the video content 305 using, for example, video recognition software (ex., IBM Watson® Video Recognition API). The visual recognition software of the visual recognition software analyzer 325 may also extract, from the video content 305, a context associated with the identified product or brand (e.g., was a baseball bat used for the game winning hit or to commit a crime). The visual recognition software of the visual recognition software analyzer 325 may also identify “green screen” objects that were purposely left unbranded during recording video content 305 which may be subsequently replaced later should a location associated with the green screen object be purchased by an advertiser. Herein, “green screen” objects may refer to areas in a scene in which a visual effects/post-production technique for compositing (layering) two images or video streams together based on color hues (chroma range) will be employed, but may also refer in general to potential areas, voids or locations of objects with non-relevant information (e.g., a billboard located in the background of a scene) within a scene that have been identified as locations to potentially add products or brands during post-production.
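The passage above names a commercial recognition service for this step; purely as an illustration of comparing frames against the product/brand image database 315, the following minimal sketch uses simple OpenCV template matching over video frames. The directory layout, similarity threshold, and matching approach are assumptions for the example and are far cruder than the recognition software the analyzer 325 contemplates.

```python
# Illustrative sketch only: a naive stand-in for the visual recognition software
# analyzer 325 that template-matches reference logo images (a toy proxy for the
# product/brand image database 315) against each video frame.
import cv2
from pathlib import Path

MATCH_THRESHOLD = 0.8  # assumed similarity threshold, not taken from the patent

def find_products_in_video(video_path, template_dir):
    """Yield (frame_index, brand, top_left_xy, similarity) for candidate logo matches."""
    templates = {
        p.stem: cv2.imread(str(p), cv2.IMREAD_GRAYSCALE)
        for p in Path(template_dir).glob("*.png")
    }
    cap = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for brand, template in templates.items():
            if template is None:
                continue
            if template.shape[0] > gray.shape[0] or template.shape[1] > gray.shape[1]:
                continue  # template must fit inside the frame
            result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            if max_val >= MATCH_THRESHOLD:
                yield frame_index, brand, max_loc, max_val
        frame_index += 1
    cap.release()
```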
The product identification system 320 can use the speech software analyzer 330 to identify speech associated with products within the video content 305 using, for example, speech analysis software (ex., Watson™ Speech to Text API, Natural Language API, AlchemyLanguage API, Tone Analyzer API, etc.). Accordingly, audio associated with the video content 305 can be converted from speech to text. The text can be searched to determine whether brand names are spoken within the video content 305, along with any associated brand slogans or trademarked terms.
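The speech-to-text conversion itself is delegated to external services in the passage above; the sketch below therefore assumes a transcript has already been produced as time-stamped segments and only illustrates the downstream search for brand names, slogans, or trademarked terms. The segment format and the brand lexicon are assumptions, not part of the disclosure.

```python
# Illustrative sketch only: searching a time-stamped transcript (assumed to come
# from a separate speech-to-text service) for brand names, slogans, and trademarks.
import re
from dataclasses import dataclass

@dataclass
class BrandMention:
    brand: str
    term: str          # the exact phrase that matched (name, slogan, or trademark)
    start_s: float     # timestamp of the segment containing the mention
    text: str          # surrounding text, useful later for context scoring

def find_brand_mentions(segments, brand_lexicon):
    """segments: [{"start": 12.3, "text": "..."}]; brand_lexicon: {"Acme": ["acme", "acme cola"]}."""
    mentions = []
    for seg in segments:
        lowered = seg["text"].lower()
        for brand, terms in brand_lexicon.items():
            for term in terms:
                if re.search(r"\b" + re.escape(term.lower()) + r"\b", lowered):
                    mentions.append(BrandMention(brand, term, seg["start"], seg["text"]))
    return mentions

# Example usage with toy data:
segments = [{"start": 62.0, "text": "That was refreshing, hand me another Acme cola."}]
print(find_brand_mentions(segments, {"Acme": ["acme", "acme cola"]}))
```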
The post-production product/brand analyzer 310 can also include a product placement scoring engine 335. Upon identifying products and green screen objects, visually and/or audibly, within the video content 305, a valuation can be associated with each of the identified product and green screen object identified in the video content 305. Using a combination of audio and video data associated with the identified products and green screen objects, a product placement score can be given to an identified product location. The valuation can be based on a product placement score assigned to the identified product location generated by the product placement scoring engine 335.
Factors that may affect the product placement score generated by the product placement scoring engine 335 for an identified product or green screen object can include: a duration of time the identified product or green screen object is displayed (i.e., screen time); a size of the brand placement or green screen space in comparison to the entire viewing screen; a location of the identified product or green screen object within the viewing screen (i.e., whether viewers of the video content 305 will be drawn to that location given what is happening within a scene, or whether the identified product or green screen object is held by an actor or positioned between two actors on the viewing screen); whether at least a portion of the identified product or green screen object is obscured by other objects or cut off within the viewing screen; a context associated with how the identified product was used in the video content 305 (e.g., a baseball bat used for the game-winning hit will positively affect the product placement score, whereas a baseball bat used to commit a crime may negatively affect the product placement score); the post-processing work that would be required to replace the identified product if the identified product is moving in a scene; whether, within the video content 305, actors talk about an identified product without specifically mentioning it by name (e.g., "that was refreshing" after taking a drink may positively impact the product placement score, whereas "that tastes awful" after taking a bite of food may negatively impact the product placement score); and whether an identified product was mentioned by name (e.g., an identified product that is mentioned by an actor on screen may be difficult to replace and require Automated Dialogue Replacement (ADR) from the actor, but if the actor is not on screen, replacing the identified product may be easier).
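A toy scoring heuristic reflecting the factors above. The weights, field names, and the specific formula are illustrative assumptions, not values or logic taken from the patent.

from dataclasses import dataclass


@dataclass
class PlacementObservation:
    screen_time_s: float        # total duration on screen, in seconds
    area_fraction: float        # placement size / full frame size, 0..1
    centrality: float           # 0 (edge of frame) .. 1 (focal point, held by or between actors)
    obscured_fraction: float    # 0 (fully visible) .. 1 (fully obscured or cut off)
    context: int                # +1 positive use, 0 neutral, -1 negative use
    moving: bool                # replacement needs heavier post-processing if True
    mentioned_on_screen: bool   # spoken by an on-screen actor, so ADR would be required


def product_placement_score(obs: PlacementObservation) -> float:
    score = 10.0 * obs.screen_time_s * obs.area_fraction * (0.5 + obs.centrality)
    score *= (1.0 - obs.obscured_fraction)     # penalize obscured or cut-off placements
    score += 5.0 * obs.context                 # positive vs. negative context of use
    if obs.moving:
        score -= 3.0                           # harder to replace in post-production
    if obs.mentioned_on_screen:
        score -= 4.0                           # Automated Dialogue Replacement needed
    return round(score, 2)


print(product_placement_score(PlacementObservation(8.0, 0.05, 0.9, 0.1, +1, False, False)))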
Upon scoring individual products and green screen objects within the video content 305, the post-production product/brand analyzer 310 can generate and output a dataset 340 which can be used by video content creators to sell targeted advertising space more easily for previously recorded video content. The dataset 340 may include brands/products, product placement scores, locations, timestamps, visual information (e.g., size, color, shading, etc.), and audio information (e.g., product mentioned by name, what was said about a product, etc.).
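One possible shape for an entry in the output dataset 340, mirroring the information listed above; the record class and its field names are assumptions chosen for illustration.

from dataclasses import dataclass, asdict
from typing import List, Optional, Tuple


@dataclass
class PlacementRecord:
    item: str                       # product, brand, or green screen object identifier
    product_placement_score: float
    location: Tuple[int, int, int, int]   # (x, y, width, height) within the frame
    timestamps: List[float]         # seconds at which the item appears
    visual_info: dict               # e.g., {"size": ..., "color": ..., "shading": ...}
    audio_info: Optional[dict]      # e.g., {"mentioned_by_name": True, "quote": "..."}


record = PlacementRecord("Product 2", 14.3, (640, 360, 120, 200), [12.5, 47.0],
                         {"size": "medium", "color": "red"},
                         {"mentioned_by_name": True, "quote": "that was refreshing"})
print(asdict(record))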
The video content creators can also use the dataset 340 to identify which products may be specific to a particular locale (e.g., the United States) and therefore may not be relevant or useful, from an advertising perspective, if the video content 305 is aired in a different locale (e.g., Singapore). Accordingly, the video content creators can sell the advertising space associated with the identified products and attempt to replace them with products associated with the target locale.
Also, for advertising space already sold in the video content 305, the dataset 340 may be used to confirm that the products have been included in the video content 305 as agreed. For example, an advertiser may agree with a video content creator to include a product for a certain price, as long as the product has a certain product placement score in a final version of the video content 305. At various stages of production, the video content creator can confirm the agreed product placement score for the advertised product within the video content 305. If the product placement score for the product is below what was agreed, the product can be flagged and the video content creator can perform a re-shoot of a scene associated with the advertised product.
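A small sketch of that compliance check: compare the score measured in the current cut against the score agreed with the advertiser and flag placements whose scenes may need review or a re-shoot. The record and agreement shapes are assumptions.

def flag_underperforming_placements(dataset, agreements):
    """dataset: list of dicts with 'item' and 'product_placement_score';
    agreements: dict mapping item -> agreed minimum product placement score."""
    flagged = []
    for record in dataset:
        agreed = agreements.get(record["item"])
        if agreed is not None and record["product_placement_score"] < agreed:
            flagged.append({"item": record["item"],
                            "measured": record["product_placement_score"],
                            "agreed": agreed,
                            "action": "review/re-shoot scene"})
    return flagged


print(flag_underperforming_placements(
    [{"item": "Product 2", "product_placement_score": 9.0}],
    {"Product 2": 12.0}))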
FIG. 4 is a diagram illustrating an exemplary scene 400 of video content 305 including a plurality of products and green screen objects according to one or more embodiments of the present invention. In scene 400, an actor 405 may be conversing with actor 410. Actor 405 may be wearing a t-shirt including a logo of Product 1. Actor 410 may be holding Product 2. Product 3 may be located between actor 405 and actor 410. A portion of product 4 may be located outside of scene 400. Scene 400 may also include green screen object 1 and green screen object 2.
When scene 400 is input into the post-production product/brand analyzer 310, the post-production product/brand analyzer 310 can indicate via a product placement score that the logo of Product 1 located on the t-shirt of actor 405 is a candidate for post-production replacement. While advertising Product 1 may be appropriate for a given market or country, advertising Product 1 in other markets or countries may not be beneficial to advertisers. Accordingly, the video content creator may create a valuation for advertising Product 1 in one or more agreed markets and/or countries, and advertise other products in other markets and/or countries.
The product placement score can take into account the fact that the eyes of the viewers will be drawn to the actor. The product placement score may be lowered if actor 405 moves frequently, making Product 1 difficult to view.
Because actor 410 is holding Product 2, the eyes of the viewers may be drawn to Product 2, especially if the brand is easily readable. Accordingly, the product placement score would increase due to the location of Product 2 within scene 400. If Product 2 is mentioned by actor 410, the product placement score may be reduced because replacing the product would require altering the actor's speech within the video content to remove the mention of Product 2. The eyes of the viewers may also be drawn to Product 3 because Product 3 is located between actor 405 and actor 410; accordingly, how Product 3 is presented will affect its product placement score. The product placement score of Product 4 may be affected by the fact that a portion of Product 4 is not viewable within scene 400.
While green screen object 1 and green screen object 2 do not include any product placements within scene 400, the post-production product/brand analyzer 310 may analyze green screen object 1 and green screen object 2 to determine whether they would be useful as targeted advertising space within a final version of the video content 305. For example, green screen object 1 may be useful as targeted advertising space because green screen object 1 is located between actor 405 and actor 410. Accordingly, the product placement score for green screen object 1 will take into account the location of green screen object 1 in relation to actor 405 and actor 410. The product placement score for green screen object 2 will be affected by the fact that green screen object 2 is located away from actor 405 and actor 410.
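Illustrative only: one way to weight a green screen object's score by its distance to the nearest actor, so that an object between the actors (like green screen object 1) scores higher than one located away from them (like green screen object 2). The linear falloff and the coordinates are assumptions.

from math import hypot


def proximity_weight(object_center, actor_centers, frame_diagonal):
    """Return a 0..1 weight that is 1 when the object sits on an actor and
    decays toward 0 as its distance approaches a full frame diagonal."""
    nearest = min(hypot(object_center[0] - ax, object_center[1] - ay)
                  for ax, ay in actor_centers)
    return max(0.0, 1.0 - nearest / frame_diagonal)


actors = [(500, 400), (900, 400)]                                   # actor 405 and actor 410
print(proximity_weight((700, 400), actors, hypot(1920, 1080)))       # green screen object 1
print(proximity_weight((150, 120), actors, hypot(1920, 1080)))       # green screen object 2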
FIG. 5 is a flow diagram illustrating a method for identifying one or more products, brands and/or green screen objects within video content and valuations thereof according to one or more embodiments of the present invention. At block 505, video content that has been recorded is uploaded into a post-production product/brand analyzer to determine relevant products and/or green screen objects that may be useful for advertising products within the video content. At block 510, the video content is analyzed using, for example, video recognition software. At block 515, based on the visual analysis, products, brands and/or green screen objects within the video content are identified. At block 520, additional information for each of the identified products, brands and/or green screen objects within the video content may be extracted. For example, the additional information may relate to location information, duration on screen, whether the identified product was created using computer-generated imagery (CGI), color, shading, size, or the like.
At block 525, audio associated with the video content is analyzed using, for example, speech analysis software. At block 530, the post-production product/brand analyzer can determine whether any of the identified products and/or brands have been referred to within audio associated with the video content. If identified products and/or brands have been referred to within audio associated with the video content, the method proceeds to block 540, where a time stamp is associated with the products and/or brands, and any characters referring to the products and/or brands are identified. At block 545, the post-production product/brand analyzer can determine whether ADR will be needed to replace the products and/or brands and adjust a product placement score for the associated products and/or brands accordingly (e.g., the product placement score will be reduced when ADR is required). The method would then proceed to block 550.
If identified products and/or brands have not been referred to within audio associated with the video content, the method proceeds directly to block 550, where each of the identified products, brands and/or green screen objects within the video content is assigned a product placement score. As mentioned above, a variety of factors may affect the product placement score for a given product, brand and/or green screen object. At block 555, the product placement scores of the products, brands and/or green screen objects within the video content are aggregated into a dataset. The output dataset may be used by the video content creator and/or one or more advertisers to assign a value/cost to an advertising space within the video content, ensure that agreed-upon terms for a product advertisement within the video content are met, determine whether a scene re-shoot is required, determine which products and/or brands can be replaced based on locale, or the like.
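A high-level sketch of the FIG. 5 flow, with each stage injected as a callable so the orchestration stays independent of any particular recognition or speech service. All stage names, the record shapes, and the ADR penalty value are placeholders, not the patent's implementation.

def build_placement_dataset(video, audio,
                            visual_analyzer,     # video -> list of {"name": ...} items (blocks 510-520)
                            audio_analyzer,      # audio -> list of {"brand": ...} mentions (blocks 525-540)
                            scorer,              # (item, mentions) -> float (block 550)
                            adr_penalty=4.0):
    mentions = audio_analyzer(audio)
    mentioned_names = {m["brand"] for m in mentions}
    dataset = []
    for item in visual_analyzer(video):
        score = scorer(item, mentions)
        if item["name"] in mentioned_names:      # block 545: ADR would be needed to replace it
            score -= adr_penalty
        dataset.append({"item": item["name"], "product_placement_score": score})
    return dataset                               # block 555: aggregated output dataset


# Usage with trivial stub stages:
print(build_placement_dataset(
    "video", "audio",
    visual_analyzer=lambda v: [{"name": "Product 2"}],
    audio_analyzer=lambda a: [{"brand": "Product 2"}],
    scorer=lambda item, mentions: 14.3))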
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

Claims (17)

What is claimed is:
1. A computer-implemented method for identifying one or more products, brands and/or green screen objects within video content and valuations thereof, the method comprising:
receiving, by a processor, video content;
analyzing, by the processor, the video content to identify one or more products, brands and/or green screen objects within the video content;
assigning, by the processor, a product placement score to each of the identified one or more products, brands and/or green screen objects, wherein the product placement score for each of the product, brand and/or green screen object is based on a screen time duration and location of each of the product, brand and/or green screen object in the video content;
modifying, by the processor, the product placement score for a first object of the identified one or more products, brands and/or green screen objects based on determining a positive context or a negative context of a use of the first object in the video content;
outputting, by the processor, a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score; and
modifying the video content to replace the first object of the one or more products, brands and/or green screen objects with a second object.
2. The computer-implemented method of claim 1, wherein the video content is analyzed using a visual analysis to identify one or more products, brands and/or green screen objects within the video content.
3. The computer-implemented method of claim 1, wherein the video content is analyzed using an audio analysis to identify one or more products, brands and/or green screen objects within the video content.
4. The computer-implemented method of claim 3, further comprising using the audio analysis to determine that one or more products, brands and/or green screen objects identified using visual analysis has been referred to within the video content.
5. The computer-implemented method of claim 1, further comprising:
obtaining an agreed product placement score for a first product of the one or more products, brands, and/or green screen objects; and
based on a determination that the product placement score for the first product is below the agreed product placement score, flagging the first product for a re-shoot of one or more scenes including a placement of the first product.
6. The computer-implemented method of claim 1, wherein the replacement of one or more of the identified one or more products, brands and/or green screen objects within the video content is in response to a change in advertising locale.
7. The computer-implemented method of claim 1, wherein a green screen object is associated with locations within the video content that are not associated with a product and/or brand when the video content is analyzed.
8. A computer program product for identifying one or more products, brands and/or green screen objects within video content and valuations thereof, the computer program product comprising:
a non-transitory computer readable storage medium having stored thereon first program instructions executable by a processor to cause the processor to:
receive video content;
analyze the video content to identify one or more products, brands and/or green screen objects within the video content;
assign a product placement score to each of the identified one or more products, brands and/or green screen objects, wherein the product placement score for each of the product, brand and/or green screen object is based on a screen time duration and location of each of the product, brand and/or green screen object in the video content;
modify the product placement score for a first object of the identified one or more products, brands and/or green screen objects based on determining a positive context or a negative context of a use of the first object in the video content;
output a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score; and
modify the video content to replace the first object of the one or more products, brands and/or green screen objects with a second object.
9. The computer program product of claim 8, wherein the video content is analyzed using a visual analysis to identify one or more products, brands and/or green screen objects within the video content.
10. The computer program product of claim 8, wherein the video content is analyzed using an audio analysis to identify one or more products, brands and/or green screen objects within the video content.
11. The computer program product of claim 10, further comprising using the audio analysis to determine that one or more products, brands and/or green screen objects identified using visual analysis has been referred to within the video content.
12. The computer program product of claim 8, wherein the replacement of one or more of the identified one or more products, brands and/or green screen objects within the video content is in response to a change in advertising locale.
13. The computer program product of claim 8, wherein a green screen object is associated with locations within the video content that are not associated with a product and/or brand when the video content is analyzed.
14. A system comprising:
a storage medium coupled to a processor;
the processor configured to:
receive video content;
analyze the video content to identify one or more products, brands and/or green screen objects within the video content;
assign a product placement score to each of the identified one or more products, brands and/or green screen objects, wherein the product placement score for each of the product, brand and/or green screen object is based on a screen time duration and location of each of the product, brand and/or green screen object in the video content;
modify the product placement score for a first object of the identified one or more products, brands and/or green screen objects based on determining a positive context or a negative context of a use of the first object in the video content;
output a dataset including product placement scores assigned to each of the identified one or more products, brands and/or green screen objects, wherein the dataset provides a valuation for each of the identified one or more products, brands and/or green screen objects based on an associated product placement score; and
modify the video content to replace the first object of the one or more products, brands and/or green screen objects with a second object.
15. The system of claim 14, wherein the video content is analyzed using a visual analysis to identify one or more products, brands and/or green screen objects within the video content.
16. The system of claim 14, wherein the video content is analyzed using an audio analysis to identify one or more products, brands and/or green screen objects within the video content.
17. The system of claim 16, further comprising using the audio analysis to determine that one or more products, brands and/or green screen objects identified using visual analysis has been referred to within the video content.
US15/838,736 2017-12-12 2017-12-12 Recognition and valuation of products within video content Expired - Fee Related US10614313B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/838,736 US10614313B2 (en) 2017-12-12 2017-12-12 Recognition and valuation of products within video content


Publications (2)

Publication Number Publication Date
US20190180108A1 US20190180108A1 (en) 2019-06-13
US10614313B2 (en) 2020-04-07


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108877718B (en) * 2018-07-24 2021-02-02 武汉华星光电技术有限公司 GOA circuit and display device
US12026201B2 (en) * 2021-05-31 2024-07-02 Google Llc Automated product identification within hosted and streamed videos
US20230308708A1 (en) * 2022-03-25 2023-09-28 Donde Fashion, Inc. Systems and methods for controlling a user interface for presentation of live media streams
US11983386B2 (en) * 2022-09-23 2024-05-14 Coupang Corp. Computerized systems and methods for automatic generation of livestream carousel widgets


Patent Citations (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7207053B1 (en) * 1992-12-09 2007-04-17 Sedna Patent Services, Llc Method and apparatus for locally targeting virtual objects within a terminal
US20030028873A1 (en) * 2001-08-02 2003-02-06 Thomas Lemmons Post production visual alterations
US20040194128A1 (en) * 2003-03-28 2004-09-30 Eastman Kodak Company Method for providing digital cinema content based upon audience metrics
US20070214476A1 (en) * 2006-03-07 2007-09-13 Sony Computer Entertainment America Inc. Dynamic replacement of cinematic stage props in program content
US20080033804A1 (en) * 2006-07-14 2008-02-07 Vulano Group, Inc. Network architecture for dynamic personalized object placement in a multi-media program
US20080033801A1 (en) * 2006-07-14 2008-02-07 Vulano Group, Inc. System for dynamic personalized object placement in a multi-media program
US9058764B1 (en) * 2007-11-30 2015-06-16 Sprint Communications Company L.P. Markers to implement augmented reality
US8312486B1 (en) * 2008-01-30 2012-11-13 Cinsay, Inc. Interactive product placement system and method therefor
US20090249386A1 (en) * 2008-03-31 2009-10-01 Microsoft Corporation Facilitating advertisement placement over video content
US20090327346A1 (en) * 2008-06-30 2009-12-31 Nokia Corporation Specifying media content placement criteria
US8191089B2 (en) * 2008-09-10 2012-05-29 National Taiwan University System and method for inserting advertisement in contents of video program
US8458053B1 (en) * 2008-12-17 2013-06-04 Google Inc. Click-to buy overlays
US20100313218A1 (en) * 2009-06-03 2010-12-09 Visible World, Inc. Targeting Television Advertisements Based on Automatic Optimization of Demographic Information
US20100318406A1 (en) * 2009-06-12 2010-12-16 Frank Zazza Quantitative Branding Analysis
US9508080B2 (en) * 2009-10-28 2016-11-29 Vidclx, Llc System and method of presenting a commercial product by inserting digital content into a video stream
US20110125573A1 (en) * 2009-11-20 2011-05-26 Scanscout, Inc. Methods and apparatus for optimizing advertisement allocation
US20110219402A1 (en) * 2010-03-05 2011-09-08 Sony Corporation Apparatus and method for replacing a broadcasted advertisement based on heuristic information
US20120030012A1 (en) * 2010-07-28 2012-02-02 Michael Fisher Yield optimization for advertisements
US20120044250A1 (en) * 2010-08-18 2012-02-23 Demand Media, Inc. Systems, Methods, and Machine-Readable Storage Media for Presenting Animations Overlying Multimedia Files
US20120084155A1 (en) * 2010-10-01 2012-04-05 Yahoo! Inc. Presentation of content based on utility
US20120192226A1 (en) * 2011-01-21 2012-07-26 Impossible Software GmbH Methods and Systems for Customized Video Modification
US20120257842A1 (en) * 2011-04-11 2012-10-11 Aibo Tian System and method for determining image placement on a canvas
US20120290987A1 (en) * 2011-05-13 2012-11-15 Gupta Kalyan M System and Method for Virtual Object Placement
US20130141530A1 (en) * 2011-12-05 2013-06-06 At&T Intellectual Property I, L.P. System and Method to Digitally Replace Objects in Images or Video
US8949889B1 (en) * 2012-07-09 2015-02-03 Amazon Technologies, Inc. Product placement in content
US20160212455A1 (en) * 2013-09-25 2016-07-21 Intel Corporation Dynamic product placement in media content
US20150113555A1 (en) * 2013-10-23 2015-04-23 At&T Intellectual Property I, Lp Method and apparatus for promotional programming
US20150143410A1 (en) * 2013-11-20 2015-05-21 At&T Intellectual Property I, Lp System and method for product placement amplification
US9532086B2 (en) 2013-11-20 2016-12-27 At&T Intellectual Property I, L.P. System and method for product placement amplification
US20170048597A1 (en) 2014-01-10 2017-02-16 ModCon IP LLC Modular content generation, modification, and delivery system
US20160112729A1 (en) * 2014-10-20 2016-04-21 Comcast Cable Communications, Llc Digital content spatial replacement system and method
US20160373814A1 (en) * 2015-06-19 2016-12-22 Autodesk, Inc. Real-time content filtering and replacement
US20180268441A1 (en) * 2017-03-14 2018-09-20 At&T Intellectual Property I, L.P. Targeted user digital embedded advertising
US20190073811A1 (en) * 2017-09-05 2019-03-07 Adobe Systems Incorporated Automatic creation of a group shot image from a short video clip using intelligent select and merge

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210174427A1 (en) * 2014-03-31 2021-06-10 Monticello Enterprises LLC System and method for providing a search entity-based payment process
US11836784B2 (en) * 2014-03-31 2023-12-05 Monticello Enterprises LLC System and method for providing a search entity-based payment process
US11842380B2 (en) 2014-03-31 2023-12-12 Monticello Enterprises LLC System and method for providing a social media shopping experience
US11983759B2 (en) 2014-03-31 2024-05-14 Monticello Enterprises LLC System and method for providing simplified in-store purchases and in-app purchases using a use-interface-based payment API
US11989769B2 (en) 2014-03-31 2024-05-21 Monticello Enterprises LLC System and method for providing simplified in-store, product-based and rental payment processes
US12008629B2 (en) 2014-03-31 2024-06-11 Monticello Enterprises LLC System and method for providing a social media shopping experience
US12045868B2 (en) 2014-03-31 2024-07-23 Monticello Enterprises LLC System and method for receiving data at a merchant device from a user device over a wireless link
US12131370B2 (en) 2014-03-31 2024-10-29 Monticello Enterprises LLC System and method for receiving data at a merchant device from a user device over a wireless link
US12148021B2 (en) 2014-03-31 2024-11-19 Monticello Enterprises LLC System and method for providing an improved payment process over a wireless link
US12236471B2 (en) 2014-03-31 2025-02-25 Monticello Enterprises LLC System and method for providing a social media shopping experience



Legal Events

Date Code Title Description
AS Assignment


Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CATALANO, PASQUALE A.;CRIMMINS, ANDREW G.;TSFASMAN, ARKADIY O.;AND OTHERS;REEL/FRAME:044368/0350

Effective date: 20171207

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20240407