Sorry for the AI fix, but here goes: a Neural Network trained on Symbolism in Advertising, generating virtual Meaning from learned Meanings. The kicker for me is that this Database of Meaning is itself a derivation of our own accumulated Meaning. Sunglasses are cool, sure, but they are also a tool. Then again, Advertising is mighty good at exploiting our Symbolism, and the results are accordingly pretty good at spotting ambivalent messages in ads. Reliable Irony Detector next. (via Samim)
Advertisements are a powerful tool for affecting human behavior. Product ads convince us to make large purchases, e.g. for cars and home appliances, or small but recurrent purchases, e.g. for laundry detergent. Public service announcements (PSAs) encourage different behaviors, e.g. combating domestic violence or driving safely. To stand out from the rest, ads have to be both eye-catching and memorable, while also conveying the information that the ad designer wants to impart. All of this must be done in limited space (one image) and time (however many seconds the viewer is willing to spend looking at an ad).
How can ads get the most “bang for their buck”? One technique is to make references to knowledge that viewers already have, in the form of e.g. cultural knowledge, conceptual mappings, and symbols that humans have learned [58, 39, 61, 38]. These symbolic mappings might come from literature (e.g. a snake symbolizes evil or danger), movies (e.g. motorcycles symbolize adventure or coolness), common sense (a flexed arm symbolizes strength), or even pop culture (Usain Bolt symbolizes speed).
In this paper, we describe how to use symbolic mappings to predict the messages of advertisements. On one hand, we use the symbol bounding boxes and labels from the Ads Dataset as visual anchors to ideas outside the image. On the other hand, we use knowledge sources external to the main task, such as object detection, to better relate ad images to their corresponding messages. Both are forms of outside knowledge that boil down to learning links between objects and symbolic concepts. We use each type of knowledge in two ways: as a constraint on, or as an additive component of, the learned image representation.
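To make the two uses of outside knowledge concrete, here is a minimal sketch of the distinction the paper draws. Everything below is illustrative and hypothetical, not the paper's actual model: the embeddings are random stand-ins for learned features, `alpha` is an invented mixing weight, and `constraint_loss` is a generic hinge-style penalty, one common way to express "constraint" in representation learning.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 64  # hypothetical embedding size

def l2_normalize(v):
    """Scale a vector to unit length (with a small epsilon for safety)."""
    return v / (np.linalg.norm(v) + 1e-8)

# Stand-ins for learned embeddings: an ad image and a symbol label
# (e.g. "danger" for a region containing a snake).
image_embedding = l2_normalize(rng.normal(size=EMB_DIM))
symbol_embedding = l2_normalize(rng.normal(size=EMB_DIM))

# (1) Additive use: fold the symbol embedding directly into the
# image representation before matching it against messages.
alpha = 0.5  # mixing weight, a free hyperparameter
additive_repr = l2_normalize(image_embedding + alpha * symbol_embedding)

# (2) Constraint use: a training-time penalty that pushes the image
# representation toward its annotated symbol's embedding.
def constraint_loss(img, sym, margin=0.2):
    """Hinge penalty: nonzero when cosine similarity < 1 - margin."""
    cos = float(np.dot(img, sym))
    return max(0.0, (1.0 - margin) - cos)

loss = constraint_loss(image_embedding, symbol_embedding)
```

The additive form changes the representation itself at inference time, while the constraint form only shapes it during training; the two can also be combined.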