Impression-Based Advertising: A Cross-Platform Solution

Abstract

This study investigated a possible solution to the challenge of cross-platform advertising measurement. Opinions of advertising experts (agencies, media, clients, researchers, and scholars), gathered through personal interviews (N = 37), informed an assessment of the potential effectiveness of the impression as a cross-platform measure of advertising delivery. Participants agreed that impression measurement could provide a solution to cross-platform measurement challenges. They also identified eight obstacles to adoption of impression-based measurement along with seven additional considerations for media measurement. Finally, an Advertising Process Model is proposed to organize and analyze the concepts.

1. Introduction

Options to deliver advertising messages have virtually exploded in the twenty-first century. Advertisers and their agencies have a broad array of media through which to deliver messages, including television, radio, print, websites, streaming services, mobile, Google, YouTube, Facebook, Twitter, Instagram, Snapchat, video games, “apps” of all kinds, and an almost endless universe of new media options that continues to expand. These trends have stimulated even greater interest among advertising organizations in finding reliable methods of determining the best means of delivering advertising messages. While scholars have long been interested in advertising, media, and advertising effects on individuals, audiences, and cultures, their research has tended to focus on the psychological aspects of concepts such as persuasion and/or engagement, the advertising creative process, or campaign strategies. Formal, quantifiable measurement of advertising delivery has received relatively little attention.

The concept of measurement of delivery focuses on whether the message was actually delivered to a target audience member and then counts the resultant impressions. An impression can be thought of as a single exposure to a given message. Research dealing with measuring advertising delivery is needed in an era with so many varied forms of media and media platforms. This study explored advertising industry perceptions of the impression in cross-platform advertising.

This paper is organized as follows: first, the purpose/background and theories applied to advertising are reviewed. Second, impressions are viewed through the lens of advertising and general systems theory, including a related model followed by a description of the method. Third, the results are shared in the context of each of the research questions. Finally, an advertising process model is proposed and a discussion and analysis of the findings are presented.

1.1. Purpose

This study explored perspectives from a variety of advertising experts regarding the need for, nature of, and potential effectiveness of a cross-platform measure of advertising delivery: the impression. The study sought opinions of diverse experts about their experiences with advertising measurement systems and tools—and what might be important for a more effective, impression-based, holistic industry metric suitable for current and future advertising activities. The goal was to integrate the opinions of these experts to evaluate a system of advertising measurement that addresses the need for consideration of how well advertising has been delivered—in the context of advertising efforts that are increasingly multi-media in character—to audiences who increasingly make use of multiple media and/or devices.

1.2. Background of Problem

Because huge sums of money are spent on advertising, businesses and non-profit organizations that use advertising (as well as the advertising agencies that support them) demand that the measurement of advertising keep up with the evolution of the media landscape.

Advertising and other forms of marketing, communication, and public relations have historically been both “planned and measured on a medium-by-medium basis, yet it is indisputable that modern consumers consume many, if not all, of these communication media concurrently” (Reinold & Tropp, 2012: p. 119). Viljakainen (2013) refers to this differentiated, media-specific method as the silo approach—and reports that it is the approach used almost exclusively by both practitioners and scholars. She argues that the entrenched systems of media and measurement are resisting the shift to more holistic measurement alternatives that might benefit advertisers. More than 50 years ago, Christian and Ochs (1966: p. 59) observed that “the need for comparable audience data has never before been fulfilled,” though they argued that the concept should receive attention.

Measurement and reporting tools do exist for many media, but they remain separate and unconnected in the marketplace; Franz (2000: p. 461) bemoaned that “we live in a multi-source media research world”. Multiple practitioners in the industry have acknowledged and complained about the lack of cross-platform metrics.

Unfortunately for advertisers, media continue to be measured separately and individually—on scales developed primarily for those particular media and to the exclusion of other media. Multiple measurement organizations have historically laid vertical claim to various media and often provide virtually monopolistic or at least “semi-exclusive” delivery metrics for a given medium. For example, Nielsen Media Research is the dominant player in the television ratings industry in the U.S.; it provides ratings as a percentage of a designated market area or DMA. Arbitron (though now part of Nielsen) is the dominant player in the American radio industry; it provides ratings as a percentage of a different geographic construct—the area of dominant influence or ADI.

Google Analytics, owing partly to the company’s 2007 purchase of DoubleClick, is one of the major players in web measurement, along with Comscore (Rentrak). However, the online market is highly fragmented, and there are a significant number of players. Many digital publishers provide their own research measurement for advertisers, e.g., Facebook. In addition, streaming services such as Netflix and Amazon’s Prime Video also provide their own delivery metrics. Advertisers typically lack verifiable data and must simply trust (hope and pray) that the numbers are valid. However, throughout advertising history, those who purchase advertising have shown a clear preference for a “third party arbiter” of media delivery metrics (e.g., Nielsen or Arbitron) that can offer an element of impartiality.

“Multimedia understanding would seem to be critically important today, given that most distribution-based media measures are based on single media form identification” (Schultz, Block, & Raman, 2009: p. 5). However, most media struggle with these hurdles individually, sometimes as individual companies. Thus, the advertisers—the ones spending the money—are still left without a comparison metric for even the simplest measure: message delivery. Not only is delivery one of the simplest concepts, it is also arguably a precursor to all other measures of advertising effectiveness. Cross-platform delivery is a vital concern for both scholars and practitioners.

Advertisers are “breaking out of silos, planning cross platform campaigns that frequently mix traditional media with PR activities, sponsorships, events, product placements, and other forms of promotion” (McDonald, 2008: p. 316) . But how do advertisers measure their cross-platform advertising delivery?

There appears to be a growing industry-wide appetite for a metric that can be applied to multiple media. In 2008, the World Federation of Advertisers (WFA, 2008) introduced what it called a “blueprint for consumer-centric holistic measurement” in which it listed goals for future audience measurement. The blueprint called for consistent information and measurement to get superior insights on multi-media behavior. However, the WFA warned that disparate data sets will hinder multimedia measurement. Unfortunately, disparate data sets are the norm as advertisers and media researchers still “tend to observe, investigate and measure advertising in splendid isolation” (Kerr & Schultz, 2010: p. 563) .

Omni-channel advertising needs a common measurement system; unfortunately, the measurement industry still does not support that need. Meanwhile, the need continues to increase based on advertiser usage, consumer demand, and media proliferation. “Cross-channel advertising has grown steadily and significantly as a means to reach consumers. Television, the Internet, and other channels are used together to market products” (Laroche, Kiani, Economakis, & Richard, 2013: p. 431) .

Chang and Thorson (2004) advocate that marketers should apply a multiple-source strategy, since presenting information in varied contexts leads to ad messages being encoded in slightly different ways, which enhances mental retrieval ability and therefore increases awareness.

Scholars continue to recognize and point out the changes to media and their impact on advertising. “Profound changes in the media ecosystem mean renewed emphasis on multi-media campaign efficiency and effectiveness” (Romaniuk, Beal, & Uncles, 2013: p. 221; Assael, 2011) .

1.3. Theories Applied to Advertising

For an activity as prevalent and as intertwined with modern life as advertising, it is striking that the academic community has brought to bear no grand theory of advertising to unite the field. There are many theories applied to advertising, but they are generally borrowed from other fields, such as psychology, sociology, and even anthropology, as well as business, economics, and other fields of study (Rodgers & Thorson, 2012).

The most common theories applied to advertising research since 1980 are 1) dual-process models, including the elaboration likelihood model, 2) involvement (also known as engagement), 3) information processing theory, 4) interactivity, and 5) source credibility (Kim, Hayes, Avant, & Reid, 2014). The models used most often follow the hierarchy approach; hierarchy-of-effects models assume that cognition leads to affect and subsequently to behavior (Vakratsas & Ambler, 1999). These approaches focus on psychological aspects of advertising that can only begin after exposure to the message has occurred.

Integrated marketing communication (IMC), another concept and practice prominently emphasized in recent practice and scholarly literature, is also relevant to the current discussion. The IMC approach acknowledges that the lines between different marketing activities have begun to overlap. However, IMC scholars also lament that there is no measure for cross-media advertising expenditures.

Even as the advertising field has evolved and the media landscape has become more complex, a great deal of advertising research has remained over-focused on theories of persuasion and attitude change (Nan & Faber, 2004; Faber, Duff, & Nan, 2012); however, for advertising theory to move forward, scholars must study concepts that are particular to the field of advertising itself—and especially those that have the potential to address the complex realities that face practitioners today. One area in need of more research is delivery measurement.

As academics and practitioners alike struggle to find solutions to cross-platform advertising, they are aware that they lack data and tools that are up to the challenge. Multimedia or cross-media campaigns are used by advertisers to optimize the “effectiveness of their budgets by exploiting the unique strengths of each medium” (Voorveld, Neijens, & Smit, 2011: p. 69). Yet there remains a disconnect between the desire to utilize cross-platform advertising and the capability to evaluate the practice. Marketers lack the information they need to make intelligent media decisions in a complicated, multimedia environment (Taylor et al., 2013).

1.4. Advertising Impression Measurement

Advertising can be measured from multiple perspectives and in different ways. For example, a cost-to-reach metric based on impressions is used for some advertising media. However, that impression-based approach is not often used to compare delivery across multiple media to a specific target.
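To make the cost-to-reach idea concrete, here is a minimal sketch of the standard CPM (cost per thousand impressions) calculation applied to several media; the media names and figures are hypothetical, not data from the study.

```python
# Minimal sketch: comparing cost-to-reach across media with CPM.
# CPM = (cost / impressions) * 1000; all numbers below are hypothetical.

campaign_buys = {
    "broadcast_tv": {"cost": 50_000.0, "impressions": 2_500_000},
    "online_video": {"cost": 30_000.0, "impressions": 1_200_000},
    "display_banner": {"cost": 10_000.0, "impressions": 2_000_000},
}

def cpm(cost: float, impressions: int) -> float:
    """Cost per thousand impressions."""
    return cost / impressions * 1000

for medium, buy in campaign_buys.items():
    print(f"{medium}: CPM = ${cpm(buy['cost'], buy['impressions']):.2f}")
```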

1.5. Impression across Platforms

Any viable metric for advertising delivery must possess two simultaneous properties: 1) the potential to be broadly applied across multiple media platforms, and 2) the ability to account for individual exposures at a specific, individual level.

The impression meets these criteria. As a cross-platform measure, it has several benefits, including its elegant parsimony. The impression is uniquely suited to address what Smit and Neijens (2011: p. 124) call the essential advertising questions: “Audience research tries to answer questions such as, ‘How many people were exposed to my advertisement?’ and ‘How often were they exposed?’” Traditional reach and frequency do provide calculated estimates to try to answer these questions. However, in practice, the calculations are media-dependent and virtually impossible to apply across multiple media.
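For illustration, if impressions are logged as individual exposures, reach and average frequency can be derived from the same log, since total impressions equal reach multiplied by average frequency. The sketch below assumes a hypothetical per-person exposure log; it is not a description of any existing measurement system.

```python
# Hypothetical exposure log: one entry per (person, exposure), any medium.
exposures = ["p1", "p1", "p2", "p3", "p3", "p3", "p4"]

impressions = len(exposures)          # total exposures counted
reach = len(set(exposures))           # unique people exposed
avg_frequency = impressions / reach   # impressions = reach * avg_frequency

print(impressions, reach, round(avg_frequency, 2))  # 7 4 1.75
```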

Further, the basic currency in media planning is the number of people reached by an advertising message carried by a particular media vehicle. Advertising exposures (also called impressions) are “generally considered to be the most valid indicator for reach”; meanwhile, socio-psychological factors “such as persuasion and behavioral responses are not considered as valid because these are affected by factors beyond the control of the media, such as the attractiveness of the advertised product or service and the power of copy and artwork” (Smit & Neijens, 2011: p. 125).

Many have observed that it is critical to measure across not just media formats, but also across devices (Varan et al., 2013; Rodgers & Thorson, 2012) . Advertisers must have the ability to compare the delivery of their messages across that plethora of media options.

1.6. General Systems Theory

Although there is not a theory that aligns perfectly with impression measurement, there is a meta-theoretical perspective that can offer a valuable framework from which to begin: general systems theory.

Systems theory approaches are widely applied in the natural, social, and human sciences (Laszlo, 1972). A general systems theory (GST) perspective, advocated by von Bertalanffy (1972), offers significant potential for studying complex systems and processes such as advertising. Systems theory argues for the interrelatedness of concepts organized in a hierarchy of subsystems and supersystems (Smallwood, 1992). GST places great importance on both process and flow while considering the relationships of the parts of a system to each other and to their environments (inter-connectedness). Systems have three primary structural components: inputs, throughputs (also called transformations), and outputs. Further, systems may include smaller “subsystems” or be part of larger “supersystems” (also called “suprasystems”). By definition, systems have boundaries that separate them from their environments; in other words, there are things—concepts, elements, etc.—that are inside a given system as well as things that are outside it. GST thus offers an appropriate lens through which to examine processes such as advertising. Multiple theorists have applied systems theory to the study of organizations and organizational communication (Barnard, 1938; March & Simon, 1958; Farace, Monge, & Russell, 1977; Katz & Kahn, 1978; Cummings, Long, & Lewis, 1987; Hazleton, 1992; Smallwood, 1992; Leischow & Milstein, 2006; Colapinto & Porlezza, 2012). As such, there is theoretical justification for the use of GST in the study of the organizational communication activity of advertising.

Advertising and public relations can both be considered part of the larger concept of marketing. The so-called four P’s of marketing (price, place, product, and promotion) place advertising together with public relations in the category of promotion, thus linking the two closely together (Rodgers & Thorson, 2012). An example of how the meta-theoretical perspective of GST has been used in a field under the marketing umbrella and closely related to advertising is the public relations process model of Long and Hazleton (1987). This particular application of GST offered explanatory value for the present study.

The Public Relations Process Model

The public relations process model of Long and Hazleton (1987) proposes a theoretic and practical description and definition of public relations from a general systems theory approach (see Figure 1).

Figure 1. Public relations process model. Source: Long, L. W., & Hazleton, V. (1987).

The model includes five overlapping, interacting dimensions of the environment from the supersystem:

• Political/Legal—This dimension is characterized by the rules that govern organizational conduct and their enforcement, including legislation and judicial processes.

• Economic—This dimension includes financial and monetary resources and constraints.

• Competitive—The competitive dimension includes an array of competitors both internal and external to the industry.

• Technological—This dimension includes the technology, devices, and/or knowledge systems (software) impacting the organization.

• Social—Finally, the social dimension includes the public and the stakeholders of the organization.

The PR process model is a theoretical approach that specifically recognizes message delivery (as an output from the communication subsystem). “Physically, messages are tangible stimuli that can be perceived” (Long & Hazleton, 1987: p. 11). It is the actual perception of the message, the measurement of its delivery, and how that delivery is reported that are of most relevance to the present study. The PR process model provides an applicable framework from which to better understand message delivery. While the present study solicited input from industry leaders, a GST perspective, informed by the PR process model, proved helpful in contextualizing, organizing, and evaluating the results.

What’s missing from both academia and practice is a single method to quantify delivery and compare cost-to-benefit ratios of advertising delivery on a multi-platform basis. Without such a tool, advertising practitioners and scholars will continue to try to create comparisons that are essentially like trying to compare fractions without a common denominator. The use of the impression as a common metric across all media might offer a path forward to fill this gap.

1.7. Method

1.7.1. Design

This study utilized a qualitative approach to examine current and potential delivery metrics for major advertising-supported media from the academic and popular literature. It narrowed the focus to measurement of impression delivery as a single metric that may allow comparability of various media vehicles (both content and device). The study solicited opinions from a variety of industry practitioners, as well as scholars, in the form of semi-structured personal interviews. Finally, these perspectives were integrated into an evaluation of the advertising impression measurement approach for advertising delivery, culminating in a proposed advertising process model.

1.7.2. Conceptual Justification for Sampling Strata

Rodgers and Thorson (2012) proposed several elements as part of the advertising process, including three primary components: advertising organizations (agencies), message sources (advertisers), and channels (media). From the practitioner perspective, there is also a need to include measurement vendors and advocacy organizations, which are playing a leading role in developing advertising measurement (Gladys Yu, personal communication, March 16, 2016; Jason Darwin, personal communication, April 5, 2016). Further, to help cross the practitioner-scholar divide, including the perspectives of advertising and communication scholars may yield additional insights. Therefore, the present study solicited evaluative viewpoints from all sides of the advertising industry—calling on the five key groups listed below to help offer a holistic perspective for the overall advertising community, particularly as it relates to advertising measures of cross-platform delivery and opportunities:

1) Advertising buyers (agencies)

2) Advertising sellers (media)

3) Advertising clients (advertisers)

4) Advertising measurement companies/advocate organizations (vendors)

5) Academia (scholars who study advertising)

The goal was to obtain five to seven participants from each of these subgroups, to facilitate comparisons across the groups, and to synthesize a more holistic representation of the larger advertising community. As part of the effort to include a representative mix from across the industry, one objective was to achieve a mix of gender participation. Preference was given to participants who had at least 5 years’ experience in the field of advertising, as well as those working with multiple media. Additional preference was given to advertising clients with expenditures of more than $1 million per year on advertising utilizing multiple media. Participants were screened to meet these minimums and were also asked about their knowledge of and experience in media measurement. Screening questions included length of time in current role and experience dealing with media measurement in the positions for the strata they represented.

1.7.3. Sample Protocol

Participants were selected from the five key areas indicated above using the snowball method to fulfill a stratified, purposeful sample. This method aligns with recommendations by Coyne (1997) and Goulding (2005). Indeed, what Patton (2002) calls “purposeful” or “purposive” sampling is the “intended focus in qualitative sampling, and therefore a strength” (p. 230). The snowball sample began with industry contacts that the study author built over 25 years in the advertising industry. Faculty and personal contacts were asked to identify additional potential participants, and those participants, in turn, were asked to recommend other participants to fulfill sample goals.

Sample sizes are not often justified in qualitative research (Barnett, Vasileiou, Thorpe, & Young, 2015). According to Patton (2002), there are no hard and fast rules for sample size in qualitative inquiry. Mason (2010) argued that a qualitative sample should be large enough to attain saturation—the point at which no new concepts or significant ideas are added. Guest, Bunce, & Johnston (2006) found that saturation often occurred around 12 interviews. However, saturation can be difficult to pinpoint and is somewhat subjective. Further, as a practical matter, operationalizing a research study can be difficult without some level of data-gathering goals. In addition, Creswell (2012) suggested 20 - 30 interviews for grounded theory in particular. In keeping with the need to obtain sufficient representation from each of the strata, this study sought to satisfy both the saturation requirement and the grounded theory recommendation above. Therefore, the initial goal was to interview at least 25 to 30 participants distributed across the strata.

1.7.4. Description of Participants

Thirty-seven advertising professionals participated in the study, spread across the five identified strata. There were 13 females and 24 males; each of the five groups contained at least two females.

It is important to note that the opinions of the participants were theirs alone and not necessarily representative of the organizations and/or companies for which they were currently or previously employed. However, because the participants worked for some of the major advertising players in the United States—including Bank of America, Dr Pepper/Snapple, Nielsen, Group M, Turner Networks, Starcom, Facebook, Charter Communications, Domino’s, Yahoo, and others—they brought a high level of credibility to this study. Collectively, the participants and the organizations they represent accounted for as much as 20% of advertising spending, billing, and measurement worldwide. Other participants represented esteemed industry organizations, such as the Media Rating Council and the Interactive Advertising Bureau; these participants were key in that they provided a relatively unbiased perspective across multiple client groups and media platforms. The academic participants included some of the most respected and broadly published scholars in the field of advertising. The opinions of these professionals who work in, and study, the field of advertising were the primary units of analysis. A list of the participants (grouped by strata) appears in Table 1 below, along with the names of the organization(s) with which each participant had a significant amount of experience; most still worked at these organizations. Note that the average professional experience (or academic study) in advertising for the participants was in excess of 10 years. Virtually all served in fairly senior roles. Educationally, all of the participants had at least bachelor’s degrees. Many held master’s degrees, and two (beyond the scholars) also had PhDs. Interestingly, only a small handful (about 10%) of the 37 participants actually studied advertising or communication prior to entering the field.

The 37 participants worked with firms that had annual advertising revenues/expenditures ranging from approximately $500 million to nearly $30 billion.

1.7.5. Interviews

Semi-structured, in-depth interviews were used as the key method of field inquiry. Personal interviewing is an appropriate method for gathering rich, qualitative data (Creswell, 2012). Further, as recommended by Creswell (2009), an interview protocol was followed. The specific protocol included header information (date, participants, etc.), interviewer instructions (so that the same procedures were followed for each interview), the planned interview questions, probes for exploring some of the preplanned questions more fully, and a concluding statement of appreciation (Tuggle, 2014). This study used the qualitative technique of personal interviews with a small number of open-ended questions, which Creswell (2009) argued can successfully allow participants to share their experiences. The interview protocol and guide helped create both standardization (for similar experiences among participants) and flexibility (to enable the study to gain as much relevant information as possible).

In-depth interviews allowed the discovery of perspectives from the participants. Indeed, Baehr (2005) argued that interviewing may be one of the best methods of gathering data with the fewest inherent problems. Each interview was planned for approximately 60 - 90 minutes. Face-to-face interviews were the preferred format. However, participants were drawn from major advertising locations across the U.S. and internationally. Given the logistical concerns (costs and timeframe) of meeting with a wide variety of individual participants across a broad geography, priority was given to gaining access to participants wherever they were located. Therefore, telephone and/or Skype/webcam interviews were used on an as-needed basis—in fact, more often than face-to-face interviews. The interviews were recorded and transcribed, and detailed interview notes/field notes were taken. Each participant was offered confidentiality, and permission to use each participant’s information was secured. Each participant was also invited to complete a participant background questionnaire.

Table 1. Study participants by strata/organization.

For discussion, participants are referenced using a key of lastname-strata. The strata were Agency/Buyers = A; Media/Sellers = M; Advertisers/Clients = C; Researchers/Advocates = R; and Academics/Scholars = S. For example, a media buyer from an advertising agency who is named Smith would be referred to as Smith (A).

2. Results

2.1. Findings by Research Question

2.1.1. Research Question 1

How do advertising professionals view the advantages and disadvantages of the impression (CPM) as a single measure of cross-platform message delivery?

Susan Brami (M), regional vice president, sales at a major telecommunications advertising company, observed “at the very basic level, advertising measurement is how many eyeballs are seeing the advertisement.” This is a theme that runs throughout the study and was commented on in a similar fashion by multiple participants. Many participants also confirmed that advertising measurement is currently operationalized in many different, siloed, media-specific methods. But they also typically observed that measurement of some sort is critical to the process of buying and selling advertising. Overall—and notably—the participants expressed far more advantages than disadvantages to an impression-based measurement approach.

According to Debbie Basham (A), senior vice president, director of audio and video investment at MediaHub, the advertising industry needs “a currency, a metric to transact against”. And one of the primary questions for the advertising community is whether traditional media will continue to be siloed in their measurement or whether they will move to impression-based measurement (CPM).

Advantages. A large majority of the participants agreed that there were benefits to a holistic, cross-platform metric such as the impression. Benefits include simplicity, comprehensiveness, efficiencies, comparability, and ease of understanding. Gwen Throckmorton (M), head of industry at Facebook, observed that a single impression-based metric across advertising platforms “would absolutely be a benefit because you’ll be able to actually understand what the triggers are in people’s engagement and what actually causes people to convert”.

In addition, as multimedia campaigns are the norm, a single measure of advertising delivery like the impression can help buyers and sellers begin to learn how media act and work (or don’t work) together. According to advertising scholar Esther Thorson (S), professor of journalism at Michigan State University:

When you’re looking for the magic and elusive single measurement, it would allow you to then get a truly effective handle on the question, not only of how advertising in one medium works, but how an integrated communication plan works.

Art Salisch (M), with Hearst Argyle Broadcasting, added that a single impression-based advertising metric would have great value because it would enable cross-platform comparison. He commented, “It’s one of those things that inevitably make so much sense on the buying side”.

George Mafredas (A), senior partner, director of research at Group M, expressed the benefits of using the impression as a single, cross-platform media metric this way: “It puts everything on an even playing field. It’s as simple as that. It’s video; it’s no longer TV, digital. It’s video. It’s audio. It doesn’t matter what the delivery system is.” He added that “measurement that can measure across devices and give us the accurate impressions” would be “nirvana”.

Scott Hawkins (C), executive director of marketing, Lenovo Data Center Group, echoed that sentiment:

Everyone in the industry, particularly those who are funding the work and buying the media, would appreciate a common platform that could pull all of that [advertising delivery] in into one view.

Brami (M) added that multiple media measurement needs a single, specific metric, “I don’t see how you sell cross-platform without doing it by impression”.

Danielle Zazula (R), vice president of business development at Comscore, concurred. She observed, “you can’t get to the, ‘What happened next?’ if you don’t measure advertising [delivery].” As Romaniuk (S) put it, we must ensure “our message gets out to people because we can only ever have an effect on the people that we’re reaching”.

Sara Erichson (R), executive vice president of U.S. Media at Nielsen, is one of several ad professionals who believe the shift to impression-based measurement has already begun because of the simplicity of the metric.

Disadvantages. Jack Wakshlag (M), founder of Media, Strategy, Analytics, and Research, as well as former chief research officer for Turner Broadcasting and head of research for the WB Network, was one who did have a particular concern. He suggested that three factors (reach, frequency, and duration of message exposure) are “the fundamental measures of advertising”. He referred to these as “How many, how often, how long”. According to Wakshlag, there is a disadvantage to using the impression alone as a cross-platform metric because it doesn’t answer all three of those questions.

Separately, at least two of the participants suggested that more traditional television metrics might more appropriately be applied across media. For example, Basham (A) shared that buyers sometimes convert advertising schedules into impressions (CPMs), but planners sometimes convert everything to the traditional television measure of ratings (GRPs)—and clients are often presented cross-platform metrics in the form of GRPs. Further, Jenni Romaniuk (S), research professor at the University of South Australia, suggested:

TV’s been around a long time, it’s going to be around a long time. It may evolve, but as a medium it’s still in there. We have basic metrics for TV: reach, frequency, time spent viewing. I don’t see why they can’t be applied to every other medium.

Despite some concerns, the consensus was that impression-based delivery makes sense and that it should likely be the way advertising is measured.

2.1.2. Research Question 2

What are the challenges in creating, implementing, and adopting an impression-based measurement approach for multiple advertising media?

The participants suggested eight obstacles that might hinder the implementation and adoption of the impression as a single measure of advertising delivery.

Vested interests. One recurring theme in the area of challenges to moving to an impression-based advertising measurement approach across media was the existence of legacy systems and vested interests. Because advertising is a major economic force, there are immense financial implications to any change in the existing ecosystem. Duff (S) observed, “one of the problems is people always have to then see themselves as winners or losers.” Added Danaher (S), “There would definitely be losers and that’s probably why you haven’t seen it [significant movement away from siloed systems].”

Simply put: money is a powerful driver. Salisch (M) observed, “I think that everybody comes at it from their point of view of how they make their money and how their business works.” And with an advertising ecosystem spanning the globe, a host of extant business practices would likely be impacted. Not surprisingly, players are reluctant to lose whatever control they think they have. Changing the existing siloed structure will not be easy. According to Kahn (A) “You’d really have to have buy-in from every single media partner that you would want to do this with. And obviously, they all have something else in mind. They want to sell things the way that it looks best for them, which is one of the reasons why, as an agency, you’re very careful to use research that isn’t always sponsored by that particular group.”

Taneja (S) asserted “opinions from within the industry are colored by the part of the ecosystem they represent. …according to the Silicon Valley, if you stop your allegiance to how advertising was measured and this measurement [impressions] was acted upon in traditional media, they really think that they have it all solved.”

Although virtually every participant expressed dissatisfaction with the current measurement system, they were also somewhat leery of alternatives. Director of Advertising for Hendrick Automotive Group Brian Johnson’s (C) response was typical: “It’s just the ambiguity, right? It’s just the unknown.”

Harb (M), director of national sales at Time Warner Cable Media, added that advertising agency structure itself is “an obstacle to broadening the horizon and looking more cross-platform, even though clients probably would want to”.

Standards. Currently, there are different standards for how different media are measured and reported. “The bar is not the same for everyone. The standard that people may hold for TV, that they may hold for billboards, is much lower than it is for digital,” said Throckmorton (M). Even the amount of time that an advertisement must be seen to count as an impression differs across media. Blaise D’Sylva (C), vice president of media at Dr Pepper Snapple Group, further lamented the inconsistency:

YouTube or Hulu who says, “Hey, we’ll measure 30 seconds,” and you’ve got Facebook who says, “We’ll measure three,” and then you have Snapchat who says, “We’re just measuring an impression, so the second it comes up.” So you’ve got all these different pieces and how should they be measured.

The discrepancies in minimum viewing standards across media types are certainly a barrier to a consistent, or “fair”, cross-platform measurement system.
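As an illustration of how divergent standards change the count, the sketch below applies three different minimum-duration thresholds, echoing the participant’s characterization above, to the same hypothetical set of exposures; the thresholds and durations are assumptions for illustration, not the platforms’ official specifications.

```python
# Same hypothetical exposures, counted under three different minimum-duration
# standards (seconds a video ad must play before it counts as an impression).
# Thresholds follow the participant's characterization; they are illustrative,
# not any platform's official specification.

exposure_durations = [0.5, 2.0, 3.0, 10.0, 31.0]  # seconds watched, hypothetical

standards = {"30-second standard": 30.0, "3-second standard": 3.0, "immediate": 0.0}

for name, min_seconds in standards.items():
    counted = sum(1 for d in exposure_durations if d >= min_seconds)
    print(f"{name}: {counted} of {len(exposure_durations)} exposures count")
```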

But there are those who are trying to level the playing field. George Ivie (R), CEO and executive director of the Media Rating Council, commented, “One of the hardest things that we’re undertaking with these standards, is setting the processes for deduplicating them”.

Comparison. Another challenge identified by several of the participants was how to aggregate and compare impressions from different media. Jacobowitz (M) observed that there is no vetted approach to combining impressions, “no accepted currency cross-media impression methodology”. He asked, “How are researchers supposed to compare cross-platform impressions on an even playing field when some of the metrics aren’t even in the stadium?”

Meyer (A) observed:

I’d like to believe that there are fundamental building blocks that are identical across media, and if you look at the root of it, going back to what I said initially, I think you can talk about individual impressions, and those are pretty consistent across media types.

Erichson (R) commented:

Having comparability metrics, I think, is by far the number one priority as you talk about the differences across platforms. I think that really trumps everything else. Every medium is measured a little bit differently, and that’s okay because each platform is different. So tailoring the measurement of the platform is okay to do. And whether you’re Nielsen or someone else, that makes all the sense in the world to do.

But how will cross-platform data be put together? The ability to combine, consolidate, and compare impressions from various media and devices is one of the critical needs to enable impression-based measurement.
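As one hedged sketch of what “putting the data together” could involve, assume each platform can supply an exposure log keyed to a common person identifier (a large assumption in practice, as the data-access discussion below makes clear). Combining the logs gives total cross-platform impressions, and deduplicating the identifiers gives an estimate of cross-platform reach.

```python
# Hypothetical per-platform exposure logs keyed by a shared person ID.
# In practice such a common identifier rarely exists; that gap is the crux
# of the deduplication problem the participants describe.

platform_logs = {
    "tv":      ["p1", "p2", "p2", "p3"],
    "digital": ["p2", "p3", "p3", "p4"],
    "radio":   ["p1", "p4", "p5"],
}

total_impressions = sum(len(log) for log in platform_logs.values())
deduplicated_reach = len(set(pid for log in platform_logs.values() for pid in log))

print(f"Total cross-platform impressions: {total_impressions}")    # 11
print(f"Deduplicated reach (unique people): {deduplicated_reach}")  # 5
```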

Data access. Metrics must be accepted, comparable, and available. For example, the impression data that Facebook and Google maintain are not made available to other media or advertisers, and keeping these data privately held inhibits impression-based advertising. Throckmorton (M) said:

The biggest obstacle that I can think of is, what is going to be the measuring body, the Nielsen of the world that actually has, or multiple vendors, that actually provide a consistent methodology that everybody’s willing to sign up for.

Romaniuk (S) added, “lack of transparency makes it really, really hard for someone to be confident in those systems.” Open access to media viewership data is an accepted part of the advertising ecosystem for most legacy media. However, online media do not always treat such data as something that should be shared. Fielding (A) noted “the Googles and the Facebooks and the Amazons of the world, which do not view it [data] in that way. And in fact, they almost view it in the opposite way.”

Definition of impression. Another challenge is that there is disagreement as to exactly what an impression is. The simple definition of an impression as a single exposure sounds straightforward enough, but, according to some, it may be difficult to operationalize. Schultz (S), professor (Emeritus-in-Service) of integrated marketing communications at Northwestern University, argued that we really do not have a functional definition of an impression. Pearson (A) claimed that one of the most difficult things would be getting the different media to agree upon, “this is what TV will call an impression, this is what outdoor will call an impression, this is what digital will call an impression”. (Such concerns often refer to Wakshlag’s duration component.)

Complexity of media environment. The plethora of media options and devices for audience consumption and advertising use continues to expand and become more complex. According to Ivie (R), “the biggest one [challenge] is just that consumers are getting much more complex to measure. They have many more devices. The average person has four or five connected devices on their person and in their household today.”

Further, the measurement infrastructure has not been able to keep pace with the change. Erichson (R) commented, “Our network clients have been concerned for a long time that as they go out there talking about the size of their audience to their programming on TV, that those numbers from Nielsen are missing a portion of their viewing because people are increasingly watching TV on different platforms right now and through different devices.” Danaher (S) agreed that complexity regarding measurement and technology is one of the primary challenges the industry faces. Schultz (S) also agreed, commenting, “Multiple impressions coming from incredible numbers of resources, and people talking to each other, social media and all those kinds of things, and so what we’ve got are old-time models and radically different systems and situations that people are using today.”

Relative valuations. Even if common standards could be developed and industry players could compare media to each other, there would still be the question of economic valuation. Many participants commented on the challenge of how to assign value to impressions from different media. Taneja (S) suggested, “the biggest struggle with online measurement has been how to establish equivalence with the ways we were doing this for traditional media.” The worry is that an impression with video and sound is probably worth more than, say, an impression from a static banner ad. A similar question might be: “What is the worth of a sound-only radio impression compared to the worth of a visual-only impression from a magazine or a newspaper?” Many participants questioned how relative valuation would be handled. Most agreed with Hawkins (C), who expressed his opinion that “some portion of those common-definition impressions would be more valuable than others”. Richard Fielding (A), strategic media consultant and former vice president/director of the global research group for Starcom/MediaVest, put it succinctly: “all impressions are not created equal.”

Denise Dobyns (C), senior manager of customer relationship marketing at Electrolux, observed “I think using impressions to measure across media is a good idea. You would just have to know that they couldn’t be treated equally.” Andrew Deming (C), senior communications strategy and brand manager at Bank of America, agreed, and his company has already begun considering advertising using impressions. “We’re just converting everything to estimated impression levels, and we’re explaining it out that way. The tutorials now change to, ‘Not all impressions are created equal,’” he said.
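A minimal sketch of the “not all impressions are created equal” idea: impressions from each medium are multiplied by an equivalence weight before they are combined or compared. The weights and counts below are invented for illustration; agreeing on such weights is precisely the unresolved valuation question the participants raise.

```python
# Hypothetical equivalence weights relative to a full sight-sound-motion
# video impression (weight 1.0). All numbers are illustrative only.
weights = {"tv_video": 1.0, "online_video": 0.9, "radio_audio": 0.5, "static_banner": 0.2}

raw_impressions = {"tv_video": 1_000_000, "online_video": 800_000,
                   "radio_audio": 1_500_000, "static_banner": 3_000_000}

weighted = {m: raw_impressions[m] * weights[m] for m in raw_impressions}
print(weighted)
print("Weighted total:", sum(weighted.values()))
```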

Cost. There is a cost to any form of data measurement—and one challenge acknowledged by various study participants is that sometimes the cost can be prohibitive. According to Robert Winston (A) “from a very practical standpoint, measurement is expensive… It’s very, very expensive.” Impression-based measurement is not used for many media currently, so it would have to be created. Who will bear this additional cost? This would be an additional cost in an era when advertising agencies have reduced spending. As Thorson (S) observed, “It used to be that ad agencies all had a research department, now none of them have a research department.” While this may be somewhat of an exaggeration, her point is that resource constraints in the agency community emphasize the relevance of the cost challenge.

2.1.3. Research Question 3

What additional considerations will need to be addressed regarding media measurement in light of the dynamic media environment?

The participants offered seven supplemental considerations for impression measurement that generally dealt with issues beyond the actual measurement.

Scale. Buchheim (R) observed that the desire to measure cross-platform has to take into account the volume of data and the differences across media and platforms.

Let’s take for granted you want to measure it, you’re motivated to measure it; how do you do it when the interaction models are very different? Whether talking about ads and apps, which can be a little bit different, or video ads versus audio ads, versus something you see on the TV, versus just on your tablet or a phone versus the desktop, versus a laptop. It’s all different. And I think that has become almost paralyzing.

The reaction of “analysis paralysis” is certainly a risk as media, platforms, and data continue to proliferate. Johnson (C) commented “I don’t know if the metric is the challenge. It’s the data… it’s unlocking the data and harnessing the data that’s the challenge.” The huge scale of the data (aka “big data”), as well as the variability of data types and measurement methods, can be bewildering to even seasoned researchers and data scientists.

Data science vs. media & marketing. Several participants suggested that dealing with “big data”—the specialty of data science—brings both benefits and challenges for media. On one hand, there are traditional marketers; on the other hand, there are the data scientists. The two tend to have different perspectives on the world. The data scientists (more likely to be from the digital realm) think that as long as you can get enough data, you can get a good answer. Meanwhile, the marketers tend to be a little more skeptical; they ask why, and what specifically is being measured.

For example, Jeff Boehme (R), chief client officer at Rentrak, commented that “the rise of the data scientist is important and necessary but data scientists are not by definition researchers, and the problem is interpreting the data. So that you can have good data scientists, understand everything about the data set but have no idea about the practical application of it.” Further, Schultz (S) suggested that “the data scientists have brought a lot of power, a lot of number crunching ability to understand big data, to the forefront. But they often tend to lack some of the subtleties of media.” The participants saw opportunities for the data scientists and the media and marketing researchers to work together—as well as competition between the groups.

According to Romaniuk (S), the fallacy is the belief that if you have enough data, you can solve any problem—“Right, which is not the case. Big, biased data is not better than small unbiased data”.

One way to begin to address the disconnect between data scientists and media/marketing professionals was offered by Hayes (S), assistant professor of advertising and public relations at the University of Alabama. He suggested that academics should develop programs to train communication people in data science to enable them to tell the marketing story.

Media evolution. Media and platforms are changing at such a rapid pace that it is difficult for many to keep up. In this “app every other day” environment of new and emerging media, platforms, and devices, measurement is often thought of well after the latest technology is launched—when there is a subsequent attempt to monetize the new app, or new media, or the device. As media evolve, there are a host of new challenges for those who are interested in measurement.

Participants commented on the difference between traditional media and new forms of media like “search” (e.g., Google, Yahoo, Bing, and other Internet paid search utilities) that are user-driven. Brami (M) observed:

Historically, media for the most part has been other information, entertainment that people consume. And ads are integrated somehow; you’re looking at something else, to get you to look at the ad.

Garramone (M) suggested that patterns of media and platform usage would likely emerge over the next three years and that the media themselves will be participants in the measurement evolution.

Schultz (S) claimed that changes in media are going to happen even more quickly, and that many in the industry are unprepared:

You’re going to be talking to a lot of people who believe it’s going to change maybe gradually and that, “I’ll have time to adapt. I’ll have time to adjust”, and historically, they have, but I’m not sure they’re going to have that in the future.

Fielding (A) sees the major digital players as very different from traditional media. He explained:

One of the issues is you’ve got what I would say is the emergence of these new platforms that are ecosystems, but they also are media… They are media, but they’re a lot of other things as well.

Such consolidation of all aspects of media, content, delivery, and measurement presents both challenges and opportunities to an industry that relies on data and openness. It seems clear that media will continue to evolve. It is also likely that new measurement schemes will be proposed. Further, questions of data accessibility are not going away. Adopting a simple, comparable, cross-platform metric such as the impression might help the advertising industry manage the evolution.

Walled garden. Another theme that emerged from the interviews was the idea that the current media environment is characterized by some parties that are unwilling to share data. This evolution of some (especially digital) media entities into self-contained systems that do not share data has impacts across the advertising and measurement ecosystem. Wakshlag (M) observed:

In the television world, everybody uses a third party, and it’s a syndicated data system. So NBC knows exactly how many people watch CBS, or at least they have an estimate, a reasonably good estimate. The problem on the digital side is nobody knows what anyone else is doing.

Lack of information affects audience estimates, share of voice, and other metrics. But a few powerful players are impacting media and measurement in a way that differs from what has historically been the case. For example, as Fielding (A) observed, “Apple is a huge walled garden because that’s their fundamental business philosophy. They will not and they do not share data, and they won’t.”

Creative. Several participants suggested that the content of advertisements needs to evolve to be more relevant to consumers.

For example, Serena Lal (M), director of demand strategy at Yahoo, recommended that publishers should “create content and ads that their consumers want to see”. In short, the relevance of the content matters. Ivie (R) agreed: “We need to deliver content and advertising that’s more relevant to the consumer in a targeted way, if we want to keep consumers interested.”

Fraud. Almost every study participant from across the advertising ecosystem addressed the topic of fraud. Fraud consists of impressions that aren’t real—or aren’t from real people. They could be so-called bot traffic, bad data, or outright misleading reporting. Digital media are especially susceptible to this challenge. One study participant [name withheld upon request for this comment] referred to the digital advertising arena as a “cesspool of fraud”.

Dobyns (C) commented, “I think that we need to get a lot more transparency in how that information is captured.” Concerns abound from all of the industry segments. For example, Basham (A) asserted that the industry needs “to be able to do a whole lot better job of measuring the fraud and knowing what it is”. Buchheim (R) noted the insidious impact of fraud was that “you can’t have accurate measurement if a good chunk—or really any significant portion—of the ads you’re delivering are fraudulent”. Most of the industry professionals in the study were so aware of inaccurate reporting that they incorporated it into their planning and buying. For example, Hawkins (C) commented “It’s kind of a given in the industry that you have to assume some of that risk when you’re investing in digital advertising.” Fraud—or lack of reliability—is an important concern among advertisers.
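Hawkins’ comment that buyers “assume some of that risk” suggests one simple planning adjustment: discounting reported impressions by an assumed invalid-traffic rate. The sketch below illustrates that adjustment; the rates and counts are hypothetical and are not drawn from the study.

```python
# Hypothetical adjustment for suspected invalid (fraudulent/bot) traffic:
# net impressions = reported impressions * (1 - assumed invalid rate).
reported = {"digital_display": 5_000_000, "online_video": 2_000_000}
assumed_invalid_rate = {"digital_display": 0.15, "online_video": 0.08}  # illustrative

for medium, gross in reported.items():
    net = gross * (1 - assumed_invalid_rate[medium])
    print(f"{medium}: reported {gross:,}, estimated valid {net:,.0f}")
```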

Bright, shiny, new things. Some participants noted that there is also a danger that new technology (the flavor of the moment) can be distracting; it can get in the way of advertising and measurement. Throckmorton (M) expressed the concern this way: “I just think it’s going to be really easy to get fascinated with the bright, shiny toy, with a bright, shiny thing.” Along those lines, Wakshlag (M) offered that advertising professionals need to be careful to “avoid, what I call, the bright shiny objects, because we’re fascinated by bright shiny objects.” In addition, Zazula (R) observed that the new tools/opportunities/media keep coming. “You name it,” she said, “there’s so much ad technology out there that we get so caught up with shiny new objects to the left.” And one could add those on the right, in the middle, etc. Much of media is at the forefront of the digital evolution/revolution—and technology plays a major role in new forms of media and delivery devices. As such, bright, shiny distractions are likely to continue.

3. Advertising Process Model (Figure 2)

The logical next step is to propose an initial version of an advertising process model (APM). As with any model, there is a set of assumptions that should be clearly identified (see Figure 2).

Figure 2. Advertising process model with subsystem components. Source: Smallwood, 2018.

First, the APM considers advertising to be a process from the meta-theoretical perspective of GST. Second, advertising is considered to be an organizational communication activity. Scholars agree that advertising is a form of communication (Stern, 1994; Rodgers & Thorson, 2012). Third, this model also agrees with Rodgers and Thorson (2012) that advertising exists under the larger concept of marketing. Fourth, unlike many theoretical approaches to advertising, the APM does not take a purely psychological approach to advertising (although the model does allow for psychological approaches in the audience subsystem); it approaches the subject from a more holistic and objective viewpoint that incorporates both sociological and psychological perspectives. Fifth, the APM utilizes the definition of advertising laid out early in this study.

The APM has its origins in the approach used by the PR process model. As such, it incorporates the relevant applications of the study data that fit so well with that model. One can clearly see the obvious high-level similarities of an advertising process model to the PR process model. However, the APM seeks to modify the approach to fit advertising’s specific needs, strategies, goals, inputs, terminology, and measurement concerns in today’s dynamic media environment more fully. In addition, the APM utilizes elements similar to those recognized by Nan and Faber (2004) for communication-based advertising theoretical constructs, including source, message, media, reception, and feedback.

The APM is needed to represent the overall advertising process, both conceptual and applied, including all facets of practice and scholarly research. Because the APM conceptualizes advertising at a broad level, it can incorporate all types of advertising, including commercial advertising, political advertising, advertising to children, and any other types of advertising. All of these, and others, can be considered within the APM.

4. Discussion and Analysis

The findings of this study suggest two overarching implications for the topic of advertising delivery: the need for cross-platform measurement and the obstacles to meeting that need.

First, the advertising industry, across all of its varied segments (including scholars), has expressed a clear and convincing need for cross-platform measurement. The impression, as examined in this study, offers a potential path toward addressing that need. It would therefore be worthwhile to commission studies, convene panels, and form committees to propose ways to implement and expand the use of advertising impression measurement. For example, buyers and sellers might ask their software developers and data providers to incorporate impressions and impression-based buying into their platforms, and agencies might present cross-platform delivery results to clients in the form of impressions across all media, as sketched below.
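To make the idea concrete, the following is a minimal sketch of what impression-based cross-platform reporting might look like in practice. It assumes hypothetical per-channel delivery logs; the channel names, field names, figures, and the report_impressions function are illustrative, not drawn from any participant’s system or from the study data. The point is simply that a single count of delivered exposures can be rolled up across otherwise siloed media.

```python
# Illustrative sketch (hypothetical data): rolling up delivered impressions
# across media channels into one cross-platform report.
from collections import defaultdict

# Each record is one delivery report from a channel's ad server or measurement source.
delivery_logs = [
    {"channel": "linear_tv", "campaign": "spring_launch", "impressions": 1_200_000},
    {"channel": "streaming_video", "campaign": "spring_launch", "impressions": 450_000},
    {"channel": "social", "campaign": "spring_launch", "impressions": 900_000},
    {"channel": "display", "campaign": "spring_launch", "impressions": 300_000},
]

def report_impressions(logs):
    """Sum delivered impressions per campaign and per channel."""
    by_campaign = defaultdict(int)
    by_channel = defaultdict(int)
    for record in logs:
        by_campaign[record["campaign"]] += record["impressions"]
        by_channel[record["channel"]] += record["impressions"]
    return by_campaign, by_channel

if __name__ == "__main__":
    totals, channel_mix = report_impressions(delivery_logs)
    for campaign, total in totals.items():
        print(f"{campaign}: {total:,} cross-platform impressions")
    for channel, count in channel_mix.items():
        share = count / totals["spring_launch"]
        print(f"  {channel}: {count:,} ({share:.0%} of delivery)")
```

The arithmetic itself is trivial; whether such a roll-up is meaningful depends on the very issues the participants raised (shared definitions, data access, comparability, and fraud-free inputs).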

Second, the study uncovered significant challenges to implementing a cross-platform measure such as the impression. Eight categories of challenges suggested by the participants include vested interests, setting agreed-upon standards, comparison, data access, definitions, the complexity of the media environment, relative valuations, and cost. Seven additional obstacles proposed by the participants should also be considered: scale, data science vs. media & marketing, media evolution, walled gardens, creative content, fraud, and bright, shiny, new things. Collectively, these 15 challenge areas are formidable, but the desire for efficiencies is also strong. Siloization takes many forms, and ultimately it bogs down systems; layers of vested interests serve to maintain existing monopolies, affecting revenue and restricting both the flow of data and innovation.

There are many additional steps that could be taken to address these challenges. For example, standards bodies (such as the MRC) can continue to develop standards around impressions. Clients could push for open access to important delivery data. Entities of all kinds (including educational institutions) could invest in data science and media & marketing efforts that work together. Further, investing in creative content that customers might actually want to see (and possibly engage with) is a worthwhile way to improve advertising.

Fundamentally, advertising clients hold the “golden keys”: they can refuse to spend their advertising dollars with entities (media, agencies) that do not shift to holistic measures such as the impression, or that do not provide reasonable access to data. If this begins to happen, measurement providers would in turn be tasked by media and agencies to report cross-platform delivery in the form of impressions.

Thus, there is a communicated need—but also specific challenges that threaten to inhibit progress toward a solution. These are two important implications that this study has helped to delineate and categorize.

Final remarks. Four observations are worth noting. First, recruiting and categorizing the study participants into five groups proved useful for this study in several ways. However, the lack of discreteness in the participants’ professional experiences (and in their responses to questioning) across these categories was considerable and somewhat surprising. Both practitioners and, to some extent, scholars sometimes migrated across group lines over time; this exposure to multiple groups may have led to more agreement in responses than might otherwise have been expected. Still, participants from each group offered many special or unique insights based on their experiences.

Second, “siloization” of media and measurement remains a serious and powerful impediment to change. Fortunes have been (and will likely continue to be) made in media—often strongly fueled by advertising. Those who have current income streams are highly motivated to maintain their positions and views of the best way to measure advertising delivery.

Third, as Ockham’s razor suggests, the simplest solution is often the best because it provides “the straightest possible path to the truth” (Kelly, 2007). Advertising impression measurement offers the most parsimonious and straightforward answer yet devised to the challenge of cross-platform advertising delivery measurement.

Fourth, viewing advertising from a systems perspective presents a viable path for conceptual articulation and theory building, and the Advertising Process Model offers a first step down that path.

Acknowledgements

I appreciate the participation of advertising professionals and scholars without whom this article could not have been created.

Statement of Contribution

Impressions can serve as an “inclusive” metric for advertising across media. The implications for marketing management concern how marketers measure advertising (or, more broadly, messaging). Impression-based measurement will affect aspects of advertising from planning and resource allocation to measurement and evaluation. It has the potential to shift revenue share across advertising vehicles and to expedite the next generation of media buying and analytics software. The ability to understand and compare performance across all types of advertising will be essential for marketing leaders. Finally, the conceptual model offers a foundation for developing advertising theory regardless of platform.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Assael, H. (2011). From Silos to Synergy. Journal of Advertising Research, 51, 42-58.
https://doi.org/10.2501/JAR-51-1-042-058
[2] Baehr, K. J. (2005). Converged Media Ratings: Towards a New Method of Measuring Media Use (Doctoral Dissertation). Retrieved from ProQuest.
[3] Barnard, C. (1938). The Functions of the Executive. Harvard University Press.
[4] Barnett, J., Vasileiou, K., Thorpe, S., & Young, T. (2015). Justifying the Adequacy of Samples in Qualitative Interview-Based Studies: Differences between and within Journals, Symposium: “Quality in Qualitative Research and Enduring Problematics”.
http://www.bath.ac.uk/sps/events/Documents/27_jan_2015_slides/julie_barnett.pdf
[5] Chang, Y., & Thorson, E. (2004). Television and Web Advertising Synergies. Journal of Advertising, 33, 75-84.
https://doi.org/10.1080/00913367.2004.10639161
[6] Christian, R. C., & Ochs, M. B. (1966). Audience Measurement Concepts for Industrial Publications. Journal of Marketing, 30, 59-61.
https://doi.org/10.1177/002224296603000113
[7] Colapinto, C., & Porlezza, C. (2012). Innovation in Creative Industries: From the Quadruple Helix Model to the Systems Theory. Journal of the Knowledge Economy, 3, 343-353.
https://doi.org/10.1007/s13132-011-0051-x
[8] Coyne, I. T. (1997). Sampling in Qualitative Research. Purposeful and Theoretical Sampling; Merging or Clear Boundaries? Journal of Advanced Nursing, 26, 623-630.
https://doi.org/10.1046/j.1365-2648.1997.t01-25-00999.x
[9] Creswell, J. W. (2009). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (3rd ed.). Thousand Oaks, Sage Publications.
[10] Creswell, J. W. (2012). Qualitative Inquiry and Research Design: Choosing among Five Approaches. Thousand Oaks, Sage Publications.
[11] Cummings, H., Long, L., & Lewis, H. (1987). Managing Communication in Organizations: An Introduction (2nd ed.). Gorsuch-Scarisbrick, Publishers.
[12] Faber, R., Duff, B., & Nan, X. (2012). Coloring outside the Lines: Suggestions for Making Advertising Theory More Meaningful. In S. Rodgers, & E. Thorson (Eds.), Advertising Theory (pp. 18-32). Routledge.
[13] Farace, R., Monge, P., & Russell, H. (1977). Communicating and Organizing. Addison-Wesley.
[14] Franz, G. (2000). The Future of Multimedia Research. International Journal of Market Research, 42, 459-472.
https://doi.org/10.1177/147078530004200407
[15] Goulding, C. (2005). Grounded Theory, Ethnography and Phenomenology: A Comparative Analysis of Three Qualitative Strategies for Marketing Research. European Journal of Marketing, 39, 294-308.
https://doi.org/10.1108/03090560510581782
[16] Guest, G., Bunce, A., & Johnson, L. (2006). How Many Interviews Are Enough? An Experiment with Data Saturation and Variability. Field Methods, 18, 59-82.
https://doi.org/10.1177/1525822X05279903
[17] Hazleton, V. (1992). Toward a Systems Theory of Public Relations. In Ist Public Relations eine Wissenschaft? (pp. 33-45). VS Verlag für Sozialwissenschaften.
https://doi.org/10.1007/978-3-322-85772-9_3
[18] Katz, D., & Kahn, R. L. (1978). The Social Psychology of Organizations. John Wiley & Sons.
[19] Kelly, K. T. (2007). Ockham’s Razor, Empirical Complexity, and Truth-Finding Efficiency. Theoretical Computer Science, 383, 270-289.
https://doi.org/10.1016/j.tcs.2007.04.009
[20] Kerr, G., & Schultz, D. (2010). Maintenance Person or Architect? The Role of Academic Advertising Research in Building Better Understanding. International Journal of Advertising, 29, 547-568.
https://doi.org/10.2501/S0265048710201348
[21] Kim, K., Hayes, J. L., Avant, J. A., & Reid, L. N. (2014). Trends in Advertising Research: A Longitudinal Analysis of Leading Advertising, Marketing, and Communication Journals, 1980 to 2010. Journal of Advertising, 43, 296-316.
https://doi.org/10.1080/00913367.2013.857620
[22] Laroche, M., Kiani, I., Economakis, N., & Richard, M. O. (2013). Effects of Multi-Channel Marketing on Consumers’ Online Search Behavior. Journal of Advertising Research, 53, 431-443.
https://doi.org/10.2501/JAR-53-4-431-443
[23] Laszlo, E. (1972). The Systems View of the World: The Natural Philosophy of the New Developments in the Sciences. George Braziller.
[24] Leischow, S. J., & Milstein, B. (2006). Systems Thinking and Modeling for Public Health Practice. American Journal of Public Health, 96, 403-405.
https://doi.org/10.2105/AJPH.2005.082842
[25] Long, L. W., & Hazelton, V. (1987). Public Relations: A Theoretical and Practical Response. Public Relations Review, 13, 3-13.
https://doi.org/10.1016/S0363-8111(87)80034-6
[26] March, J., & Simon, H. (1958). Organizations. John Wiley.
[27] Mason, M. (2010). Sample Size and Saturation in PhD Studies Using Qualitative Interviews. In Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 11(3).
http://www.qualitative-research.net/index.php/fqs/article/view/1428/3027
[28] McDonald, S. (2008). The Long Tail and Its Implications for Media Audience Measurement. Journal of Advertising Research, 48, 313-319.
https://doi.org/10.2501/S0021849908080379
[29] Nan, X., & Faber, R. (2004). Advertising Theory: Reconceptualizing the Building Blocks. Marketing Theory, 4, 7-30.
https://doi.org/10.1177/1470593104044085
[30] Patton, M. Q. (2002). Qualitative Research and Evaluation Methods (3rd ed.). Thousand Oaks, Sage Publications.
[31] Reinold, T., & Tropp, J. (2012). Integrated Marketing Communications: How Can We Measure Its Effectiveness? Journal of Marketing Communications, 18, 113-132.
https://doi.org/10.1080/13527266.2010.489334
[32] Rodgers, S., & Thorson, E. (2012). Advertising Theory. Routledge.
https://doi.org/10.4324/9780203149546
[33] Romaniuk, J., Beal, V., & Uncles, M. (2013). Achieving Reach in a Multi-Media Environment: How a Marketer’s First Step Provides the Direction for the Second. Journal of Advertising Research, 53, 221-230.
https://doi.org/10.2501/JAR-53-2-221-230
[34] Schultz, D. E., Block, M., & Raman, K. (2009). Media Synergy Comes of Age—Part I. Journal of Direct, Data and Digital Marketing Practice, 11, 3-19.
https://doi.org/10.1057/dddmp.2009.13
[35] Smallwood, E. E. (1992). Perceptions and Resources: A Test of the Public Relations Process Model. Unpublished Master’s Thesis, Illinois State University.
[36] Smallwood, E. E. (2018). Advertising Impression Measurement: An Evaluation of Cross-Platform Advertising Delivery (Publication No. 10784581). Doctoral Dissertation, Regent University, ProQuest Dissertations and Theses Global.
[37] Smit, E. G., & Neijens, P. C. (2011). The March to Reliable Metrics: A Half-Century of Coming Closer to the Truth. Journal of Advertising Research, 51, 124-135.
https://doi.org/10.2501/JAR-51-1-124-135
[38] Stern, B. B. (1994). A Revised Communication Model for Advertising: Multiple Dimensions of the Source, the Message, and the Recipient. Journal of Advertising, 23, 5-15.
https://doi.org/10.1080/00913367.1994.10673438
[39] Taylor, J., Kennedy, R., McDonald, C., Larguinat, L., El Ouarzazi, Y., & Haddad, N. (2013). Is the Multi-Platform Whole More Powerful than Its Separate Parts? Measuring the Sales Effects of Cross-Media Advertising. Journal of Advertising Research, 53, 200-211.
https://doi.org/10.2501/JAR-53-2-200-211
[40] Tuggle, M. N. (2014). Exploring the Role of Self-Directed Learning in Sales Professionals: A Qualitative Study. Regent University.
[41] Vakratsas, D., & Ambler, T. (1999). How Advertising Works: What Do We Really Know? Journal of Marketing, 63, 26-43.
https://doi.org/10.1177/002224299906300103
[42] Varan, D., Murphy, J., Hofacker, C. F., Robinson, J. A., Potter, R. F., & Bellman, S. (2013). What Works Best When Combining Television Sets, PCs, Tablets, or Mobile Phones? Journal of Advertising Research, 53, 212-220.
https://doi.org/10.2501/JAR-53-2-212-220
[43] Viljakainen, A. (2013). Show Me the Money! The Quest for an Intermedia Currency in the Nordic Countries. Journal of Media Business Studies, 10, 41-63.
https://doi.org/10.1080/16522354.2013.11073567
[44] Von Bertalanffy, L. (1972). The History and Status of General Systems Theory. Academy of Management Journal, 15, 407-426.
https://doi.org/10.2307/255139
[45] Voorveld, H. A., Neijens, P. C., & Smit, E. G. (2011). Opening the Black Box: Understanding Cross-Media Effects. Journal of Marketing Communications, 17, 69-85.
https://doi.org/10.1080/13527260903160460
[46] WFA (2008). Blueprint for Consumer-Centric Holistic Measurement. World Federation of Advertisers.
http://www.wfanet.org/media/pdf/Blueprint_English_June_2008.pdf
