When do Arbitron ratings come out?
It has emerged as another medium that splits the advertising revenue pie, as Figure 1 shows. At the same time, each medium is becoming increasingly complex as convergence increases (for example, internet radio and IPTV).

As Table 1 shows, when we combine these figures with the huge proliferation of media within each of the main advertising categories, we get a sense of the problems facing both advertisers and media companies.

Comparing earlier and later periods in the United States is especially instructive as to how complex things have become, in terms of the choice of medium for advertising, since Creamer's day. At the earlier date in the United States there were 3 networks, 7 radio stations, 2 print options and 4 outdoor forms, with analogue terrestrial, satellite and cable systems.

By contrast, at the later date there were 7 networks plus Video on Demand. This expansion of media channels has also seen an expansion in the measurement of different audiences as audience measurement companies have struggled to keep up. Nowhere is this more evident than in the proliferation in the number and range of panels that Nielsen alone uses to cover different types of media and circumstances in which media might be used. The problem with this diversity of measurement across many different types of media is, as Gale Metzger points out, that not everyone now agrees with the efficacy of the single number of Creamer's day, when there were fewer media and more homogeneous audiences.

He observes: If you're the Campbell's Soup advertising manager, and you advertise and want a measure of the audience reached, today's challenge is different from yesterday's. In the old days Campbell's Soup had a 40 per cent share of the soup market and they were getting 20 television ratings. The random duplication of those two phenomena is the cross product of 20 per cent and 40 per cent, or 8 per cent.

Today, with a media plan that reaches 10 per cent of the market and a brand that has a 5 per cent share, you are dealing with less than an expected 1 per cent incidence within the population of people who use both the brand and that medium. And because that dimension is very hard to measure with any kind of precision, it is the major issue of the day. Moreover, the very ways of measuring audiences have come under the microscope, not least in the internet environment, which has grown very rapidly as an advertising platform.
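Metzger's arithmetic can be written out directly. A minimal sketch of the "random duplication" calculation he describes (the function name is ours, not a standard term of art):

```python
def random_duplication(rating: float, brand_share: float) -> float:
    """Expected fraction of the population in both audiences, assuming
    media exposure and brand usage are independent (the 'cross product')."""
    return rating * brand_share

# Creamer's day: a 20-rating campaign for a brand with a 40 per cent share.
old = random_duplication(0.20, 0.40)   # ~0.08, i.e. 8 per cent
# Today: 10 per cent reach and a 5 per cent brand share.
new = random_duplication(0.10, 0.05)   # ~0.005, i.e. 0.5 per cent

print(f"{old:.1%} vs {new:.1%}")
```

The point of the sketch is the scale difference: the expected overlap has shrunk from 8 per cent of the population to half of one per cent, which is why measuring it precisely has become so hard.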

According to the Independent, internet advertising revenue exceeded television advertising revenue for the first time in the United Kingdom. Internet advertising grew 4. Total advertising spend in the United Kingdom fell 3. You might think that it is easy to measure what people do on the internet: you simply track where people go and what they do.

Set Metered Markets: electronic boxes track viewing, but information about the viewer is recorded in a diary.
Out of Home: measures TV viewing at work, in bars, at airports and so on, using sounds from the programmes that are recorded automatically by special mobile phones.
Project Apollo: multimedia consumption and purchasing (now cancelled).
Nielsen BookScan: US book industry data (from booksellers).
Nielsen Mobile Bill Panel: activity on mobiles (from mobile bills).
Nielsen: website where users rate TV shows, movies and so on.
The Hub: former members of other panels who allow Nielsen to track them.
(Adapted from Story.)

But it is not that simple.

Take the example of Meebo, a service designed to fill a gap in the market for an instant messaging service that would enable interoperability between the different messaging services. Life is good for its founder, Seth J. Sternberg. More impressively, the service attracts almost a million people every day, who swap more than 60 million messages.

However, the company finds it difficult to prove that its site is as popular as it claims. As the Businessweek article noted: Alexa, a competing Web measurement service owned by Amazon.

Which is true? Probably neither. Sternberg's best guess is that the two rivals are about the same size. Yet even he doesn't know for sure (Businessweek). There is no agreement on metrics for the internet. This is precisely why agreements on radio ratings emerged in radio's early days: at the time, various stations claimed that they had the best radio station, but each would use a different way to measure the audience.

A standard way of measuring was required. Knowing, measuring and understanding media audiences have become a multi-billion dollar business. But the convention that underpins that business, audience ratings, is in contention. Joseph Creamer today would find a crowded market and he would be scratching his head about which business model might work.

Audiences are no longer limited to watching television via cable, terrestrial and satellite broadcasts. The legal and illegal viewing of television content on the internet is rapidly increasing, and internet protocol television (IPTV) is growing in popularity.

These developments have created the need to measure smaller, more fragmented audiences and ever more pervasive everyday exposure to media. There is now heated industry debate and experimentation over new and controversial ways of collecting and analysing the ratings to deal with this new reality.

Sampling, in its turn, has also become the subject of political and methodological battles over how best to represent people: in the census, on the internet and as audiences for broadcast television. At the same time, audience participation in research is declining. The traditional ratings convention is under pressure from all sides.

The chapters ahead demonstrate why and how audience ratings research became a convention, an agreement, and interrogate the ways that agreement is under pressure and is seeking to innovate to meet these challenges. With Google in alliance with Nielsen, one of the world's leading media researchers, planning literally to auction audiences off to the highest bidder, there are now attempts to establish new ratings-based coordination rules and currencies. The practices of ratings measurement have become the subject of court cases in the United States as different media companies seek ways of reshaping the ratings in ways that better reflect their view of their audience.

The crisis in the ratings convention matters because who is and who is not measured affects all aspects of media production, funding and consumption. The value of services, advertising expenditure, funding for content, technological developments, and the delivery and circulation of programming, are all dependent, to some degree, on the measurement and valuing of audiences.

As you will see, the contemporary controversy and crisis in the ratings convention recalls earlier controversies and crises, where there was the same querying of methodologies and technologies of counting and the same emergence of serious rivals using different methods from incumbent providers. What is different today are the ways in which all aspects of the ratings convention are in dispute at the same time rather than one particular issue dominating debate.

Previous studies have tended to see the ratings from one particular angle, for instance, measurement, to the exclusion of others. Now is the time to see audience ratings as they are, as a complex set of formal and informal agreements, a compact that governs a multibillion-dollar business. The authors will show how new uses for the ratings convention developed over time.

We analyse how the ratings have been used not only in commercial broadcasting, but also in public service broadcasting, in subscription TV and on the internet. And we demonstrate how the ratings have served as the coordination rule and currency for diverse industry stakeholders, in the process becoming integral to the ongoing operation, planning and development of the media industries. Audience ratings are important because they permit agreement among parties involved in the creation and use of ratings that audiences exist and that the numbers produced from ratings surveys represent what those audiences watch or listen to.

This is called syndicated research because a range of clients, from advertisers to stations, buy the ratings. The syndication reduces the cost to subscribers because they do not have to conduct their own research to get a picture of the whole market, unless, of course, as with Joseph Creamer, a new entrant with a new technology (in this case FM) needs customized research to convince clients there is an audience.

Customized research, of course, is often used for other things, like getting feedback on pilots for TV programmes and other types of in-depth programme- or station-specific work. It is a special type of measurement in that the estimates it produces of the number of people reached are bought and sold.

Effectively, the numbers are the product. The numbers dimension and value the audience. This causes confusion about how best to evaluate performance and how to buy and sell advertising.

As consumers change how they listen, arguably the way the industry buys and sells audio should change too. And as online audio continues to grow, the need to sell air-time beyond the local market will also grow, both locally and across different markets. NPR releases broadcast ratings twice a year for its national shows, which it uses as the basis for generating revenue from corporate underwriting and for setting the fees member stations pay to carry the newsmagazines.

Later this year, NPR will provide webcast metrics for member stations through a new partnership with AndoMedia. This will enable greater insight into the amount of digital listening to station streams, but it will be a separate service from Arbitron. Technically yes, but the impact is minimal. Arbitron has only just started to include HD and streaming listening from PPM-measured markets in its national broadcast ratings. Measuring exposure is one thing (and the PPM does that quite well), but including that listening in the ratings (which are based on broadcast listening behavior) is another matter.

The PPM can collect listening data from new forms of radio (HD, internet streaming), but it struggles to capture audio from devices that require a headset, such as a smartphone or an MP3 player. The other part of the story here is that broadcast radio is still the dominant listening platform. While NPR and Public Radio content can be accessed in many different ways, NPR raises its revenue almost exclusively through underwriting and programming fees based on Arbitron broadcast ratings.

The digital forms have yet to reach a critical mass, though every indicator suggests this will happen soon. The radio industry was dealt a blow from which it has never fully recovered. In sum, there can be little doubt that most of the damage done by the ratings, from the viewpoint of the audience, at least, results from misuse rather than from defects in the ratings themselves.

Yet in some respects the rating services do fail, just as in others they provide a useful tool for the industry. There are four major polling techniques: the roster-recall method, the telephone-coincidental method, the diary method, and the mechanical recorder. Under the roster-recall method, used by The Pulse, Inc., interviewers show respondents a roster of programs and ask which ones they recall hearing or seeing.

This method is fast and inexpensive, and it can include more people in the sample than any other technique (Pulse interviews 67,000 families, compared with fewer than 1,000 for some other methods). But the roster-recall has disadvantages, too. The person interviewed generally is the housewife, and she often has no idea what programs attracted her husband and children. Also, the memory, or the interviewer, can play strange tricks.
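The sample-size comparison matters because, under simple random sampling, the error in an estimated audience proportion shrinks with the square root of the sample size. A hedged sketch (simple random sampling is an assumption here; real ratings panels also weight and stratify):

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a proportion p estimated from n respondents,
    assuming simple random sampling: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# A program with a true 20 per cent audience, measured by two panel sizes:
err_large = standard_error(0.20, 67_000)  # roster-recall-scale sample
err_small = standard_error(0.20, 1_000)   # sample typical of other methods

print(f"n=67,000: {err_large:.4f}   n=1,000: {err_small:.4f}")
```

On these assumptions the 67,000-family sample gives roughly an eight-fold reduction in sampling error over a 1,000-family one, which is the statistical content of Pulse's size advantage.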

Not long ago a rating service using the roster-recall method inexplicably came up with a complete set of ratings for the evening programs of a San Antonio radio station.

The catch was that the station goes off the air daily at sunset. The second method is the telephone-coincidental. Among its users are Trendex and, in part, Hooper (who cross-checks with the third, or diary, method). Interviewers pick names in a set rotation from the telephone book and phone people to ask what program their set is tuned to at that moment. There is no memory loss, and the service is extremely fast.

Trendex, with interviewers in 10 key cities, can furnish information on a TV show the morning after it has appeared. Furthermore, since it is set up only in cities where there are three or more competing stations, Trendex can provide comparative, or share-of-audience, figures overnight.

Hazards of Phoning Viewers at Home

But the telephone-coincidental method, too, has its disadvantages. Homes without telephones cannot be reached, nor can people with unlisted phones. However, the biggest weakness of the system is the questionable reliability of the interviewers, who usually are untrained housewives, shut-ins or schoolteachers hired on a part-time, piece-work basis. The third system of producing ratings is the diary method, used mainly by Videodex and the American Research Bureau (known to the industry as ARB), and also, along with the telephone, by Hooper.

About time: listener-tracking watch to drag radio ratings into the modern age. The Sydney Morning Herald.


