BuzzJack
Entertainment Discussion

How the charts were compiled in 1984
ben08
post 22nd March 2021, 02:48 PM
Post #1
BuzzJack Enthusiast
Joined: 31 December 2007
Posts: 1,189
User: 5,152

Some of you may be interested in this article.

MUSIC WEEK SEPTEMBER 29 1984
The Gallup chart is the single most vital element of the music industry and inevitably it is almost constantly at the centre of controversy, usually not of its own making. There are many myths and misunderstandings about how Gallup compiles the chart, and in an effort to lay them all to rest, chart manager GODFREY RUST has compiled a blow-by-blow account of exactly how the hits are charted.

Charts: the inside story
RUMOURS. It's somehow appropriate that one of the longest-running chart albums should have that name. There have been rumours about the chart even longer than Fleetwood Mac's album has been in it. Of course, the chart is extremely influential (too influential, some have argued) in the working lives of many people in the record industry, and because it is compiled "behind closed doors" and unveiled to its nervous public at the push of a button at 8am every Tuesday morning, it is only natural that a great deal of speculation should surround it in a business that thrives on rumour and gossip. Since Gallup took over the industry chart 18 months ago a great many questions have been asked and answered — but in some corners of the business the chart remains a thing of rumour and mystery. Let me try and de-mystify it for you.

Gallup's job is basically very simple. We collect the week's sales data. We add it up. We check it. We discard some of it. We add it up again. Music Week and the BBC publish it. To do this requires a full-time team of five, 450 Dataport machines, three computers and the help of Gallup's computer staff and telephone interviewers. Most important, all of it is done in a way which is checkable and completely free of personal, subjective decisions. Take each step of the process in turn, and on the way I hope to dispel more than a few persistent rumours.

First, we collect the sales data. There are 270 shops with Dataport machines from which Gallup draws its chart data (there are non-chart shops with Dataport machines as well, but more of those later). Of these 270 we collect from an average of 248 each week (figures for June this year). The remaining 22 are shops closed or being re-fitted, or with Dataports out of order, or with Telecom problems, or in the process of being removed or added to the panel. One of our computers is programmed to dial (using Telecom midnight lines) all chart shops automatically in the early hours of Thursday, Friday and Sunday mornings. Each telephone number is attempted up to four times to successfully collect the data. The whole exercise is carried out simultaneously in London and by another computer in Oxfordshire in case of computer failure in our main office. Every call is logged so that we know exactly which machines have been contacted on each night and the business of checking and maintaining communications (looked after by researcher Rick Smith) is a continuous one.
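(As an aside, here is a minimal sketch of how that overnight polling routine could be modelled. Only the four-attempt limit and the call logging come from the article above; the shop list, the collect() stub and everything else is purely illustrative Python, not Gallup's actual software.)

import datetime

MAX_ATTEMPTS = 4  # "each telephone number is attempted up to four times"

def poll_panel(shops, collect, call_log):
    """Dial every chart shop's Dataport, retrying failures, and log every call."""
    collected = {}
    for shop in shops:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            ok, data = collect(shop)                      # dial the shop on the midnight line
            call_log.append((datetime.datetime.now(), shop, attempt, ok))
            if ok:
                collected[shop] = data
                break                                     # stop redialling once the data is in
    return collected  # shops still missing are tried again on the next polling night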

Having collected it all, we add the sales up. The computer does the number-crunching work while we provide it with the files it needs for two essential jobs: to identify the catalogue numbers, and to balance the raw sales data correctly. Gallup keeps two major files. One contains all product currently selling in any quantity. The other contains all the labels and prefixes currently in use. Researcher Danny Pirani keeps these two files fed with information. They are updated daily from samples and release information sent by record companies, and each Friday and Monday from the listing of unidentified catalogue numbers which have come from polling the Dataports. If a number which has been sold is not found on the product file, the computer searches the prefix file to identify the label to which it belongs. If that file can't be used we contact the shops which entered the sales. One way or another all numbers which record five or more sales through the panel (and most numbers which record fewer) are identified. Once identified, all future sales of that number will be attributed automatically to the right product. Records can have any number of alternative catalogue numbers, for special formats, import copies, even sleeve misprints. Duran Duran's Seven and The Ragged Tiger, for example, collects sales from any of the following: DD1 (short LP number), EMC1654541 (full LP number), 1654541 (LP number minus prefix), 6A1654541 (Record Merchandisers label), CDP7460032 (compact disc) plus all the cassette equivalents. This is fairly typical.
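(Again purely as a sketch, the identification cascade just described, product file first, then the prefix file, then a query back to the shop, might look something like this; the data structures are invented for illustration and are not Gallup's real files.)

def identify(cat_no, product_file, prefix_file, query_list):
    """Return the product a keyed-in catalogue number belongs to, if it can be found."""
    if cat_no in product_file:                    # known number, including alternative formats
        return product_file[cat_no]
    for prefix, label in prefix_file.items():     # otherwise try the label/prefix file
        if cat_no.startswith(prefix):
            query_list.append((cat_no, label))    # goes onto the Friday/Monday follow-up list
            return None
    query_list.append((cat_no, None))             # last resort: ring the shop that entered it
    return None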

This should get rid of one or two rumours. The idea that Gallup may ignore sales of a record if we haven't been told of its existence is nonsense: we hunt out everything, however obscure. Nor can we "forget to add in" the picture disc sales, or the 12-inch sales, or whatever other special format may be about; the process is automatic. Other rumours circulate when Gallup makes a phone call to a shop to ask about certain sales or records. There are many reasons for phoning our panellists, and it's true that one of them may be to investigate a possible breach of the BPI's code of conduct, but more often than not we are simply carrying out a routine check on catalogue number queries.

Having identified the sales, the computer "balances" them to give a representative picture of what is selling nationwide. This is where the much misunderstood word "weighting" first crops up, and I must explain how the sample is put together, for it should be made clear that most market research is weighted as a matter of course. The point of weighting is simple: to produce a result from a sample which represents the whole. Few samples are automatically representative in their own right and to produce unweighted figures from an unbalanced sample is about as useful as recording with out-of-tune instruments. If, for example, you wish to find the country's most popular politician you might go and ask 100 people. If 75 of them are men you will get a result biased towards men's opinions, because in the whole country men only account for about 50 per cent of the population. So you down-weight your 75 men's opinions and up-weight your 25 women's opinions to get a result which comes out as if it was from a 50-50 sample. Every shop in the Gallup panel carries a weighting for a similar reason. The panel is balanced three ways — by type of shop (HMV, Our Price, Virgin, Woolworth, W H Smiths, Boots, Menzies and "others"), by size of shop (large, medium and small) and geographically (by TV region).
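(The politician example works out like this. It is a minimal worked version: the 75/25 figures are the article's, the code itself is purely illustrative, and the weight for each group is simply its population share divided by its sample share.)

sample     = {"men": 75, "women": 25}     # who was actually asked
population = {"men": 0.5, "women": 0.5}   # how the country actually splits

total = sum(sample.values())
weights = {group: population[group] / (sample[group] / total) for group in sample}
# weights == {"men": 0.666..., "women": 2.0}

# 75 men x 0.67 = 50 "effective" men and 25 women x 2.0 = 50 "effective" women,
# so the answer comes out as if it had been a 50-50 sample.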

We know what the total balance of shops in the country looks like, so each week our panel is weighted to ensure it mirrors that as closely as possible. Some shops' sales are weighted up and some down. Every shop carries a weighting and its weighting will change slightly from week to week depending on the balance of the panel. If, for example, we have two fewer Woolworth shops this week than last because of Telecom problems, the remaining shops' sales will be up-weighted to compensate, and so on. A grid is built into the system so that the computer automatically adjusts the weight of each shop to compensate for the minor panel changes that happen each week.

The question of balancing by region and by type of shop has created a good deal of misunderstanding over the years so it is worth dwelling on for a moment. A letter drafted for Gallup by a number of independent labels last winter expressed a common concern about "regional weightings" when it asked: "If a record sells more than the regional average in one area are its panel sales automatically reduced to the average norm?" Now, I can't exactly work out what the question means but I see what it is basically driving at: are you "penalised" for having a "regional breakout" on a record? Or for that matter, for having a record which sells only in independent shops? Or for being TV-advertised and therefore selling in multiple stores in one area? The answer is no — exactly the opposite is the case.
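(To illustrate the grid idea: each cell of the grid, shop type by size by TV region, is pushed back towards its known share of the national market when shops drop out. The sketch and figures below are assumptions, not Gallup's actual grid or numbers.)

def cell_weight(national_share, panel_share):
    """Weight applied to one cell of the grid so that it keeps its national share."""
    return national_share / panel_share if panel_share else 0.0

# e.g. suppose Woolworth-type shops should account for 20 per cent of the picture,
# but with two shops lost to Telecom problems the remaining ones supply only 16 per
# cent of this week's panel sales: their sales are weighted up by 0.20 / 0.16 = 1.25.
woolworth_weight = cell_weight(national_share=0.20, panel_share=0.16)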



Take three records. One, let us say, sells 1,000 in London, another sells 1,000 in South Coast disco specialists, and the third sells 1,000 spread evenly throughout the country. Now the point of the balancing grid is to ensure that the three come out next to each other in the chart. It is the total over-the-counter UK sales, not where they sell, which matters. Take a few recent examples: Grandmaster Flash's White Lines sustained a mid-chart placing for several months basically because of huge sales in Lancashire. Michael Jackson's Off The Wall burst back into the Top 20 because of Midlands TV-advertising, with over 80 per cent of its sales in the Central area. Nino De Angelo's Guardian Angel charted recently almost entirely on sales which followed regional airplay in Northern Ireland and Lancashire. Divine — like most current hi-energy product — was a blockbuster in South Coast indies. Tin Tin continues to sell by the bucketful in Birmingham. Gallup's is a national chart, but that is not the same as a chart which only includes things that are "selling nationally". Few records sell across the board until they reach the Top 20 and sometimes not even then. The weighting grid doesn't penalise regional action, it protects it, because it makes sure that each region carries its weight. The age-old rumours like "it was kept out of the chart because it was only selling in the South" are complete myths. So are rumours along the lines of "the chart was based on Woolworths this week", or "the chart didn't have any Scottish shops in it" (or even one I heard that the chart was once entirely based on Scottish shops).

A final point on regional and shop-type balancing. I have been given the impression that in some corners of the business there is an uneasy feeling that if Gallup is not forewarned about a TV advertising campaign or a regional break-out our computers may get confused and we are likely to take arbitrary action against a record on the basis that "we think it looks a bit odd". If the sample is balanced and the sales are genuine we don't mind how odd it looks, and we never take arbitrary action. Please forget the myth of regional weighting.

By now it is Monday morning, the sales are added and balanced and we begin our check procedures. The point of these is to identify and discard any of our data which is unreliable or unrepresentative. It is done quite systematically. We are asking three questions: 1. Which shops have given us incomplete data? 2. Which shops have recorded unrepresentatively high sales on any particular record (and why)? 3. Which records have clearly not been selling as well in non-chart shops as on the chart panel?

This is how we get the answers. For question 1: there are three reasons why data from a shop may be incomplete — because of a Dataport problem, a communication (Telecom) problem, or because they haven't been entering all their sales. The first two problems are identified immediately from our computer logs. Then for each chart shop we look at the total sales recorded for the week, which must be close to its known average turnover, bearing in mind the seasonal ups and downs of the market. We then look at the daily totals, which must conform to a normal pattern for that shop. Finally we look at the keying-in pattern across each day. With its built-in time-pulse the Dataport shows in quarter-hour periods precisely how many sales were recorded and if necessary (as it sometimes is) we can place the particular entry of any sale within a few minutes. Of course shops vary considerably. Some conform to the national average sales pattern (Monday 13 per cent, Tuesday 11 per cent, Wednesday 12 per cent, Thursday 14 per cent, Friday 19 per cent and Saturday 31 per cent for July this year — normally weekends have a larger share during the winter), and others have very different trading patterns because of early or late closing or local conditions. Each shop is checked with these in mind, and telephoned to clear up any irregular entry patterns.
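(A rough sketch of the question-1 check described above: comparing a shop's weekly total to its known average turnover and its daily split to its normal pattern. The tolerances are invented for illustration; the article does not give the actual thresholds.)

def looks_incomplete(week_total, avg_turnover, daily_shares, normal_shares,
                     total_tol=0.25, daily_tol=0.05):
    """Flag a shop whose weekly total or daily pattern is well away from its norm."""
    if abs(week_total - avg_turnover) > total_tol * avg_turnover:
        return True                                   # nowhere near its usual turnover
    for day, share in daily_shares.items():
        if abs(share - normal_shares.get(day, 0.0)) > daily_tol:
            return True                               # e.g. a Saturday's sales never keyed in
    return False                                      # passes; the shop stays in this week's chart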

With these analyses we can diagnose the health of all our Dataports — whether they are being well or badly used. All shops that fail these tests are discarded for that week. The current (July) average figures are: 16 out of the 248 contacted are discarded, leaving 232 to be used in the final chart. Now we can also discard a whole pack of rumours. The following, with their variations, are all myths: "The chart was based on only 100 shops this week" — it rarely dips below 230, and this year's low is 218. "Dataports are breaking down all over Britain"... It is true that the Dataport has not proved to be the most resilient of machines, and the breakdown rate has been higher than originally expected, but it has never posed a serious problem for the validity of the chart. "Saturday sales aren't keyed in"/"Sales are keyed in at the end of the day"/"Large shops only have to key in one sale in ten" etc. — shops are only included if they show a full week's data, properly keyed in. "Long catalogue numbers aren't entered" — it is worth noting that The Beach Boys' cassette reached the No 1 spot with the catalogue number TC2BBTV1648635. "Shops only key in chart product" — more than 50 per cent of album sales are on titles outside the Top 200. For Gallup, non-chart product is just as important as chart sales for producing the industry's market share figures: without these, labels like Old Gold, Deutsche Grammophon, Cambra, Chevron, MFP and Ditto would not feature as strongly as they do in our monthly and quarterly figures.

Now of course some shops don't key in all sales to their Dataport. A few hardly enter any; but these are never used in the chart. There is an easy way to spot a non-chart Dataport: it is one which isn't being used properly.

For question 2: what about "freakish" sales in particular shops? Personal appearances, local bands, labels owned by the shops themselves, special offers — all these create untypical sales in an individual shop. Our computer identifies all cases where a single shop sells significantly more than any other shop on the panel, and a "ceiling" is put on the number of sales which will be accepted on that record from that shop. The remainder are discarded as being unrepresentative. This is done to a standard formula and it affects only those sales which are entirely untypical of any other shop. The reasons are normally known to us, and if not we will telephone to find them out.

By Monday afternoon our telephone interviewers have collected the sales data from our panel of "check" shops, and our computers are ready to answer question 3: "Which records have clearly not been selling as well in non-chart shops as in the chart panel?" This is the part of the chart system which has provoked the most interest and the most misunderstanding: the main concern of the indie companies who wrote the letter mentioned above was these checking procedures. Why and how does Gallup operate them? Can they be fair and objective? Yes, they can be and they are.
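(The "standard formula" for the ceiling isn't published, so the multiplier in this sketch is a made-up assumption; it is only meant to show the shape of the rule: a cap tied to what other panel shops are actually selling, with the excess discarded.)

def capped_sales(shop_sales, highest_other_shop, multiplier=2.0):
    """Accept at most `multiplier` times the highest sale seen in any other panel shop."""
    ceiling = multiplier * highest_other_shop
    return min(shop_sales, ceiling)    # the excess is discarded as unrepresentative

# e.g. a personal appearance pushes one shop to 400 copies of a single while no other
# panel shop sells more than 30: only 60 of the 400 would go forward into the chart.
capped_sales(400, 30)                  # -> 60.0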
Gambo
post 22nd March 2021, 04:29 PM
Post #2
BuzzJack Climber
Joined: 29 July 2014
Posts: 198
User: 21,106

Very interesting indeed and thanks for posting. If only we could now say, 37 years on, that all these issues surrounding official chart compilation had been thoroughly and transparently ironed out! While it's not the same bag of problems that challenged Gallup in '84 that troubles Kantar in '21, it still feels like there are numerous complications in bringing a chart to a satisfactory conclusion and presenting it in a way that manages (some might say 'massages') the vagaries of the current market, which are so much more complex than in the '80s thanks to the digital revolution and diverse means of consuming our music.

I'd like to see a similar article by Godfrey's current successor aimed at demystification of some of the present chart concerns, but I shan't hold my breath!

PS: I wonder whether they published that in a bid to defend their reputation pre-emptively at a time when ILR stations were just about to launch a serious competitor to the official countdown? The Network Chart began I think on Sun 30 Sep '84!


This post has been edited by Gambo: 22nd March 2021, 04:31 PM
chartjack2
post 22nd March 2021, 05:18 PM
Post #3
BuzzJack Enthusiast
Joined: 19 November 2014
Posts: 1,410
User: 21,383

Very interesting read, thank you.

I know we have the Music Week info - but I wish the full sales for the full Top 100 were released every week.
fiesta
post 22nd March 2021, 07:07 PM
Post #4
BuzzJack Enthusiast
Joined: 15 March 2006
Posts: 1,630
User: 232

I was reading another article in Music Week from December 1983, about a row that had erupted over chart weighting.
It concerned Roland Rat (a puppet from children's TV). His record company claimed the Rat had been unfairly targeted when, after Gallup had weighted the chart, his single Rat Rapping suddenly dropped down the chart.
I don't suspect there would have been many complaints from chart followers!
Robbie
post 22nd March 2021, 08:56 PM
Post #5
BuzzJack Gold Member
Joined: 4 April 2006
Posts: 3,445
User: 366

Here's an older article which was first published in Music Week in either 1975 or 1976 and which was subsequently reprinted in Rock File 4. It's a good overview of how the charts were compiled in the mid-70s, in the BMRB era.

How The Charts Are Compiled

1drv.ms/b/s!ApaBhZNIN2ZmhM5wSIDAtK5zDkPR6Q?e=mqso0e

https://tinyurl.com/4y733h8n

(hopefully the link will work)

There is also an explanation of how the Billboard charts were compiled at the time.

