Methodology for
AAR Style Poll Computations

Setting up our poll to determine trends in people's favorite romances turned out to be the easy part. The hard part came when 218 ballots rolled in and we had to figure out how to interpret them. Jennifer Schendel worked day and night using a manual sorting system, but as time rolled on it became clear that we needed more help. I tried to work out my own system for evaluating the ballots, and called my sister Sarah Novak, a graduate student in psychology who studies trends all the time, merely to verify that my technique would work. After spending ten minutes listening patiently to my incredibly baroque system, Sarah said, "Look. Just send me your data, and I'll take care of it for you." I was so grateful that I promised to refer to her as my personal Statistics Goddess. If you're interested, I will do my best to explain what she did.

The SG decided that the best and simplest test would be to determine correlations in the data. A correlation judges how similar two sets of numbers are - it measures the linear relationship between two variables. Suppose I want to find out how similar my tastes are to LLB's. I draw a graph with an X and Y axis, each numbered 1 to 4 (representing the four possible scores). I label the X axis "Mary's Scores" and the Y axis "LLB's Scores." For every author, I plot a point somewhere on the graph accordingly. If Laurie and I rate every author the same, then all of our points will be (1,1), (2,2), (3,3), or (4,4), and our graph will show a diagonal line running exactly between the X and Y axes, with a perfect slope of 1. (Slope is the change in y divided by the change in x; since every one-point increase in my score is matched by a one-point increase in hers, the slope is 1.) Our scores have a perfect correlation. On the other hand, if our ratings are the exact opposite of each other (meaning that whenever LLB gives an author a high score, I give her a low score), then the points will be (4,1), (3,2), (2,3), and (1,4), and our graph will show a diagonal line in the opposite direction with a perfect slope of -1. (Here the slope is -1 because every one-point increase in my score is matched by a one-point decrease in hers. The correlation calculation also re-centers the graph so that the middle of the scale, (2.5,2.5), becomes the origin (0,0), by subtracting each person's average score.) In that case, our scores have a perfect negative correlation. Perfect correlations never happen in real life, but it is still possible to determine which authors are most alike by measuring all of the correlations and then comparing them to find the ones that come closest to 1.
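If you like to see this sort of thing worked out by a computer, here is a small sketch in Python using the SciPy library's pearsonr function. This is not anything the SG actually ran, and the scores below are invented for illustration; it simply automates the graph-and-slope reasoning described above for two readers rating the same five authors.

    from scipy.stats import pearsonr

    # Made-up 1-to-4 ratings of the same five authors by two readers.
    # These are illustrative numbers only, not actual poll data.
    marys_scores = [4, 3, 1, 2, 4]
    llbs_scores = [4, 2, 1, 3, 4]

    # pearsonr returns the correlation coefficient and a two-tailed p-value.
    r, p_value = pearsonr(marys_scores, llbs_scores)
    print(f"Pearson correlation: {r:.3f}")   # 1.0 = identical tastes, -1.0 = opposite tastes
    print(f"Two-tailed p-value:  {p_value:.3f}")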

To check our correlations, the SG ran our data through a statistics program called SPSS. SPSS compared each author's ratings by the 218 readers to every other author's ratings by the same 218 readers. It produced a "correlation matrix" - an immense chart that looks like the mileage grid showing the distance between cities in a road atlas. As on a mileage grid, every author had her own column and her own row on the chart. Every square on the grid (all 1200 of them) was the intersection between two authors' scores, and showed their "Pearson Correlation Coefficient" - a positive or negative number whose size falls between 0 and 1. Since no correlation is ever perfect, a score of .700 would be incredibly high; most of our "significant" scores fell between .185 and .365, with the highest correlation, .594, between Jayne Ann Krentz and Amanda Quick. The SG also ran a test for "2-tailed significance." If you ask three Jennifer Crusie fans whether they like Janet Evanovich, and two of them say "yes," does that really mean that two out of three Crusie fans will like Evanovich? The significance test takes the size of the sample into account and asks how likely it is that a correlation that strong could have turned up by pure chance. Results where the odds of pure chance are below five in a hundred count as "significant"; results where the odds are below one in a hundred are even more trustworthy. Since I was interested in providing the largest pool of choices possible, I chose to include authors whose scores only met the five-out-of-one-hundred test. However, the SG implied that a more ethical researcher would probably limit themselves to recommendations that met the one-out-of-one-hundred standard.
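For the curious, here is a rough sketch of those same two steps in Python rather than SPSS. The little ratings table is invented and the author columns are only placeholders; the point is just to show how a correlation matrix and a two-tailed significance check come out of a grid of reader scores.

    import pandas as pd
    from scipy.stats import pearsonr

    # Invented stand-in for the ballots: each row is a reader, each column an author,
    # and each cell is that reader's 1-to-4 score.  (Not the actual poll data.)
    ratings = pd.DataFrame({
        "Krentz":    [4, 3, 4, 2, 4, 3],
        "Quick":     [4, 3, 4, 1, 4, 3],
        "Crusie":    [2, 4, 3, 4, 1, 2],
        "Evanovich": [1, 4, 3, 4, 2, 2],
    })

    # The correlation matrix: every author compared against every other author.
    matrix = ratings.corr(method="pearson")
    print(matrix.round(3))

    # Two-tailed significance for one pair, checked against the .05 and .01 cutoffs.
    r, p = pearsonr(ratings["Krentz"], ratings["Quick"])
    verdict = "significant at .05" if p < .05 else "not significant at .05"
    print(f"Krentz vs. Quick: r = {r:.3f}, p = {p:.3f} ({verdict})")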

The SG had this to say on the subject of statistical significance:

"As I alluded to earlier, what you count as significant depends on what's at stake. If you're talking about risking people's lives, any chance is too much. But if you're talking about a chance of wasting a couple of hours of people's time because they read a romance novel that they didn't particularly like even though you thought they would, well...that's not going to keep you up at night, is it. If people want a recommendation for a good book to read, usually they go on the opinion of one or two other people, so getting 218 people's opinions has got to be a better alternative than that."

Mary Sophia Novak