I am on my way home from Philadelphia, where the 2016 conference of the American Political Science Association (APSA) was held. This conference is the biggest and most important political science conference in the world. The dominance of American universities and journals in the field is such that the event is a must-go for many political scientists. This is the place where the big names gather, and where the newest and hottest research is presented.
Yet I cannot say I was overwhelmed. Yes, there were impressive papers, and yes, I learned a great deal and found new ideas. But I also saw why the American brand of political science is loathed in some quarters, and why a caucus of 'new political science' at the conference called for political science to take up new directions – which, by the way, they have been doing for fifty years. Despite the considerable research budgets and the often fancy methodological wizardry, I often wondered at panels whether the right questions were being asked.
Both the status and the loathing of the 'American brand' of political science have to do with the centrality of quantitative research methods. Paper presentations were full of sentences like 'I constructed this database' or 'I measured x by using data which I collected through y'. The word 'data' was probably the most frequently used word of the conference, closely followed by words like 'hypothesis' and 'regression'. The effort and energy (and money!) that went into collecting various forms of data was impressive. And often useful too: I saw new measures of electoral integrity, for example, which help us to study and compare the extent to which polls are fraudulent in different countries. I saw, similarly, impressive new research on informal politics, such as surveys on the role of brokers in providing access to services. The hesitance of some social scientists (often anthropologists) to adopt more quantitative methods often prevents them from engaging in comparative research or identifying general patterns.
In that sense I am all for quantification. In my own research I have tried to find ways to quantify and compare patterns of informal politics whenever possible. There is an unnecessarily strong divide within social science between 'qual' and 'quant' people, strengthened by disciplinary divides and a strong dose of distrust of each other's research methods. For example, at one of the receptions during the conference, I had a drink with someone who said that 'when I read these qualitative papers, I feel that the stories I read are just selected to support the argument of the authors'. I could see where he was coming from – although a good ethnographic paper never hinges on one story, but rather on the coherence of a larger body of observations. Yet I thought his comment was funny, because it mirrors the distrust towards quantitative research that I sometimes feel – that authors operationalize their variables in such a way that it maximizes the chance of finding a 'significant' result. There can be bad science in each research tradition, in the sense that researchers do sometimes mask the degree of uncertainty in their findings.
The problem, however, is that research methods are often chosen not because of considerations about how best to tackle the topic at hand, but because they are fashionable within a researcher's discipline. There is pressure to conform. This is a form of 'groupthink': people who are raised in a particular research tradition go to conferences where they meet the same kind of people who share the same methodological preferences. Within this 'group' a shared idea of what 'good science' is develops, which becomes a template for (junior) researchers to follow. This can be inspiring and helpful for junior researchers, but such methodological fashions can also be a straitjacket that prevents people from asking new questions.
I regularly encountered this 'straitjacketing' of research at the APSA conference. It was present in the overdose of regression tables that came my way, and in the very limited number of real-life illustrations in the papers. The straitjacket was also present in the way the above-mentioned researcher said that 'I get most of my insights from my fieldwork (i.e. the qualitative stuff), but it is difficult to get a purely qualitative paper published'. He is right: purely qualitative papers rarely make it into an (American) political science journal. Given the stringent word-length limits of most journals, even the publication of a mixed-methods paper is difficult. As the job market in the US is brutal, you simply need to adopt quantitative research methods to stand a chance. I experienced the straitjacketing myself: when I presented a paper that tried to combine ethnographic material with the results of an expert survey, I struggled to do both within the short time frame of a 15-minute presentation.
This focus on quantification would not be an issue if we could answer all our questions in this way. But this is where the problem lies. The selection of papers that I saw at APSA suggests that the research agenda is getting skewed in such a way that issues of power and inequality, in particular, get short shrift. It is not a coincidence that the above-mentioned group of 'new political scientists' largely adopts qualitative research methods. My field, the study of informal politics, is an area where this is particularly salient. Informality is something which, because of its very nature, is difficult to observe, let alone quantify. As a result, the study of formal politics – parties, elections, media, public debates, etc. – is much more developed than the study of, say, patronage networks, informal business-politics linkages, or the clientelistic provision of public services. While these issues shape the character of politics and government at least as much, the study of the dimensions of informal politics remains a relatively small field, and much work remains to be done.
A second problem is that the drive for quantification often leads to a disregard of mechanisms and social processes. A surprising number of the papers I heard adopted a very physics-like approach to the social world: they ask how much output variables (say, quality of public services) covary with input variables (say, voter turnout). If a significant result is found, the author concludes that the input variable must have an effect. Yet many papers do so without paying much attention to the actual ways in which this effect occurs. Some sort of rational-choicy argument is concocted from behind a university desk, which often has little to do with the choices and challenges that people face in real life. This lack of attention to actual, observable, on-the-ground mechanisms and processes leads not only to misunderstandings, but also to a very dry, mechanistic view of the world.
A third, related, concern is that these quantitative research methods are so time- and money-consuming that there is often little intellectual energy (and word space) left for seeing and discussing the broader picture. A particularly trenchant example was a paper on electrification in India, which made use of a very impressive dataset of night-time satellite images to show the changing rate of electrification and power outages across India. Very cool maps, and very cool that you can actually do that. But the eventual conclusion was underwhelming: that power outages happen more often in areas that have recently been connected to the grid. That is not surprising, as it seems very likely that power plants struggle to keep up in areas with expanding demand. Yet such an analysis draws attention away from more relevant issues, such as why there has been so little investment in generating capacity in poorer parts of India, and thus does little to help us understand the highly unequal quality of public services there. I think the author had ideas about this – they just did not fit in the paper.
That being said, I greatly enjoyed seeing all sorts of new methods that are being devised to study informal politics quantitatively. Survey experiments, broker surveys, games: very cool stuff. If only it were integrated a bit more with insights from fieldwork or interviews. My feeling after this conference is that there is a great need for political science journals to promote the publication of more mixed-methods papers. While many research projects adopt mixed methods, in practice mixed-methods papers are the exception rather than the rule. By loosening the tight restrictions on word length, and by adopting a review process that is more sensitive to the challenge of combining qualitative and quantitative research methods, political science could get a boost. And the study of informality in politics would become a bit easier.
Hi Ward (Sunday 28 May)
I am working on a blog myself, and I see how you have done it.
One question I wanted to check: whether on a blog like this (now that I have a WordPress template myself) I can also post the 1-2 page reactions to the propositions of our 7 June seminar.
Regards, Joop