Sunday, November 23, 2014

"Get me off your f_____ mailing list!"

I'm sure we have all had this sentiment, given the increase in garbage emails inviting us to attend bogus conferences and publish in bogus journals. Fed up with this, two authors created a paper that consists primarily of the phrase "Get me off your f_____ mailing list," repeated several hundred times. They submitted it to the International Journal of Advanced Computer Technology, whose editor accepted the paper!! This is hilarious. See the nice discussion on Scholarly Open Access, and don't neglect to read the comments. There is also some discussion on IFLScience and elsewhere.

The discussants at Scholarly Open Access suggest using the random text generator SCIgen, which creates bogus computer science papers appropriate for bogus conferences and journals. For more humanities-oriented readers, try the Postmodern text generator - every time you access the site, a new postmodern text is generated.

Thanks to Julie and Rudy for alerting me to this hoax. Wow, it just occurred to me that perhaps some archaeology papers I've seen lately are hoaxes. Hmmmmmm.........


Sunday, November 16, 2014

READ THIS ARTICLE!

Lund, Christian  (2014)  Of What is This a Case? Analytical Movements in Qualitative Social Science Research. Human Organization 73(3):224-234.


I just read this article, and it is fantastic. Alison Rautman suggested it: thanks, Alison! Yeah, maybe it's weird to get excited about epistemology, but given the sorry state of argumentation in archaeology, we really need to talk more about epistemology. A good place to begin is with methods of case study analysis.

Many, or perhaps most, archaeological studies are examples of case study research. That is, we analyze a small number of cases in order to draw conclusions and make general points. In my previous post on case study research, I suggested that archaeologists would do well to pay attention to the methodological literature on case study research in the social sciences. That is a rather large literature, and much of it applies only tangentially to the kinds of data and concepts we use in archaeology. I always suggest that people read John Gerring's (2007) textbook, a very useful introduction with relevance for archaeology. Now I will add Christian Lund's new paper.

Lund uses a simple and clear scheme to analyze a number of issues in case study research. His basic scheme is a table that crosses the specific versus the general with the concrete versus the abstract. Most of this brief article consists of discussion of how scholars move between the specific and the general, and between the concrete and the abstract, in the course of research. Lund uses examples from his own ethnographic research on political economy in Africa, but most of what he says has broad applicability to archaeology and history. Here are some of the relevant concepts:


“Generalization is an attempt to see resonance with events and processes, largely at the same level of abstraction but in different temporal or spatial contexts.”

Abstraction “is an attempt to identify inherent decontextualized qualities or properties in the studied events.”

Theorization “is about moving from observation of empirical events, through concepts, to be able to say something about the inherent qualities and dynamics in contexts other than the ones studied. That is, there is both an element of decontextualization or abstraction and an element of transfactual corroboration in the process.” (all quotes from p. 229)

Scholars move back and forth among these concepts, among the cells of Lund's table, in their efforts to make sense of their data. In the article, Lund then fills out the scheme for his particular research project.

I got all sorts of ideas and insights from this article. Rather than laying it all out here, I will recommend that you read the paper. I will just mention two points made in the conclusions (p. 231):

  1. "It is the movement between [the cells] and their articulation that produces epiphanies and analytical knowledge"
  2. "To discuss one's work with others on a regular basis may be the most important practice to gradually hone in on the potentiality for generalization, abstraction, and theoretical of the case."
Go read this article, and then talk about your research with friends, colleagues, and family. It will help you develop insights and advance your work.

Gerring, John  (2007)  Case Study Research: Principles and Practices. Cambridge University Press, New York.

Lund, Christian  (2014)  Of What is This a Case? Analytical Movements in Qualitative Social Science Research. Human Organization 73(3):224-234.

Friday, November 7, 2014

Social Science History Association, annual meeting

I am posting from Toronto, where I am attending the annual meeting of the Social Science History Association.  I've been a member of the SSHA for a few years; when I resigned from the American Anthropological Association in protest of their anti-science stance, I joined SSHA. I actually attended my first meeting in the 1980s, and published a paper in their journal, Social Science History, in 1987. But this is the first meeting I've attended since then. This has been an interesting weekend.

Professionally, there are some things the SSHA does well at their meeting, much better than the Society for American Archaeology meeting. Their sessions are all two hours in length. Most contain four papers of 20 minutes, plus time for a discussant, as well as time for discussion with the audience. Some of these discussions are run formally, with questions and answers, and some are more of a free-form discussion between presenters and audience. This format makes sessions a much more intellectually satisfying experience than those at the SAA meeting, with its rushed papers, often ten or fifteen in a session, and no time for discussion. All sessions fit into a single schedule grid, with 15 minutes between the two-hour time slots. This gives time for continued discussion after a session, time to talk to people between sessions, and time to get to the next session. Sessions run on time.

The SSHA also has various alternative formats. They have a bunch of "author meets critics" sessions, with reviews of recent books and discussion with the author and the audience, and they have some roundtable events without formal papers.

Intellectually, many sessions have good coherence. The SSHA is organized into a series of 15 or 20 "networks." These are topic-based groups of members; I am in the urban and macro-historical change networks. There are also networks on gender, historical geography, cultural history, politics, and a bunch of others. Panels are reviewed by the networks, and the network coordinators assemble sessions from loose papers. My talk was submitted to the urban network, but they put me in a session organized by the historical geography network because of the GIS theme. The papers were diverse but very interesting and coherent in terms of analyses of movement using historical GIS data.

Each annual conference has an overall theme. Although not all sessions have to relate to the theme, many do. This year the topic is "inequality," and I managed to hear a talk on inequality by Andrew Abbott, a sociologist I admire. Very interesting. They also evidently have a session about Charles Tilly every year, always overflowing. Tilly was an important part of the SSHA, and many of his colleagues and students are active members. There were four very interesting talks on extending Tilly's work in new directions. The chair was Daniel Little, author of the best social science blog, Understanding Society (it is listed in the right-hand panel here). In the audience discussion I talked about the lack of interest in Tilly's work in the field of anthropology. I mentioned that a paper I wrote (with Frannie Berdan) applying Tilly's model of durable inequality (Tilly 1998) to the Aztecs was rejected by American Anthropologist. It's sitting in a (virtual) drawer right now. Then, after the session, the editor of Social Science History came up and said that she'd love to get our paper applying Tilly's ideas to the Aztecs for the journal!
Charles Tilly
I also met Richard Harris for the first time. So now we both have to stop telling the story of how we co-authored a paper with someone we had never met (Harris and Smith 2011)! Our paper was a response to a clueless survey of the field of urban studies that left out history and comparison. Our reply: "History matters" for urban studies.

I also met a colleague from ASU, urban historian Philip Vandermeer, for the first time. It's strange when you have to go to Toronto to meet a colleague from across campus.

This meeting was a very different experience from the SAA in that I know very few people. I'm not sure I'd want to go every year (next year the theme is pluralism, not exactly a major theme of my research). There just aren't enough papers from before the 18th century, or focusing on nonwestern settings. But it has been fun and interesting. I had to overcome my archaeological inferiority complex: What am I doing here with a bunch of heavy-duty social science historians? Why would they care about archaeology? But this is a thoroughly interdisciplinary crowd, and in their estimation archaeology is great if it contributes to answering questions of interest.


The image of a cat next to a net is an in-joke from the Tilly session.


Harris, Richard and Michael E. Smith  (2011)  The History in Urban Studies: A Comment. Journal of Urban Affairs 33(1):99-105.

Smith, Michael E.  (1987)  Archaeology and the Aztec Economy: The Social Scientific Use of Archaeological Data. Social Science History 11:237-259.

Tilly, Charles  (1998)  Durable Inequality. University of California Press, Berkeley.

Sunday, October 19, 2014

Open Access Week

This coming week is "Open Access Week". Check out the central website, Open Access Week. The promise and importance of open access was one of the main reasons I started this blog in 2007. Over the years I think I have grown cynical about the lack of progress in open access on most fronts, but I remain committed to the concept. I was asked by librarian Anali Perry to respond to several questions about open access; my responses (and several others) will be posted on the library website this week. Here are my replies:

 
What is your experience with open access publishing?

I write about open access publishing in my blog, “Publishing Archaeology” (see URL below), and I speak out within my scholarly community (archaeology) through papers and workshops at conferences, publishing in newsletters, and the like. I have posted papers in online open access “journals” (non-peer reviewed). I post most of my papers, somewhat inconsistently, among my personal ASU website, Academia.edu, and the Selected Works site. I like to try out new scholarly programs and sites to see if they are useful for promoting open access and the values and benefits of OA. Academia.edu turned out to be a great site; ResearchGate turned out to be not very useful, with many annoying traits, so I unsubscribed. Selected Works has a very attractive interface, but seems less widely used than Academia.edu and slightly more difficult to use. I have a deep personal and professional commitment to open access (that is one reason I started my publishing blog in 2007), although I have become somewhat cynical over the lack of progress, and even signs of retrenchment, in the past few years.


Do you believe that open access to scholarly research is important? Why or why not?

If scholarly research is important, then open access is important. One does research in order to build knowledge that is communicated to others: colleagues and the public. Open access contributes in a strong way to the basic and fundamental goals of research and publication. Much of my research is funded by U.S. taxpayers, and they have a right to know what I have done with the funds, and to see my results. Traditional publishing in journals used to serve the goals of research/publishing very well, but today with the Internet we can promote the goals and values of research far more widely, and traditional journal publishing only serves a limited sector of our potential audience. Furthermore, commercial journals now serve to limit access to published papers by refusing to engage in open access (without a big fee).

I do research and fieldwork in Mexico. As such I work as a guest of the Mexican government and the Mexican nation. Most of the journals I publish in, however, are not available to my Mexican colleagues or the Mexican public. They are locked behind a pay wall, and people in Mexico (and most of the rest of the world) simply cannot afford the fees required to get access. When Academia.edu and Selected Works provide access statistics, my Spanish-language papers often have a higher download rate than my English-language papers. I interpret this as a function of the lack of availability of journal articles around the world. Most of my U.S. colleagues can get access to online journals through a university website, but that is not true in Mexico. Posting my papers online is the only way around this obstacle, yet that very simple and basic example of scholarly activity—making my own papers available online—is being turned into a crime.


What do you see as the biggest barrier to open access publishing options for scholars?

Let me list three barriers to open access publishing. First, the commercial publishers who lock up published papers behind a paywall are perhaps the largest barrier to open access. Modern academic research is the only realm where one does the work with compensation from the public and with one’s own time and resources, then gives the results away for free to a large corporation, which then profits from one’s work while preventing others from seeing it. Does this sound right? Not to me.

The second barrier to open access is apathy and ignorance among researchers. Most researchers just want to get on with their research without being bothered by setting up websites, posting papers, or dealing with the ethical and professional issues of open access.

The third barrier is universities that fail to recognize the substantial gains they could make if they embraced open access. Few universities have an institutional repository where all papers published by faculty (and students) are archived. While journals have the legal right to suppress the public posting of article pdfs, authors have the right to send pdf reprints to colleagues. The “reprint button” is a way around this barrier: it automates the sending of reprints while avoiding the open posting of pdfs. How would universities (such as ASU) benefit from embracing open access, setting up a repository, and promoting other open access ideas and procedures?

First, research carried out at the university would become better known. Citations will increase (this has been shown quantitatively) and overall familiarity with university research will increase. This promotes science and scholarship and its availability to colleagues and the public. Faculty will benefit from this. Second, by boosting the research profile, open access will increase the prestige of the university and its faculty. More people will see more of the activity taking place at the university. One of the basic missions of universities, creating new knowledge through research, will thus be promoted more explicitly and more intensively. Third, people outside the university will become more familiar with what the university is doing, and the university can thus have a greater impact in the local region. Fourth, the global reach and engagement of the university will be improved, as constituents around the world get better access to the research findings of faculty and students. Fifth, research that is at the cutting edge of individual disciplines, and research that breaks new ground by synthesizing multiple disciplines, will benefit by finding a wider audience, which encourages communication and synergies.

In the case of ASU, these benefits of open access (and this is just a quick off-the-cuff list; there are surely more) fit with many of the principles of the New American University (http://newamericanuniversity.asu.edu/). I continue to be surprised at the lack of action on open access at this university.


What advice or recommendations about open access publishing (or scholarly publishing in general) would you give to early career researchers?

My first piece of advice would be that conducting research and publishing is more important than worrying about open access. I know of at least one colleague who put so much time into an OA project that they failed to produce sufficient scholarship and were denied tenure. Everyone is grateful for this person’s professional contributions, but that person, and probably the discipline generally, would have been better off if they had spent more time getting their own scholarship in order. That said, one rarely has to make a stark choice between basic scholarship and OA activities. I would advise early career researchers to make their publications available in one or more repositories or websites. Publish in OA journals, and agitate within professional societies for OA policies and practices. Young scholars are generally highly media savvy, and they should explore the growing number of options for scholarly communication, including OA and OA-related activities.

Resources:

My blog, Publishing Archaeology:  (http://publishingarchaeology.blogspot.com/)
An online, open-access paper:  Smith, Michael E.  (2011)  Why Anthropology is too Narrow an Intellectual Context for Archaeology. Anthropologies 3 (online).  http://www.anthropologiesproject.org/2011/05/why-anthropology-is-too-narrow.html

My personal website:  http://www.public.asu.edu/~mesmith9/
My site on Academia.edu:  https://asu.academia.edu/MichaelESmith
My site on Selected Works: http://works.bepress.com/michael_e_smith/





Sunday, October 12, 2014

How to make a weak argument

Suppose you are writing up some archaeological results. You will be making a bunch of arguments: statements that draw on data and theory to reach some conclusion of interest. Most works contain a number of arguments, often at different levels. For example, you may claim that you found 41 pieces of obsidian in the lowest level and only 14 in the uppermost level. This is an argument, but it is not a particularly interesting one. You may later make a more interesting argument suggesting that the decline in obsidian was due to changing commercial routes that now avoided your site, or perhaps you will argue that the decline came from a reduction in blood-letting rituals that employed obsidian blades.

Now, suppose you decide that you want to make a weak argument that few of  your colleagues will find convincing. While I am of course being sarcastic here, as I was in my post, "How to give a bad conference paper", this is a serious point. Why? Because it often seems that archaeologists must at some level be making this decision. They make weak arguments. So, the purpose of this post is to help them out by reminding them of tips and tricks to make bad arguments. I will mostly provide links to past blog posts where I discuss these issues.

(1) Use analogy incorrectly.


Do not develop a formal argument by analogy, based on a sample of source examples and carefully extrapolated to your archaeological data. Ignore Lewis Binford's suggestion to treat an analogy as a hypothesis to test, and whatever you do, make sure you avoid Alison Wylie's brilliant and definitive discussion of the role of analogy in archaeology. Cherry-pick one analogical case from somewhere in the world and claim that it supports your case.

You could check out my previous post on this topic, although be warned that it is a reverse argument: it assumes that you might want to make strong arguments and use analogy well.

(2) Make post-hoc interpretations.


Don't bother to set up initial hypotheses or expectations. Who knows what you will find when you dig into the ground, anyway? Do your fieldwork or lab analysis, then scratch your head and try to dream up a nice-sounding interpretation. Slap some currently fashionable idea onto your data, and voila, you are done.

You could check out my earlier post on post-hoc arguments, or the one on trying to prove that you are wrong.

(3) Use empty citations to back up your shoddy scholarship.


Don't use citations to other works to supply data and cases that provide a foundation for your arguments. Instead, cite sources that have no empirical data but offer opinions and speculations that agree with your argument. Avoid citing studies with data that go against your views or models; instead cite those that agree with your ideas but lack any data. These are called empty citations.

You can check my prior post on this topic, and please follow the links there to Anne-Wil Harzing's original discussion of empty citations. Oops, I am being straight here, not sarcastic.


I find it really depressing that the archaeological literature (particularly the archaeology of complex societies) is so full of weak arguments. This acts to prevent the development and accumulation of reliable archaeological findings, which impedes the empirical advancement of our field. The sloppy use of analogy, post-hoc interpretations, and empty citations are all part of the picture. We need to get our act together. If you have not read and carefully studied Wylie (1985), you should do that immediately. And then check out chapter 7 of Booth et al. (2008). Check out some of the methodological works from the social scientists I cited in my previous post. And finally, read my article on this topic (once I get around to writing it.......)

Booth, Wayne C., Gregory G. Colomb, and Joseph M. Williams  (2008)  The Craft of Research. 3rd ed. University of Chicago Press, Chicago.

Wylie, Alison  (1985)  The Reaction Against Analogy. Advances in Archaeological Method and Theory 8:63-111.


Tuesday, October 7, 2014

How would you know if you are wrong?

I haven't been posting lately. I've been busy running a bunch of research projects, and I'm teaching a new grad seminar on theory in archaeology. We've finished with the epistemology part of the class (what is theory? how do you construct a good argument? what is an explanation? how should you use analogy?), and have started on the theory part. We are focusing on theory that can be applied archaeologically, and on how one goes about applying theory.

One benefit of the epistemology part of the class is that it has helped me organize my thoughts, and given me a better understanding of just what is wrong with much of the work published in archaeology today. In short, many archaeologists don't know how to make a solid argument. I've talked about this previously. They don't know how to put data together with theory to reach a rigorous conclusion about what likely happened in the past and why it happened. So, I plan to write an article about this problem. Right now I view the paper partly as instructions for students (here is how to make a good argument, and here are some pitfalls to avoid), and partly as a critique for the profession. One difficulty in writing this paper will be how to handle examples. Maybe I am getting soft in my old age, but I am not anxious to get a bunch of colleagues pissed off at me for featuring their work as negative examples. I seem to do a pretty good job of annoying people without deliberately poking a bunch of others with a stick. But would a journal editor accept an article about methodological and epistemological problems that did not include a lot of real examples?

Anyway, I think I'll use this blog to organize some of my thoughts for my paper on arguments. So let's start with a quote from economic historian Steve Haber:



  • “the fundamental question of all serious fields of scholarly inquiry [is]: How would you know if you are wrong?” (Haber 1999:312)
If you look at just about any social science methodology textbook, you will find discussions of this point: Gerring (2007:74-75), Ragin and Amoroso (2011:39), Luker (2008:53). Research should be framed in such a way that one's argument or hypotheses can be wrong. This is a kind of relaxed version of Karl Popper's (1934) criterion of falsifiability. Popper had a very strict concept of hypotheses that could be falsified with a crucial experiment. Subsequent philosophers of science showed that his scheme was too rigid for the social sciences. Nevertheless, it is still important that you construct research and arguments that can be shown to be incorrect.

Here is how Andrew Abbott (2004) discusses the problem of ensuring that you can be wrong:


  • “it is surprising how many researchers—even graduate students in their dissertations—propose arguments that can’t be wrong. For example, research proposals of the form, ‘I am going to take a neo-institutionalist view of mental-hospital foundings’ or ‘This paper analyzes sexual assaults by combining a Goffmanian account of interaction and a semiotic approach to language’ are not interesting because they do not propose an idea that can be wrong. They boil down to classifying a phenomenon or, seen the other way around, simply illustrating a theory.” (p.216)

Abbott goes on to remark that,


  • “Thinking without alternatives is a particular danger in ethnography and historical analysis, where the natural human desire to develop cohesive interpretations (and the need to present a cohesive interpretation at the end of the research) prompts us to notice only those aspects of reality that accord with our current ideas.” (p.216)

Does this sound familiar? As archaeologists we certainly want to write cohesive narratives, and this desire may lead us away from considering alternative ideas. One of the problems with using post-hoc interpretations, or what Lewis Binford called post-hoc accommodative arguments (see my prior discussion here), is that they can't be wrong. They are made up after the fact, after the research is done, and they are designed to fit the results you found. So almost by definition, such interpretations must be correct. They fit the data, and you haven't compared them to any other interpretation. Now, there is nothing wrong with devising some interpretations after all the facts are in. But such interpretations should serve as input for further research that is designed to test them against new data. Otherwise they will remain particularly weak arguments.

So, how do you avoid doing research and constructing arguments that can't be wrong? Here are two suggestions:

(1) Devise multiple working hypotheses, and test them all. The one that best survives its battle with the empirical world, the one that is the last hypothesis still standing, is your best explanation. This concept of "multiple working hypotheses," usually with a mandatory citation of Chamberlin, comes up occasionally in archaeology. In fact, this method is part of the approach known as "strong inference." Want to hear more? Stay tuned. This will be one subsection of my paper, and I'll talk about it here before too long.

(2) Choose your theory wisely. If your theory is very abstract and philosophical, at a high epistemological level, then you can't test it against data, and you can't show that your interpretation is wrong. On the other hand, if your theory is of the middle range or low range (what I have called "empirical theory" - Smith 2011), then it CAN be tested against the empirical record, and it can be falsified. This is another well-established point in social science epistemology (see, for example, Ellen 2010, or see the discussion in my 2011 paper), although the post-processualists don't believe that there are different epistemological levels of theory (see footnote 4 to my 2011 paper on this).

For example, suppose I am working on regional ceramic exchange in Postclassic central Mexico. I could decide to test a middle-range theory, such as the proposition that "increasing involvement in the long-distance exchange networks of a commercialized economy will lead to higher levels of local and regional exchange at my site." I know from past work, and work in other areas, that the Late Postclassic period was a time of growing commercialization and increasing long-distance trade throughout Mesoamerica. I find more long-distance imports in my site through time. So I do some petrographic analysis and INAA of a sample of sherds from different time periods. I may find my hypothesis supported (that is, there are higher frequencies of regional imports through time), or I may find that I was wrong (frequencies of local/regional imports declined when I expected them to increase). This kind of research, using middle-range theory, can lead to results that either confirm or deny my initial hypothesis. (Or it may lead to complex results that leave me scratching my head.....)
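To make that logic concrete, here is a minimal sketch in Python of how such a middle-range hypothesis could be confronted with sherd counts. The numbers, period labels, and the choice of a chi-square test are illustrative assumptions on my part, not a description of my actual analysis; the point is simply that the result can come out either for or against the hypothesis.

    # A minimal sketch with hypothetical sherd counts (not real data).
    # It asks whether the proportion of regional imports is independent of period.
    from scipy.stats import chi2_contingency

    # Hypothetical counts by period: [local wares, regional imports]
    observed = [
        [420, 35],   # Early Postclassic
        [390, 61],   # Middle Postclassic
        [405, 98],   # Late Postclassic
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")

    # If p is small and the import proportions rise through time, the hypothesis
    # is supported. If the proportions are flat or decline, I was wrong. Either
    # outcome is informative, which is the point of "how would you know if you
    # are wrong?"

Either way the test comes out, I learn something about the hypothesis; that is what makes it a middle-range proposition rather than an untestable abstraction.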

Now, consider an alternative use of theory. Suppose I choose to use a high-level abstract theory: post-structuralism. The growing influence of commercial exchange created social contradictions and tensions in local society. Political institutions at my site were instantiations of socially embedded practices. People negotiated their identity by using ceramics from different places of origin in arenas of display and tournaments of value. My analytical tests reveal the places of origin of regionally imported ceramic types, consistent with the notion of a fragmented and contested, yet socially embedded, economy. Now, can this interpretation be wrong? How could I convince a skeptic that it is correct and supported by data? High-level, abstract social theory simply cannot be tested, and it cannot be directly evaluated with data.

Anyway, sorry to go on at such length. I'm sure that clever social theory types will recognize my post-structuralist scenario as a bunch of gobbledygook. But I think this issue of "how would you know if you are wrong?" lies at the heart of many problems of research and writing in archaeology today. Now, maybe some archaeologists don't see the need for rigorous testing of hypotheses or for designing arguments that can be proven wrong. Maybe a scientific epistemology is not valuable to these people. Maybe they don't want to be competitive for grants from the National Science Foundation. Fine. But the rest of us should strive to make better arguments. If you haven't read Booth et al. (2008), that is probably the best place to start.
 
My paper about arguments will also cover good and bad uses of analogy, the argument template contained in Booth et al. (2008), empty citations, natural experiments, strong inference, causal mechanisms, and perhaps even Monty Python's Argument Clinic (I've always wanted to cite a Monty Python sketch in a serious scholarly paper, and this may be my best chance yet).
Abbott, Andrew
2004    Methods of Discovery: Heuristics for the Social Sciences. Norton, New York.

Booth, Wayne C., Gregory G. Colomb, and Joseph M. Williams
2008    The Craft of Research. 3rd ed. University of Chicago Press, Chicago.

Ellen, Roy
2010    Theories in Anthropology and "Anthropological Theory". Journal of the Royal Anthropological Institute 16: 387-404.

Gerring, John
2007    Case Study Research: Principles and Practices. Cambridge University Press, New York.

Haber, Stephen
1999    Anything Goes: Mexico's "New" Cultural History. Hispanic American Historical Review 79: 309-330.

Luker, Kristin
2008    Salsa Dancing Into the Social Sciences: Research in an Age of Info-glut. Harvard University Press, Cambridge.

Popper, Karl R.
1934    The Logic of Scientific Discovery. Harper and Row, New York.

Ragin, Charles C. and Lisa M. Amoroso
2011    Constructing Social Research: The Unity and Diversity of Method. 2nd ed. Sage, Thousand Oaks, CA.

Smith, Michael E.
2011    Empirical Urban Theory for Archaeologists. Journal of Archaeological Method and Theory 18: 167-192.