Recently, the whole country of Egypt (according to news headlines) banned Ridley Scott’s new film, Exodus: Gods and Kings.
Aside from the flagrant and frustrating use of a post-colon reminder that the story itself involves some sort of ‘vs.’ between ‘Gods’ and ‘Kings,’ the film itself looks like nothing more than another live-action retelling of a Biblical account. Definitely not something worth banning. In fact, when we watched it the other day, in a rather empty cinema here in Edinburgh, it served its purpose well as an entertaining and rather well-acted film that not only kept me busy for a few hours, but also gave me a reason to regale my viewing partner with my vast knowledge about the original story. It’s little victories like this, little boosts of the academic ego, that really make life worth living.
So for me, the film was rather innocuous. A fun retelling by a director (who, in the world of Hollywood, seems to get both credit and blame for an undertaking that involves hundreds of people working side by side for years with the singular goal of producing a finished product) whose previous films I’ve enjoyed regardless of their critical successes/failures (for example, take a look at A Good Year; critically disliked, but I’d argue thoroughly enjoyable). Yes, God is a snarky and bratty little English boy who seems frustrated to the point of tantrum. Yes, most (though not all) of the actors speak with some sort of English/Australian accent hybrid (it’s a sand and sandals film, after all). Yes, John Turturro (the Jesus himself) plays Seti I, father of Ramses II, pharaoh of Egypt. Even Sigourney Weaver shows up for a few minutes, playing Tuya, Ramses’ vindictive and angry mother, before vanishing somewhere into the tapestry of the glorious set-pieces. Yes, the majority of the actors are Caucasian, as if to remind the viewers (as Ridley Scott sort of did) that for a blockbuster film to be successful, one needs blockbuster actors, who happen to be white (for example, look at Yul Brynner in The Ten Commandments, or The King and I, or Kings of the Sun, or just about anything else he was cast in). Yes, the plagues are rationalised by Ewen Bremner’s ‘Expert’ as the result of a natural phenomenon begun by hungry and destructive crocodiles; and the parting of the Red Sea occurs after what we might presume is a meteorite creating a tsunami that then retracts the water far enough to allow the Israelites safe passage across. Is all of this reason enough to ban the film? Perhaps for some, but then again my perception of the events depicted within does not carry the same sort of meaning it does for others.
My greatest curiosity, then, about this style of criticism has to do with why individuals seem to take blockbuster films such as this so seriously, as if Scott’s Exodus is somehow supplanting the one in the Bible. We heard a bit of this a while back with Darren Aronofsky’s Noah which, aside from depicting a horde of fallen angels as giant rock creatures that assist Noah and his family in building the Ark, also did what I think is an incredible job of merging the Biblical account of creation with images of the grandeur of evolution.
Yet, where I would draw the line between Noah and Exodus, aside from the artistic views of those involved in either, is the statement made by Russell Crowe’s Noah at the start of the above clip: “I’m going to tell you a story.” Aronofsky knows that the Noah story is nothing more than that, and admits it. Yet he likewise tells us that the evolutionary progress from big bang to Homo sapiens is nothing more than a story as well.
When we consider this alongside Scott’s Exodus, we might come to similar conclusions: these are stories, both film and Biblical myth in equal measure, so that when they are retold like this, when they part ways from the original, perhaps the anxiety or disappointment felt by certain individuals is the worry that an inaccurate or ‘creative’ retelling betrays the original’s fallibility. That is, if Ridley Scott can re-write the story so easily, then is the original nothing more than a template, a plastic and bendable thing able to be re-created, and thus void of what we might perceive as some sort of ‘sacred’ something? Again, I would say yes and no. After all, it all depends on the individuals involved and the discourse being used and/or amended. Case in point: when the critically disliked and epic-looking Troy came out a few years back, with its predominantly white cast and highly adapted re-telling of the Trojan war (which ‘historically’ took place around the same time as the Exodus out of Egypt), the film wasn’t banned for its inaccuracies or insults to history. It was merely considered a bad adaptation of a myth.
Within the altarpiece of the Church of Our Lady in Bruges, Belgium, there is a renowned sculpture of the Madonna and Child by Michelangelo. For two Euros, you may see it for yourself. One rather remarkable part of this experience is just how cold it is within this marble space, cold enough that you can see your breath in the air and that the guard stations are equipped with heat lamps. Yet, this was not my immediate line of thinking. Nor was it the beauty of the church or Michelangelo’s marble. In fact, my bizarre train of thought—which I later divided into three interwoven sections—was as follows:
It’s rather cold in here.
That Baby Jesus is naked, that must be unpleasant.
I think that Baby Jesus is uncircumcised, that’s odd.
This was immediately followed by a number of quasi-academic inquiries:
Should he, as a Jewish boy beyond eight days old, be snipped?
As well, why did this seem familiar to me?
When I voiced these sudden and reactionary thoughts out loud, my traveling companion was not altogether impressed by my criticism. She did, however, oblige enough to take a picture.
Later, as we were admiring the chocolate nativity in Burg Square, just next to the Basilica of the Holy Blood where, twice daily, you can view a reliquary that holds the blood of Christ collected by Joseph of Arimathea at Christ’s crucifixion, it suddenly dawned on me where I might have made the connection between the poor, cold, and uncircumcised Baby Jesus and previous penile confusion.
It all comes back to Michelangelo’s David. This massive statue of the soon-to-be King of Israel, housed at the end of a long corridor in the Accademia di Belle Arti di Firenze, is renowned for its detailed depiction of the young shepherd as he prepares to take on Goliath. For centuries, students and artists have studied Michelangelo’s ability to capture in chiseled stone the contours and fluid movement of muscle and tissue.
Yet, just like his Baby Jesus in Bruges, there is the obvious—more so, in fact, by mere size—foreskin. Why is David, the Messiah, the Anointed One, uncircumcised? Is this not a horrific mistake, an overlooked bit of anatomical inaccuracy? Does this not, then, cause us to question the validity of Michelangelo’s depiction as nothing more than an incorrect alteration of Biblical ‘fact?’ Perhaps yes and no.
In fact, it seems less a specific amendment to the Biblical depiction of David, and perhaps more a product of Michelangelo’s environment. The model, after all, was a Florentine youth, which accounts for the haircut and features. As well, the reigning Catholicism of Michelangelo’s day was less concerned with the rites of circumcision than with what the story tells us about Christ’s lineage. For further details, one might peruse some of the literature on this subject.
For this post, the point that I’d like to make once more deals with interpretation and ‘fact.’ Clearly, Michelangelo’s depictions of Jewish men are incorrect in that he has failed to represent them as they originally were. Rather, he has given them his own ‘spin.’ Thus, his interpretations have ‘re-written’ historical details. That is, if we take these depictions as representative of actual people in actual places at actual times, they give us the wrong information which, over time, might shift into being the ‘right’ information as we move further and further away from the context within which they were made. This depicts a sort of ‘shrinkage,’ the dwindling away of intention coupled with the building up of inaccuracy. It is in this way that we, the interpreters of interpretation, begin to re-write history, even if by mistake.
If anything, David’s penis reminds me to proceed with caution, to view these sorts of fictions carefully, and to always be wary of where in these representations the line between fictions made-up and fictions made-from might be blurred.
Ever since I first learned of the term, I have not been the most avid fan of ‘non-religion.’ It’s always felt a bit too general, a little too ambiguous, and fairly equivocal in its meaning. Perhaps my greatest critique, though, is its use of ‘religion.’ As a relational term, the ‘non-religious’ individual is defined by their relationship to ‘religion’ which, for quite some time now, has been a term we just can’t seem to define with any certainty. So, for me, using ‘non-religion’ is like saying we’ve somehow figured out what ‘religion’ is, even if that just reflects our acceptance that it is a category ‘defined’ in yet an equally broad or general manner. One of my favorite requests of colleagues who use it, then, is to provide a definition of religion against which they are using ‘non-religion’ relationally. This has provided fun discussions, and at times erudite descriptions and defenses. I’m still not quite convinced.
While this post is about my dislike of ‘non-religion,’ it is also a criticism of the discourse within which the term ascended: the theoretical approach of defining and examining tricky terminology by creating, using, and promoting new terms, which I discussed briefly in last week’s post on Rumsfeldian Atheism. So, while ‘non-religion’ might seem to get the brunt of my discussion here, it is also aimed at terms like ‘ir-religion,’ ‘un-belief,’ or ‘positive and negative’ Atheism. To borrow their own language, then, I am using ‘non-religion’ here in a relational manner, allowing it to stand in as the direct representative for what I determine as ‘terminological abstractions.’
Which brings us to this post, and a look back. My first face-to-face encounter with ‘non-religion’ was at the Non-Religion and Secularity Research Network’s conference in 2012, held at Goldsmiths, University of London. I was very new to the field, and thus a bit ill-prepared, so my attempt to criticize the term itself was perhaps a bit too mired in tangential humor. However, I still think the argument stands, which is why it is presented herein. First, though, and before delving into my criticism, I believe ‘non-religion’ deserves a fair introduction, which I present here with minimal commentary.
The term itself, upon which the NSRN has built its foundation, stems from Lois Lee’s doctoral thesis, “Being Secular: Towards Separate Sociologies of Secularity, Nonreligion, and Epistemological Culture,” as well as a number of subsequent publications. However, for the definition of ‘non-religion’ I will be using two sources connected to the NSRN, one from a description of their research agenda, and the other from their glossary of terms.
From the ‘about’ section of their page:
The two concepts of nonreligion and secularity are intended to summarise all positions which are necessarily defined in reference to religion but which are considered to be other than religious (see Lee, 2012). Thus, the NSRN’s research agenda is inclusive of a range of perspectives and experiences, including the atheistic, agnostic, religiously indifferent or areligious, as well as most forms of secularism, humanism and, indeed, aspects of religion itself. It also addresses theoretical and empirical relationships between nonreligion, religion and secularity.
From the glossary:
Something which is defined primarily by the way it differs from religion. Examples might then include atheism, ‘indifference’ to religion, and agnosticism. Humanism would not be an example (although empirical cases of humanism may well be considered profoundly nonreligious in practice). Alternative spirituality would not be included where this spirituality is defined fundamentally by its autonomous principles and practices.
With these two examples we get a better idea about why the term itself was constructed and how it might be made useful. They also illustrate what I feel is the ‘double-edged’ issue of using this sort of terminology. On the one hand, it provides a pragmatic, even practical, signifier that can summarize and house any and all sorts of relatable concepts under a general canopy. In this way, when we discuss individuals who share ideologies such as ‘Atheism’ or ‘agnosticism’ or ‘humanism,’ but do not wish to be labeled as such, using a term like ‘non-religion’ alleviates the issue of externally defining an individual rather than simply allowing them to internally define themselves. This, perhaps, works best when conducting sociological or survey-based quantitative research. On the other hand, though, using a general term, even in all its practicality, might create larger issues concerning clarity. As well, and as I noted in my introductory critique, it also leads to a somewhat normative notion about what we mean by ‘religion.’ This, perhaps, is more problematic when conducting qualitative research.
So, while I definitely see the merits in using such general terminology, I still believe the bad outweighs the good. Moreover, I have frequently felt that constructing a new term, rather than focusing on a singular term that would then contribute to the discourse being formed by our collective examinations, seemed more like an impractical abstraction. Classifying all of us under a canopy might make practical sense in a sociological manner, but for the sake of clarity—perhaps even ethnographic clarity—this sort of generalization does more harm than good.
This argument formed the root of my presentation at the NSRN conference, which, tangential and anecdotal nonsense aside, I hope makes better sense of my argument.
In 1877 Othniel Charles Marsh, a professor and paleontologist at Yale University, documented and published the discovery of a number of large vertebrae that he assigned to the sauropods. He named this specimen Apatosaurus, or ‘deceptive lizard.’ Soon after, he documented another find, the largest and most intact fossilized remains of any sauropod yet discovered. He named this one Brontosaurus, or ‘thunder lizard.’ While this might seem like an innocuous series of events, the discovery of these two dinosaurs speaks directly to the issue of terminological disparity, mostly because the latter dinosaur, Brontosaurus, never technically existed. What Marsh labeled as an entirely new species—Brontosaurus—was really just an adult specimen of the previously discovered Apatosaurus. The Brontosaurus, in other words, has always been an Apatosaurus.
While on the surface this presents an issue of taxonomic accuracy, which I will discuss below, the underlying problem concerning accuracy doesn’t become a major issue until a century later, in October 1989. In that year, and as a promotional ‘tie-in’ with the video cassette release of Universal Pictures’ The Land Before Time, the United States Postal Service released four ‘dinosaur stamps’ with the images of a Pteranodon, Tyrannosaurus Rex, Brontosaurus, and Stegosaurus.
For the Postal Service, these stamps were meant to provide more scientific depictions of the dinosaurs featured in the film. For the scientific community, however, they merely represented a misguided insult. Not only did they ignore the fact that the Pteranodon was not a dinosaur, but their continued use of ‘Brontosaurus’ demonstrated an allegiance to familiarity rather than accuracy. After all, these were teaching aids, and they were teaching the wrong information.
Of course, the US Postal Service is not alone in its guilt. This is an issue that has carried on worldwide, demonstrating a discursive allegiance to the generally familiar mistake. For example:
This becomes an especially troubling issue when one considers the role commercial marketing plays in the discursive construction of conceptual identities. Consider, for example, the beloved ‘Littlefoot’ in The Land Before Time, and the 13 sequels that have perpetuated his likeness as a ‘Brontosaurus.’
Then again, the blame of perpetuating this mistake is not solely the fault of stamps and blockbuster animated films.
In fact, the popular misidentification of Brontosaurus has been happening since 1914’s Gertie the Dinosaur helped spawn a number of Lost World themed comics all depicting a sauropod titled ‘Brontosaurus.’
Equally guilty is the marketing campaign of Sinclair Oil, which has used the image and name of the Brontosaurus since their two-ton animatronic sauropod was unveiled at the Chicago World’s Fair in 1933, and then recycled for the New York World’s Fair’s Dinoland in 1964.
Walt Disney, of course, also had a hand in furthering this mistake, specifically through the use of the term ‘Brontosaurus’ in 1940’s Fantasia, a film that not only perpetuated the incorrect name, but also featured a battle between a Tyrannosaurus Rex and a Stegosaurus, an impossible interaction as the latter had been extinct for at least 80 million years before the former ruled the Cretaceous period.
Even today, this controversy carries on in books and toys and hideous t-shirts, proving that when marketed properly, an incorrect term can overpower and even supplant an accurate one.
While I might conclude here, using the metaphor of the perpetuation of an incorrect, yet popularized term as a warning about the use of constructed definitions for the sake of generality, it is the genesis of this disparity, not just the disparity itself, that I believe offers an even clearer argument.
The Bone Wars
Between 1872 and 1892 two men, Edward Drinker Cope and Othniel Charles Marsh, vied for paleontological superiority, going to outrageous—almost comical—lengths to out-accomplish one another with discoveries and publications. They lied about their findings, stole specimens, sabotaged each other’s digs, and forged their data. They constructed whole skeletons using composite techniques, combining fossilized remains from completely unrelated sources and mixing bones of different age, sex, and species to create a more complete—and generalized—specimen. For example, Marsh used the skull of a Camarasaurus to complete the incomplete skeleton of his Brontosaurus, altering the way he and other paleontologists assessed the eating habits and environments of his greatest find.
Brontosaurus body with Camarasaurus head
Brontosaurus body with Apatosaurus head
Moreover, this led to a vague description and drawing, further occluding the facts about the correlation between Apatosaurus and Brontosaurus:
Brontosaurus excelsus, gen. et sp. nov.
“One of the largest reptiles yet discovered has been recently brought to light, and a portion of the remains are now in the Yale collection. This monster apparently belongs in the Sauropoda, but differs from any of the known genera in the sacrum, which is composed of five thoroughly co-ossified vertebrae. In some other respects it resembles Morosaurus. The ilium is of that type, and could hardly be distinguished from that of M. robustus, excepting by its larger size. One striking peculiarity of the sacrum in the present genus is its comparative lightness, owing to the extensive cavities in the vertebrae, the walls of which are very thin.
The lumbar vertebrae have their centra constricted, and also contain large cavities. The caudals are nearly or quite solid. The chevrons have their articular heads separate. The sacrum of this animal is, approximately, 50 inches (1.27 m) in length. The last sacral vertebra is 292 mm in length, and 330 mm in transverse diameter across the articular face. A detailed description of these remains will be given in a subsequent communication. They are from the Atlantosaurus beds of Wyoming. The animal was probably seventy or eighty feet in length.”
As might be expected from this sort of confrontation, their feud bred factions, so that the next generation of paleontologists, whose job it was to make sense of this chaos, took up sides within either camp.
One of these individuals was Henry Fairfield Osborn, a contemporary of Cope’s, who took it as his personal duty to destroy Marsh’s reputation and undermine all of his findings, particularly his sauropod specimens. To do this, he divided Marsh’s collections into synonymous taxonomies, using terminology that seemed similar, but still different, so as to deconstruct the larger concept into something that appeared otherwise ambiguous or dubious. What this also meant was a shift in terminology, not only removing Marsh’s influence in how these specimens were labeled, but altering them in such a way as to support his own stipulations.
Later, and in order to condense Osborn’s taxonomies into something more cohesive, Elmer Samuel Riggs conducted his own survey, concluding even more decidedly—and objectively—that many of the discoveries made by both men were synonymous. Most pertinent to this discussion, he proclaimed with finality that Marsh’s notorious Brontosaurus was not in fact a unique species, but rather a mislabeled adult skeleton of the previously discovered Apatosaurus.
After examining the type specimens of these genera, and making a careful study of the unusually well-preserved specimen described in this paper, the writer is convinced that [Marsh’s] Apatosaur specimen is merely a young animal of the form represented in the adult of the Brontosaur specimen. …In fact, upon the one occasion that Professor Marsh compared these two genera he mentioned the similarity between…their respective types. In view of these facts, the two genera may be regarded as synonymous. As the term “Apatosaurus” has priority, “Brontosaurus” will be regarded as a synonym 
With just a few sentences, Riggs made the closing statement on the issue of the Brontosaurus, demoting it from an identified thing to a synonymous mistake.
Yet, even though attempts at correcting this inaccuracy are constant reminders of the Apatosaurus’ true identity, Brontosaurus still lives on. This is perhaps mostly the result of public discourse, of the way a term is consumed and propagated, and thus crystallized by its very usage. It is also, I might add, a warning against using synonymous—generalized—terminology in place of more correct terms.
One might think that this critical little anecdote about the dangers of terminological creativity is my attempt at promoting the term ‘Atheism’ above the term ‘non-religion.’ This would be, as I hope to elucidate, an incorrect perception. My criticism is made here not to promote my own work, but to suggest a bit more caution.
That is, I would argue that the ‘bone wars’ represent an ideal correlation to the discourse that develops out of an emerging field, such as the study of Atheism, non-religion, humanism, secularity, etc. Likewise, I think it in many ways echoes the difficulty in attempting to find a singular group identity out of the variants that we produce in our research. Like the larger field of Religious Studies, we are each providing a discursive sample of a larger entity, so that a general definition, such as ‘non-religion,’ though pragmatically used to provide a canopy under which we might all co-exist, is just as disparaging as generalizing the term ‘religion.’ Of course, one might then argue that even when we are actually researching something quite unique in the larger field of Religious Studies, we are still doing so under the canopy of a pragmatically ambiguous ‘religion.’ With which I agree. However, I do not see this as the end result of using the term ‘non-religion.’ Mostly, this is because our acceptance of the term ‘religion’—though not everyone has accepted this—comes with the caveat that we have progressed along a distinct track, beginning with sui generis notions about the substantive vs. functionalist quality of ‘religion,’ and arrived at a point with no real conclusive and final ‘definition.’ Which is the point, I think. For this reason, I avoid using the term ‘non-religion’ because I do not believe adding a further ambiguous term to our discourse provides any sort of assistance in the process. Does this mean the study of Atheism, non-religion, humanism, secularity, etc., falls under the canopy ‘religion?’ I’d say yes. Which is likely where I separate myself from the NSRN.
So, in the end, this discussion is not so much about my issue with using the term ‘non-religion’ as a replacement for terms such as ‘Atheism,’ but is rather an argument that a synonymous umbrella is not really all that necessary. After all, we have at least a vague idea about what a ‘dinosaur’ is, even when that concept is amended and altered and changed within the discourse on what might constitute an Apatosaurus or Brontosaurus. Like ‘religion,’ ‘dinosaur’ is a fluid, plastic term, a discursive entity that does not need to be defined, but that is instead imbued by the discourse on entities like Apatosaurus or Brontosaurus. For me, this works for ‘Atheism’ and ‘religion’ as well. It might not work for everyone, which I accept. Yet, I’d much rather contend with the disparity between ‘Atheism’ and ‘religion’ than place myself under a terminological umbrella that seems like an established concept merely given a new name. That’s a bit too much like calling an Apatosaurus something it isn’t.
 See also Lois Lee, “From Neutrality to Dialogue: Constructing the Religious Other in British Non-Religious Discourses” in Maren Behrensen, Lois Lee, and Ahmet S. Tekelioglu, eds., Modernities Revisited (Vienna: IWM Junior Visiting Fellows’ Conferences 2011); Lois Lee, “Research Note: Talking about a Revolution: Terminology for the New Field of Non-Religion Studies” (Journal of Contemporary Religion, Vol. 27, No. 1), 129-139; and Stephen Bullivant & Lois Lee, “Interdisciplinary Studies of Non-Religion and Secularity: The State of the Union” (Journal of Contemporary Religion, Vol. 27, No. 1), 19-27.
A few years back a friend asked if I wanted to be part of a panel he was organizing for the Sociology of Religion Study Group (SOCREL) conference held at the University of Chester. At the time, I had never attended an academic conference, and was keen on developing my CV, so my emphatic and immediate agreement to participate somewhat overshadowed the fact that I was a bit out of my depth. As I would later discover, the topic of the panel was to be ‘Conspiracy Theories and Religion,’ a topic about which I knew very little beyond the few aspects that might have inadvertently popped up during my master’s research on New Religious Movements. Therefore, and in an effort to quickly cobble together some sort of correlative connection between Atheism and Conspiracy Theories, I threw together the following theoretical approach. In the years since, I had mostly forgotten about this theory, until I was reminded of it by a recent Facebook discussion pertaining to Pascal’s Wager.
As a reminder, and which will become important shortly, this ‘wager’ is one of many that make up the mathematician Blaise Pascal’s pragmatic approach to the existence of God. To summarize, it can be divided into four conclusions that lead to either infinite or finite results:
If an individual believes that God exists, and God does exist, that person achieves an infinite result: Heaven.
If an individual does not believe that God exists, and God does exist, that person achieves an infinite result: Hell.
If an individual believes that God exists, and God does not exist, that person achieves a finite result: neither reward nor punishment.
If an individual does not believe that God exists, and God does not exist, that person achieves a finite result: neither reward nor punishment.
In conclusion, Pascal ‘wagered’ that a life lived in the belief that God existed, whether or not He actually did, would lead both to a life lived in happiness on earth—without persecution, etc.—and to a life lived in heaven. If it turned out that God did not exist, then the individual who had lived as if He did would experience no real loss. On the other hand, were God to exist, the individual who did not believe so, and lived as such, would be subject to unhappiness on Earth, as well as in Hell. This led to the argument that the former outweighed the latter in terms of a pragmatic and happy life.
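Pascal’s four conclusions can be sketched as a small decision table. This is only an illustrative model, not Pascal’s own formulation: the choice labels and the use of infinite versus zero utilities are my own assumptions, standing in for his ‘infinite’ and ‘finite’ results.

```python
# A minimal sketch of Pascal's wager as a payoff table (illustrative only).
# float("inf") stands in for the 'infinite' result of Heaven, float("-inf")
# for Hell, and 0 for the finite outcomes (neither reward nor punishment).

OUTCOMES = {
    ("believe", "God exists"): float("inf"),       # infinite result: Heaven
    ("not believe", "God exists"): float("-inf"),  # infinite result: Hell
    ("believe", "God does not exist"): 0,          # finite result
    ("not believe", "God does not exist"): 0,      # finite result
}

def worst_case(choice):
    """Return the worst outcome a choice can yield across both states."""
    return min(v for (c, s), v in OUTCOMES.items() if c == choice)

# Pascal's conclusion: belief's worst case (0) beats disbelief's worst
# case (-inf), so the wager favors belief regardless of whether God exists.
best = max(("believe", "not believe"), key=worst_case)
print(best)  # -> believe
```

The `worst_case` comparison mirrors the wager’s logic: one picks the option whose worst possible outcome is least bad, which is why belief wins on either state of the world.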
The link between this sort of thinking and the ‘Rumsfeldian Atheism’ I will define below can be made via similar logical conclusions. However, how this sort of logic might also assist us in making sense of how we might define Atheism as exhibiting differing types of ‘Atheisms’ is a bit more difficult.
A little background, then, in two parts.
As the United States Secretary of Defense under George W. Bush, Rumsfeld had the difficult job of justifying a ground incursion on Iraqi soil. This was a particular issue because the reasons he had stated before—evidence of weapons of mass destruction in Iraq—lacked evidential proof, and were thus unverified. Therefore, to further justify what would come to be known as the ‘Bush Doctrine,’ Rumsfeld made the argument that the lack of evidence for something did not mean that the something did not exist. In other words: an absence of evidence was not the evidence of absence. This argument, as we soon discovered, reasoned the utility of a pre-emptive strike, an incursion made to root out threats before they could be actualized. His argument, though quite logorrheic, is as follows:
Now what is the message there? The message is that there are known ‘knowns.’ There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. So when we do the best we can and we pull all this information together, and we then say well that’s basically what we see as the situation, that is really only the known knowns and the known unknowns. And each year, we discover a few more of those unknown unknowns.
There’s another way to phrase that and that is that the absence of evidence is not evidence of absence. It is basically saying the same thing in a different way. Simply because you do not have evidence that something exists does not mean that you have evidence that it doesn’t exist. And yet almost always, when we make our threat assessments, when we look at the world, we end up basing it on the first two pieces of that puzzle, rather than all three.
Here’s a short clip of his statement (the full version is difficult to find, and most clips have been edited or amended for humorous effect)
His logic here is pretty straightforward, and I have summarized it as follows:
Known Knowns: Things that we know exist. (Chemical and Biological Weapons manufacturing)
Known Unknowns: Things we know we don’t know. (The development of Chemical and Biological Weapons for the purpose of selling to American enemies, such as terrorist organizations)
Unknown Unknowns: Things we don’t know we don’t know. (Is there weapons manufacturing we don’t know about just yet—are there threats we have not yet perceived?)
The first category is justified by evidence. We know these things are true. For instance, we know Iraq used Chemical Weapons (mustard/nerve agents) against the Iranians and Kurds between 1983 and 1988, as well as tested Biological Weapons (anthrax, aflatoxin, botulinum) that were to be destroyed between 1988 and 1991.
Evidence of Weapons Tested
Likewise, Iraq also continuously tried to establish unsanctioned nuclear weapons facilities, and enhanced its Soviet Scud missiles and launching towers for longer-range attacks.
Map of Nuclear Facilities
The second category is a direct result of the things we know from the first. For instance, knowing that Iraq had used similar weaponry, and had built manufacturing plants for nuclear and biological weaponry, these sorts of later images justified the suspicion that there may be things we don’t know: known unknowns.
Now, given this information, and by accepting there might be things we know we don’t know, we are inevitably led to conclude that perhaps there are things we don’t know we don’t know, which might lead to imminent and deadly threats. It is better, then, and because of this existing evidence, to live one’s life believing that there are things we might not know exist, and shape our perceptions into a pre-emptive preparedness. This, in essence, is not unlike a Pascalian notion. Which brings us back to Atheism.
For those unfamiliar with my work on Atheism, I am quite the advocate for dispensing with ‘defining’ the term, and for promoting a more discursive analysis, what I quite precariously refer to as an ‘ethnographic approach.’ By this, I mean I would rather allow the individual Atheist to define him or herself, rather than have that individual be defined by an external observer. One of the leading reasons for this position is the way our own discourse on studying Atheism has seemed to lean more toward the latter.
While this discussion might extend beyond the limits of this present forum, how we came to this point can be briefly drawn out via two distinct categories: historical and theoretical. That is, if we take the discourse on defining the term ‘Atheism’ and treat it like a ‘field of discourse,’ we get a better idea about how the scholars who have done this defining over the last century have followed a particular progression. In fact, the locus of this turn—from defining Atheism via the way individuals have either historically been defined by others or defined themselves, to theoretically stipulating what the term might mean in a ‘general’ capacity—is found in the way scholars have tried to cope with the differentiation between ‘ancient’ and ‘modern’ Atheisms. This has proven most troubling when the meaning of the former—a political term of censure or imputation given to an individual whose ideas or actions seem threatening to the status quo—and the meaning of the latter—a theologically based and ‘parasitic’ conclusion made via re-emergent rational-naturalism that shifts the concept of ‘God’ from omniscient object to subject of inquiry that is then found evidentially false—are combined into a categorically mistaken conglomerate.
Out of this emerges a formulaic theoretical stipulation, what I have termed the ‘positive vs. negative’ paradigm. For the last few decades just about every scholar who has written about Atheism has adopted this formula, defining an Atheist as someone who either positively asserts themselves as such, or someone who is an Atheist either by their ‘non-theistic’ beliefs—a rather normative and Western-centric idea—or through their ignorance or lack of knowledge about the existence of ‘God.’ In this way, Atheism has become a term that denotes a philosophical generality, so that it might be used to define any sort of denial, rejection, skepticism, or doubt. This is also why we find people defining ‘Atheism’ as a rejection of any and all sorts of religious or supernatural thinking, or the notoriously troubling notion of ‘Atheist religions’ defined by their innate differentiation from the three Abrahamic religions of Judaism, Christianity, and Islam. As a scholarly ideology, it has been standardized, which is evidenced by its use in Martin’s (2007) Cambridge Companion to Atheism and Bullivant and Ruse’s (2013) Oxford Handbook of Atheism. Even Wikipedia has adopted it.
Though I should also note that my intention here is not to argue that this paradigm is wholly ‘incorrect.’ Rather, I have found that its formation, promotion, and advancement provide an intriguing insight into how theoretical thinking alters how difficult-to-determine concepts like Atheism or religion come to embody the meanings they have. For the former, this is a direct result of a generalization, a pragmatic attempt at making sense of a term that we ourselves have convoluted with our own theorizing. In fact, prior to the advent of this paradigm, Atheism was always defined via historical examples, using individuals as sources. It was not until the 1970s, and Anthony Flew’s The Presumption of Atheism, that we began to see the term as encompassing an explicit or implicit nature. Which, really, makes its usage seem all the more precarious, as Flew’s initial treatment—as we see repeated in Eller’s (2004) Natural Atheism and Baggini’s (2003) Atheism: A Very Short Introduction—was made in order to argue that Atheism was mankind’s default position, as all people are born ‘negative Atheists’ because they are simply ‘without’ the belief that God exists.
While this discussion is one I tend to repeat with vigour, and though more of it will undoubtedly continue throughout this blog, this intro will have to suffice for now.
If we adopt Rumsfeld’s Pascalian logic from above, the positive vs. negative paradigm takes on a whole new meaning. In fact, we might even say it adopts a quasi-conspiratorial logic. If nothing else, it helps us make a bit more sense of how we might find ourselves thinking that there are differing types of Atheism across a polarity between explicit and implicit.
Let us begin with the Known Knowns: Atheism and Theism. This represents a dependent binary, the Theist and the Atheist equally ‘knowing’ what they believe: God exists and God does not exist. This is where we find ‘positive Atheism.’
Then, let us look at the Known Unknowns: Agnosticism. Here, if we define the term as a methodology—like Huxley originally did in 1893—used to answer the question of the existence of the Theist’s God, the ‘agnostic’ would fall under the purview of the known unknown. This individual acknowledges the existence of the Theist’s belief in the existence of God, as well as the Atheist’s rejection of that belief, but is not willing to commit to either side. In other words, and based on the first category, they know something that they acknowledge they don’t know in the way the Theist or Atheist does.
Finally, we arrive at the Unknown Unknowns: Negative Atheism. Defined as either an implicit absence of belief—due to a complete ignorance—or an implicit or explicit ‘lack’ of belief—leaning predominantly on the etymological alpha privative ‘A’ in Atheism—this individual does not know what they do not know. In other words, they do not know that they do not know what the Theist or the Atheist believes, and are thus not only without the knowledge of the belief that God exists, but are without the knowledge of that knowledge as well.
If this sounds somewhat inane and confusing, that’s the point. While Rumsfeld’s argument about the threats we might not know about seems somewhat justified given the context in which it was made, my use of his categories was, and is, a critical one. It was adopted to point out the convolution we inflict upon ourselves in our attempts at theorizing around an issue, such as how to define a term that seems more and more confusing the more and more we try to define it. Scholars of religion know this all too well, as defining that term has generated the essential basis upon which we have built our ‘theories of religion.’
Yet, my use of it has meant more than just that. It’s also meant to point out that when we are examining or analyzing something that seems uncertain or confusing, the worst thing we can do is try to over-theorize about it. Rumsfeld, as well as the Bush administration, learned this the hard way—some might say—and I think the academic study of Atheism is heading directly down that path. Rather than take a step back and try to understand the concept with which we are dealing, we seem overly destined to mark ourselves as presenting something unique or different. That is, rather than looking back at how this term has been defined by those who came before, and thus discovering the manner with which we have progressively ended up with these sorts of abstractions, we seem happily set on making the discourse all the more excessive and incoherent—logorrhean—by adding to it with precarious and inane concepts like ‘ir-religion’ or ‘non-religion.’
In the end, I think we can learn a lot from Rumsfeld and his logic. If we just took the time to acknowledge that the discourse we are both analyzing and contributing to is merely a construct built upon a particular foundation, we might find ourselves sounding less like someone trying to justify a judgment we’ve already made.
Last Spring I attended the 6th Israeli Conference for the Study of Contemporary Religion and Spirituality at Tel Aviv University, and presented my usual paper on the definition of Atheism and the use of fiction as ethnography. While the conference itself proved a better experience than I had expected, it was not without anxiety. After all, what might an individual who studies Atheism expect when visiting one of the cradles of Western monotheism? Would I be welcomed? Shunned? Ostracized? Might I be perceived as a threat? An enemy? An infidel? In fact, when I reflect on the short time I spent there, both in Tel Aviv, and wandering the streets of the Old City in Jerusalem, I have repeatedly found myself remembering aspects of that trip in ways I’m sure aren’t completely accurate, as if these questions have somehow transformed into a construction I might use in order to justify certain stereotypes about that part of the world.
These thoughts came to mind recently as we wrapped up our course on the Ethical and Religious Debates in Contemporary Fiction, particularly with our final text, Howard Jacobson’s The Finkler Question. Despite winning the Booker Prize in 2010, this has proven a difficult novel to teach with, partly because Jacobson’s use of humour and stereotyping has often fallen flat with many of our students. To summarise, the text provides an outsider’s perspective of a world he will never truly be a member of, offering us an insight into how he perceives that world, while at the same time providing a means with which to interpret that world itself.
Focusing on three lead characters—Julian Treslove, Sam Finkler, and Libor Sevcik—Jacobson’s fiction alters our perception of this insider/outsider paradigm on a number of occasions. Yet, this is not what I wish to isolate herein. Rather, as I read this text for the second time, I found myself considering how humour itself not only seems ingrained in making sense of and/or interpreting ‘Jewishness,’ but also how easily this humour might turn from stereotyping to offence when it shifts from insider to outsider. For example, Treslove (the gentile to Finkler and Sevcik’s Judaism) openly refers to Jews as ‘Finklers,’ based on his idea that his life-long friend is the paragon of Jewishness. So, throughout the text, his references carry a humorous and personal separation from the more malignant-sounding sorts of phrases that might be deemed verbally violent. For example, when he finds himself having an argument with Hephzibah, his ‘Jewess’ girlfriend, about his incessant assumption that some horrible experience is just waiting for him to discover it, he describes her, and her humour, thus:
That was what it was to be a Jewess. Never mind the moist dark womanly mysteriousness. A Jewess was a woman who made even punctuation funny. He couldn’t work out how she had done it. Was it hyperbole or was it understatement? Was it self-mockery or mockery of him? He decided it was tone. Finklers did tone.
Yet, when he tries to emulate her, he fails. He is unable to recreate her ease of tone, her ability to make his own punctuation funny: “it could have been that Finklers only permitted other Finklers to tell Finkler jokes.” Which brings me to the locus of this particular discussion. Is there a subtle line between humour and offence, and is that line more easily blurred for certain individuals?
For our tutorial I prepared three examples with which to approach this question. The first comes from the comedic genius Mel Brooks. In it he sings his way through the horrors of the inquisition, while at the same time making humorous light of both the plight of the Jews massacred during the auto-da-fé, as well as the Catholics who were responsible for these atrocities.
Next, we watched the following clip from the Seinfeld episode entitled ‘The Yada Yada,’ which originally aired on 24 April 1997. In this episode, Jerry is offended by his dentist’s conversion to Judaism, not as a Jewish person, he assures, but as a comedian. His dentist, he is certain, merely converted for the jokes. As ever erudite with its philosophical undertones, the episode is an ideal example of the sort of line-blurring between insiders and outsiders presenting humorous and stereotyping interpretations of themselves and others.
The third comes from an episode of the sitcom Frasier. In this clip, Frasier comes to learn that his girlfriend ‘Faye’ was under the assumption that he was Jewish. This is problematic for Faye’s mother, who we learn would prefer her daughter dating a Jewish man. As per the humour of the show, Frasier, his brother Niles, and father Martin each take up stereotypical ways of sounding or acting ‘Jewish’ in order to keep Mrs. Moskowitz happy.
Now, in each of these clips humour and stereotyping are used to describe a type of ‘Jewishness.’ Yet, with the latter, we find an interesting situation that is separate from the others. Perhaps more akin to Treslove’s attempts in The Finkler Question, in the Frasier clip the humour is coming from gentiles pretending to be Jewish for humorous effect. Is this offensive? Anti-Semitic?
As data, these clips, as well as Jacobson’s novel, are peculiar sources. Yet, I equally like to think that they serve as reminders that we all construct stereotypes and assumptions that contribute to our larger perceptions about what particular identities look like. I know I was guilty of this in my time in Israel, but I also know that it is in stereotyping and interpretation where we begin to create our ethnographic perceptions. Thus, I further wonder whether our outsider perceptions are offensive in the sense that we are trying to tell Finkler jokes without the benefit of an inherent Finkler tone.
Howard Jacobson, The Finkler Question (London: Bloomsbury, 2010).
Religion, Critical Theory and Conspiracism | Lecturer in Religious Studies at the Open University | Co-founding Editor of the Religious Studies Project | Editor, Implicit Religion | Bulletin Editor of the British Association for the Study of Religion.