Note: Today’s blog post will be the last one for a bit – due to my PhD workload piling up, I’ve decided to take a month-long hiatus from blogging. Hopefully, having the entirety of February to catch up on my lab work (and also celebrate my birthday in my downtime, because I’m a February baby! Please send all birthday coffees here, thank you) will get me ahead of schedule so I can get back to weekly blog posts come March. Thanks again for all of your support, and hopefully I’ll see some of you when I get back – until then, remember you can also follow me on Twitter, where I’m sure I’ll occasionally pop in to complain about animal bones!
Okay…I know I said that I wouldn’t use that extremely bad, extremely old joke to introduce a blog post…but this one is basically a companion piece to the previous OM NOM NOM post on gnawing, so it doesn’t count…I think.
Well, I promise I won’t use it again after this, okay? Okay.
Anyway, let’s talk about butchery.
“Butchery” is basically what zooarchaeologists call any physical characteristics that indicate a bone has been modified by humans. There can be many reasons why bones are modified, but most commonly it’s for consumption. Here’s a brief overview of three common butchery marks that can be found on faunal bone in the archaeological record:
Cut marks look like thin striations in the surface of the bone. They are mostly associated with activities like skinning/de-fleshing. Based on other characteristics, zooarchaeologists can determine whether a cut mark was made by a stone blade or a metal blade. Stone blades create shallow V-shaped marks with parallel striations (Potts and Shipman 1981), while metal blades will make deeper, slightly angled V-shaped marks (Greenfield 1999).
Slightly different from cut marks are chop marks – these are marks that were made by blades that hit the bone at a perpendicular angle, causing a V-shape that’s much broader than a cut mark (Potts and Shipman 1981).
One very specific form of butchery that’s pretty easy to identify is marrow cracking or marrow extraction. Marrow is a valuable product that can be extracted from various bones simply by breaking into the shaft. We can recognise bones that have been cracked or butchered for marrow by the fractures and splintered fragments left behind (Outram 2001). Depending on the tool used to break the bone, “percussion notches” can also be found along the fractures.
Obviously there’s much more when it comes to butchery marks, but these three are arguably some of the most common forms of butchery that you’ll run into as a zooarchaeologist. To be honest, there’s something really wonderful about finding bits of butchery when you’re excavating – running your fingers along the striations in the bone, it’s amazing to think that hundreds, thousands of years ago, someone created these marks…probably with a stomach as hungry as mine, too.
I’m gonna be honest, I get so hungry when I work with animal bones sometimes…is that weird? It’s weird, right. Hm.
Greenfield, H.J. (1999) The Origins of Metallurgy: Distinguishing Stone from Metal Cut-marks on Bones from Archaeological Sites. Journal of Archaeological Science. pp. 797-808.
Outram, A.K. (2001) A New Approach to Identifying Bone Marrow and Grease Exploitation: Why the “Indeterminate” Fragments Should Not Be Ignored. Journal of Archaeological Science. pp. 401-410.
Potts, R. and Shipman, P. (1981) Cutmarks Made by Stone Tools on Bones from Olduvai Gorge, Tanzania. Nature. pp. 577-580.
First, a confession: a few years ago, I did read Marie Kondo’s book and attempted to use the KonMari method to wrangle my large collection of “stuff” that I had managed to cultivate after only a year of living in the UK. Turns out, I am secretly a hoarder and everything sparks joy, so it didn’t really work for me.
With Marie Kondo’s new television show out and causing lots of discourse, it got me thinking about…what else? Archaeology! For those who don’t know, Marie Kondo’s method of decluttering and tidying (also referred to as the KonMari method) is based on the idea that you should keep items that “spark joy”; by employing this particular mindset, clients are able to minimise their belongings to smaller collections that are more consistent with what they visualise as part of their everyday lives (Kondo 2014).
But what about archaeological objects? Do we ever think about if they once “sparked joy”?
One thing that always bugged me about archaeology, particularly as an undergraduate student just learning the basics, was how much emphasis was placed on utilisation within interpretation – the main questions are usually “how was this used?” or “how did this make survival easier?” What about, “how did people in the past see this object?” or “did they like this object? Like, a lot?”
Of course, that’s not to say that archaeologists haven’t been discussing this very topic. Or, at the very least, they have been discussing around it. For example, as we move towards post-processualism in archaeology, we find that discussions of material culture turn towards examining the symbolic aspects that need to be interpreted from the artefacts, rather than observed (Hodder 1989).
However, could we possibly develop a Marie Kondo Framework in archaeological interpretation? Kondo’s methodology is based heavily on philosophical and aesthetic theories – is there any way we can carry this over into archaeology? Arguably, there must have been some artefacts that were deemed important and valuable not because they were tools or made of rare materials; instead, they were valuable due to sentimentality, or aesthetics, or hell, maybe they were just a bunch of lucky stones for all I know.
Well, it’s complicated – particularly because philosophy gets involved. In a lot of ways, this question is similar to asking what “worth” means in an object. Is it about the materials used to make it? Or the personal worth, which can be dictated by emotions and experiential context? Is there even a solid definition of “impersonal worth” that can be used as a basis, reflecting the universal concept of what the value of an object is (Matthes 2015)? Yeah, my brain hurts too.
There is also the issue of ethics, in that questions of the personal in archaeology can easily lead to bias. Perhaps to you, this statue may look like it has symbolic significance. Maybe it was a deity that looked over the residents of this house, or perhaps a good luck charm that kept bad omens away? It’s easy to assign grand visions of high spiritual value and sentimentality to an artefact…that could easily just have been something an ancient person’s child made and was kept around like a drawing on a fridge. Ultimately that’s the big issue with artefacts and interpretation – as you delve deeper into the more philosophical and abstract, you end up with countless other questions regarding the “essence” of an artefact that undoubtedly cannot be answered (Shanks 1998).
However, I’d argue there are some approaches that can come close to getting a better idea of what the personal value of an artefact was. There are small indicators, of course – for example, you could argue that artefacts that are worn and mended reflect an excessive amount of use and the desire to keep said artefact even after breaking. There are also some methodological approaches to examining possible concepts of value, such as utilising ethnographic studies and extrapolating results from them (Tehrani and Riede 2008).
We will never truly understand how people in the past felt about certain things, particularly prior to written record. But we occasionally get hints here and there, and that’s exciting! I think perhaps a Marie Kondo Framework is less about discovering what people in the past found joy in, and more about remembering that people in the past did feel joy. And many other things! And although we may not be able to calculate that using lab analysis or statistics, we also shouldn’t lose sight of the fact that the people whose lives we are recovering through excavation are still people.
Hodder, I. (1989) The Meanings of Things: Material Culture and Symbolic Expression. HarperCollins Academic.
Kondo, M. (2014) The Life-Changing Magic of Tidying Up: A Simple, Effective Way to Banish Clutter Forever. Vermilion.
Matthes, E.H. (2015) Impersonal Value, Universal Value, and the Scope of Cultural Heritage. Ethics 125(4). pp. 999-1027.
Shanks, M. (1998) The Life of an Artifact in an Interpretive Archaeology. Fennoscandia archaeologica XV. pp. 15-30.
Tehrani, J. and Riede, F. (2008) Towards an Archaeology of Pedagogy: Learning, Teaching, and the Generation of Material Culture Traditions. World Archaeology 40(3). pp. 316-331.
One of my goals for 2019 is to try and make my work even more accessible – including conference and journal papers! I know that those can be hard to read due to jargon and the general sleep-inducing nature of the academic writing style, so I’ll be writing accompanying blog posts that are more accessible (and hopefully more fun!) to read with just about the same information. And if you’re a nerd, I’ll also add a link to the original paper too. Today’s blog post comes from a paper I presented at the 2018 Theoretical Archaeology Group Conference – you can find the full text here.
If you think about the word “anarchist”, you probably have a very specific image that comes to mind – some sort of “punk” masked up and dressed all in black, probably breaking windows or setting fires. And while that may be accurate praxis for some who wave the black flag (and also completely valid!), I’d argue that it doesn’t necessarily do the actual concept of “anarchism” justice…although, to be honest, I do love to wear black clothes.
So then…what is anarchism? And how can it relate to archaeology?
To use Alex Comfort’s definition (1996), anarchism is “the political philosophy which advocates the maximum individual responsibility and reduction of concentrated power” – anarchy rejects centralised power and hierarchies, and instead opts for returning agency to the people without needing an authority, such as a government body. Anarchy places the emphasis on communal efforts, such as group consensus (Barclay 1996).
So, how does this work with archaeology? Why would you mix anarchy and archaeology together? For starters – this isn’t a new concept! There have been many instances of “anarchist archaeology” discussions, from special journal issues (Bork and Sanger 2017) to dedicated conference sessions (see the Society for American Archaeology 2015 conference). There have also been a few instances of anarchist praxis put into archaeological practice: for example, there is the Ludlow Collective (2001) that worked as a non-hierarchical excavation team, as well as the formation of a specifically anarchist collective known as the Black Trowel Collective (2016).
To me, an Anarchist Archaeology is all about removing the power structures (and whatever helps to create and maintain these structures) from archaeology as a discipline, both in theory and practice. We often find that the voices and perspectives of white/western, cis-heteronormative male archaeologists are overrepresented. Adapting an anarchist praxis allows us to push back against the active marginalisation and disenfranchisement of others within our discipline. This opens up the discipline to others, whose perspectives were often considered “non-archaeology” and therefore unacceptable for consideration by the “experts” (i.e. archaeologists). In Gazin-Schwartz and Holtorf’s edited volume on archaeology and folklore, this sentiment is echoed by a few authors, including Collis (1999, pp. 126-132) and Symonds (1999, pp. 103-125).
And hey, maybe logistically we’ll never truly reach this level of “equitable archaeology” – after all, this is long, hard work that requires tearing down some of the so-called “fundamental structures” of the discipline that have always prioritised the privileged voice over the marginalised. But adapting an anarchist praxis isn’t about achieving a state of so-called “perfection”; rather, it’s a process of constantly critiquing our theories and assumptions, always looking for ways to make our field more inclusive and to make ourselves less reliant on the problematic frameworks that were once seen as fundamental.
It’s a destructive process for progress…but hey, isn’t that just the very nature of archaeology itself?
Barclay, H. (1996) People Without Government: An Anthropology of Anarchy. Kahn and Averill Publishers.
At the time of writing this blog post, we are only three days into 2019. I’ll be honest – I’ve experienced 25 years on this planet and I still make New Year’s resolutions. The usual ones, of course: exercise more, consume less sugar, etc. And, of course, these resolutions usually make it until mid-February before I completely ditch them and continue to eat chocolate bars every day without touching my running shoes. I know New Year’s resolutions are silly gimmicks, marketed by gyms and health apps to make lots of money come January 1st. But I have always liked to utilise the New Year as a time for restarting my daily routines, renewing goals – I mean, I have an entire year ahead of me with so many possibilities, right?
So in honour of the New Year, let’s look at how we measure time in archaeology.
There are many ways that archaeologists create chronologies, and we often combine several methods to get a better idea of what a site’s timeline was like. Possibly the easiest way to “see” time across a site’s archaeological record is to look at the cross-section of a trench during excavation. The stratigraphy of an archaeological site can usually be seen as a series of “layers”, almost like a cake…if the cake was made out of various soils, organic material, and artefacts. These layers provide us with a general idea of the order in which materials were deposited – this includes both natural and anthropogenic materials. It may be easier to think of archaeological stratigraphy as a sort of “visual starting point” for further developing a chronology for the site (Harris 1989). In an ideal world, we could simply look at the layer on the bottom to determine the “beginning” of the site’s history…but of course, things are never that simple.
During post-excavation, there are numerous methods available to an archaeologist for further dating. Having a typology (read more on typologies here) of a certain artefact, such as pottery, can help an archaeologist get a general idea of what time period they are currently dealing with. Within archaeological science, there are a variety of lab-based methods for dating: radiocarbon, potassium-argon, uranium, etc.
Of course, these methodologies aren’t perfect, nor are they definite. In fact, archaeologists differentiate between absolute and relative chronologies. Absolute chronologies provide us with approximate dates, often from lab-based methods such as radiocarbon dating. On the other hand, relative chronologies (for example, using a typology to determine an approximate period of creation and use) can be used to determine general time periods using the relationship between a previously occupied site (and its material remains) and an overall culture (Fagan and Durrani 2016).
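For the curious, the maths behind radiocarbon dating is just exponential decay rearranged for time. Here’s a minimal Python sketch of that formula (the function name is mine, and this deliberately ignores calibration against tree-ring curves, which real labs always apply before quoting a date):

```python
import math

# Half-life of carbon-14 in years (the modern "Cambridge" value;
# radiocarbon labs conventionally report ages using the older Libby value of 5568).
HALF_LIFE_C14 = 5730

def radiocarbon_age(fraction_remaining: float) -> float:
    """Estimate an age in years from the fraction of C-14 remaining.

    Rearranged exponential decay: t = (half-life / ln 2) * ln(1 / fraction).
    This gives an uncalibrated age only.
    """
    if not 0 < fraction_remaining <= 1:
        raise ValueError("fraction_remaining must be between 0 and 1")
    return (HALF_LIFE_C14 / math.log(2)) * math.log(1 / fraction_remaining)

# A sample with half its original C-14 left is one half-life old:
print(round(radiocarbon_age(0.5)))   # 5730
# A quarter remaining means two half-lives have passed:
print(round(radiocarbon_age(0.25)))  # 11460
```

Notice why the method has a practical ceiling: after eight or so half-lives, so little C-14 remains that measurement error swamps the signal – which is why radiocarbon dating is only useful back to roughly 50,000 years.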
Additionally, there are many external factors that can affect the recovered context of a site, thereby complicating the timeline – for example, burrowing creatures may cause some artefacts to fall into the contexts of others. There have also been many cases of re-using older artefacts and spaces, which can complicate the timeline further (you can read more on recycling and re-using the past here).
Overall, however, archaeology has been a useful tool for conceptualising the beginnings of things – while we cannot establish with certainty the absolute start of agriculture or domestication, for example, we have been able to develop an approximation of how early humans were practising such concepts.
And let’s be real – time itself is a fascinating concept. While we have this sort of “standardised” method of calculating and measuring time today, we cannot truly account for past perspectives on time. Of course, we can find material evidence that may illustrate the physical act of “keeping time” in the past, but how did people in the past really experience time? Think about how quickly an hour can go by today, just by watching random videos on YouTube or Facebook on your smartphone. Remember how much longer an hour felt when we didn’t always have access to the Internet at all times, prior to smartphones and other such devices? What about someone in the past who has a completely different mindset to us – how did they experience an hour?
…honestly, I could probably prattle on for hours and hours about this (and how would you experience that??).
Anyway, hope you all had an easy transfer from 2018 to 2019 this past New Year. Here’s to another year of writing incoherent, rambling posts that you hopefully find entertaining at the very least. And thank you all for supporting and reading my work last year, too – hope to see you all back again at the end of 2019!
Fagan, B.M. and Durrani, N. (2016) In the Beginning: an Introduction to Archaeology. Routledge.
Harris, E.C. (1989) Principles of Archaeological Stratigraphy. Academic Press.
In the Elder Scrolls video game series, there are many fantastical creatures and monsters inhabiting the world, both friendly and hostile to the player character. One of these monsters (or perhaps that’s a bit too judgemental?) is the vampire, whose curse (or blessing?) is passed to others through a disease called “sanguinare vampiris”, also known as “porphyric hemophilia” in earlier games. The player character can become infected with this disease and become a “creature of the night”, obtaining all the advantages and disadvantages of vampirism.
Vampire lore was elaborated on extensively in Skyrim, specifically through the downloadable content Dawnguard, which places the player character in the middle of a conflict between vampires and vampire hunters. In this DLC, it is explained that there are many individual clans of vampires across the world, with the most powerful vampires known as “pure-bloods”. A pure-blooded vampire will have been granted their powers from the Daedric Prince (basically one of the Elder Scrolls deities) Molag Bal directly. The DLC also introduced the “Vampire Lord” form – this is considered the ultimate form of vampirism and is usually a power that only pure-blooded vampires have.
The idea of the “vampire” is a relatively old one, of course. In Europe, it seems that vampirism became a topic of interest during the 18th century, with the word “vampire” officially entered into the Oxford English Dictionary in 1734. Many early stories of vampires appear to have originated from German and Slavic folklore, although there are many instances of vampire-like creatures in stories around the world (Barber 1988).
Literature and film eventually created what we may consider today to be the “archetypal vampire” – Polidori’s The Vampyre, Sheridan Le Fanu’s Carmilla, and Bram Stoker’s Dracula provided the textual background for the modern day vampire, while F.W. Murnau’s Nosferatu and Tod Browning’s Dracula ultimately solidified the visual characteristics associated with the monster that are still used to this day. However, we still occasionally get new “twists” on the old formula in popular culture – from “sexy, brooding vampires” (see Anne Rice’s The Vampire Chronicles series or Stephenie Meyer’s Twilight series) to more hilarious takes on vampire culture (see Jemaine Clement and Taika Waititi’s mockumentary What We Do in the Shadows).
When it comes to “deviant” burials, or burials that differ from normative burial practices, it’s easy to draw negative assumptions about the deceased, particularly when combined with “flights of fancy” from local folklore. Among these deviant burials, many have been interpreted as possible “anti-vampire burials”; the term was first used off-handedly in 1971 by Helena Zoll-Adamikowa and eventually popularised throughout Slavic archaeological literature to refer to most burials that defied funerary norms (Hodgson 2013).
Some of the evidence used to support these “anti-vampire burials” includes protective burial goods (like sickles), stones left atop bodies, stakes or knives driven through the chest, decapitations, and, perhaps one of the more prolific examples, stones or bricks placed within the mouth of the deceased (Barrowclough 2014).
While “deviant” burials undoubtedly show how pervasive the idea of vampires – or, more generally, the undead – was throughout folklore, there should also be a bit of caution in generalising all non-normative burials this way, of course! There has been plenty of debate even regarding the evidence mentioned above. But perhaps the most solid thing to come out of all of this archaeological research is how such abstract concepts can ultimately be reflected in the material culture that remains.
Oh, and that apparently if you run into a vampire, you should definitely stuff a brick in their mouth.
Barber, P. (1988) Vampires, Burial, and Death: Folklore and Reality. Yale University Press.
Barrowclough, D. (2014) Time to Slay Vampire Burials? The Archaeological and Historical Evidence for Vampires.
Bethesda Game Studios. (2011) The Elder Scrolls V: Skyrim.
Hodgson, J.E. (2013) ‘Deviant’ Burials in Archaeology. Anthropology Publications. 58. pp. 1-24.
Lately, archaeologists have been a bit concerned about memes. No, not because they’re trying to perfect their comedic skills – rather, there’s been a relatively recent rash of popular memes that were derived from several big archaeological finds. For example, a nearly complete human skeleton was recovered in Pompeii, originally interpreted to have been crushed to death while fleeing the eruption of Mt. Vesuvius in 79 CE. The image used to publicise this excavation – a skeleton whose head has been obfuscated by a stone slab – ended up being used by many as a meme on social media like Twitter and Facebook. This led to a further discussion by archaeologists across the Internet on respecting human remains and whether or not it was ethical to make memes out of recovered bodies, regardless of the age and unknown identity (Finn 2018).
Let’s talk about late capitalism and how it shapes the average young person’s everyday life, shall we?
Millennials have had the utmost misfortune to reach young adulthood (the “pivotal years”, as many call this time period) during late capitalism. This means that, as a generational group, they are significantly poorer than previous generations (O’Connor 2018), with a growing number unable to even save money (Elkins 2018) from a severe lack of fair wages. This is the generational group that is leaving higher education with high amounts of debt, only to find a feeble job market that demands long hours for little pay. It’s a pretty bleak future that young people seem to have inherited, so it’s honestly hard to blame them for developing such a morbid sense of humour that utilises iconography and imagery associated with death to express such futility in a way that’s become palatable for everyone else.
What interests me the most as an archaeologist is how this affects our perception of death and dying in modern times. Morbid memes may be contributing to a sort of desensitisation to dying, to the point where it is no longer taboo or fearful to speak of the dead – in fact, people actively make fun of the dead and the concept of dying. I would argue that this could be seen as the opposite of the effect that the Positive Death Movement is having, which strives to cultivate a more positive and respectful attitude towards death. I think, as archaeologists, we definitely need to push back against the meme-ification of the dead as a violation of ethics – but I also think we should consider why this has become a trend, how the socio-political characteristics of the world at large can cause these things to become popular, and how we can take this approach and apply it to our interpretations of the past.
Hi, welcome back to the early to mid 2000’s where we still use jokes like “om nom nom” unironically!
Just kidding, I won’t subject you to bad jokes like that for this entire post. Anyway, it’s come to my attention that for a blog called “Animal Archaeology”, I don’t really write that much about the archaeology of animals, huh? Well, today will change that! Here is a brief introduction to how we identify gnaw marks on certain bones – because humans aren’t the only species to eat other animals, don’t ya know?
Rodent gnawing is probably the easiest one to recognise. Due to those huge incisors of theirs, rodents leave behind a very distinct pattern of close striations on the bone. Be warned, however! It can be easy to mix this up with cut marks, or vice versa.
Cats do indeed gnaw on bones! And they have a pretty peculiar way of doing so – when they hold onto a bone, they’ll use their canine teeth, which will often leave a puncture mark! Given their smaller size, these marks will often be a bit small and usually won’t go entirely through the bone (although if you’re dealing with a bigger feline, like a lion, you may find yourself with bigger and deeper puncture marks!). Cats will also do a bit of a “nibble”, leaving behind a very pitted and rough looking texture.
This is possibly something you can check right now if you have dogs as pets – take another look the next time they chew up a bone. Canine species like dogs and wolves will produce gnaw marks similar to felines in that they will often cause a puncture hole in the bone with their teeth. However, canine species will usually produce much larger holes in comparison. Another key characteristic is that canine species will slobber – when they gnaw on bones, they often produce what can only be described as “an upsetting amount of saliva” – however, this is great for zooarchaeologists, as it can leave behind a very polished look to the bone, which is very distinct. So, the next time you see a beautifully polished archaeological bone…it was probably covered in ancient dog spit.
Yes, occasionally we do find human gnaw marks, although now we’re a little bit out of my jurisdiction! So, our teeth look weird – well, at least compared to non-human teeth. So the kind of gnaw marks we leave are a bit…wonkier? Is that the right word? Just bite into an apple and see what you leave behind – it’ll depend on how your incisors look, as we often lead with them to bite down onto something. Personally, I have pretty large buckteeth, so I’d hate to be the zooarchaeologist looking at the teeth marks I leave behind, trying to figure out what the heck happened!
Parkinson, J.A., Plummer, T., and Hartstone-Rose, A. (2015) Characterizing Felid Tooth Marking and Gross Bone Damage Patterns Using GIS Image Analysis: An Experimental Feeding Study with Large Felids. Journal of Human Evolution. 80. pp. 114-134.
Yeshurun, R., Kaufman, D., and Weinstein-Evron, M. (2016) Contextual Taphonomy of Worked Bones in the Natufian Sequence of the el-Wad Terrace (Israel). Quaternary International. 403. pp. 3-15.
Is there a “Perfect Pokemon”? Well, I guess technically there is the genetically engineered Mewtwo…but what about “naturally occurring” Pokemon? Can Trainers “breed” them for battle?
A form of “Pokemon breeding” has been a vital part of the competitive scene for years. Players took advantage of hidden stats known as “Individual Values”, or “IVs”, which influence a Pokemon’s proficiency in battle. These stats can be changed through training and by utilising certain items in-game. In order to have the most control over a Pokemon’s IVs, it is best if a player breeds a Pokemon from the start by hatching it from an Egg, allowing for modification of stats from the very beginning. This is in contrast to using caught Pokemon, which are often above Level 1, so some of their important stats have already been changed “naturally” (Tapsell 2017).
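For the fellow nerds: fans have long documented how IVs feed into a Pokemon’s final stats in the modern main-series games. Here’s a rough Python sketch of that commonly documented formula – the function name and example numbers are mine, not from the games’ code, so treat it as an illustration rather than gospel:

```python
from math import floor

def battle_stat(base: int, iv: int, ev: int, level: int,
                nature: float = 1.0, is_hp: bool = False) -> int:
    """Compute a stat using the commonly documented modern-era formula.

    IVs range 0-31 and EVs 0-252 per stat; the nature modifier is
    0.9, 1.0, or 1.1 (HP has no nature modifier, but adds level + 10).
    """
    core = floor((2 * base + iv + floor(ev / 4)) * level / 100)
    if is_hp:
        return core + level + 10
    return floor((core + 5) * nature)

# Same species, same level, same training -- only the IV differs:
print(battle_stat(130, 31, 252, 100, nature=1.1))  # 394 with a perfect IV
print(battle_stat(130, 0, 252, 100, nature=1.1))   # 360 with a zero IV
```

That 34-point gap on an otherwise identical Pokemon is exactly why competitive players bother with breeding in the first place.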
But what about real life animal breeding? More specifically, “selective breeding” – this refers to human-influenced or artificial breeding to maximise certain traits, such as better production of certain materials (for example, milk or wool) or better physicality for domestication (stronger builds for beasts of burden, etc.). This is in contrast to natural selection, in which the traits best suited to survival and adaptation are passed on through breeding, although these traits may not be best suited for human use of the animal. Selective breeding is most likely as old as domestication itself, but it’s only been recently (at least, in the past few centuries) that humans have more drastically modified animal genetics (Oldenbroek and van der Waaij 2015).
But can we see selective breeding archaeologically? For the most part, this sort of investigation requires a large amount of data – zooarchaeologists can see dramatic modifications to bred animals by examining large assemblages of animal remains over time. Arguably one of the best examples of this can be seen in looking at dog domestication and how breeding techniques have drastically changed aspects of canine anatomy (Morey 1994).
Zooarchaeological data can be supplemented by other sources of evidence, such as texts and material remains. Perhaps the most powerful innovation in archaeological science, however, is DNA analysis – using techniques such as ancient DNA (aDNA) analysis, we can see specific genetic markers to further investigate exact points of change (MacKinnon 2001, 2010).
The most recent additions to the Pokemon video game franchise, Pokemon: Let’s Go Pikachu and Let’s Go Eevee, have not only streamlined gameplay, but have also made the previously “invisible” stats more visible and trackable, to the chagrin of some seasoned Pokemon players. However, for new players this is undoubtedly a welcome change…now if only we could make it just as easy to see in real life zooarchaeology!
MacKinnon, M. (2001) High on the Hog: Linking Zooarchaeological, Literary, and Artistic Data for Pig Breeds in Roman Italy. American Journal of Archaeology. 105(4). pp. 649-673.
MacKinnon, M. (2010) Cattle ‘Breed’ Variation and Improvement in Roman Italy: Connecting the Zooarchaeological and Ancient Textual Evidence. World Archaeology. 42(1). pp. 55-73.
Morey, D.F. (1994) The Early Evolution of the Domestic Dog. American Scientist. 82(4). pp. 336-347.
With the addition of Hearthfire as downloadable content, Skyrim allowed players to build and live in their own customisable homes. One of the options for buildable rooms included a “trophy room”, where players can erect trophy versions of some of the creatures that can be killed in-game. This ranges from real world game like bears to the more mythical beasts like dragons. Yes, even in a game where you can kill and mount living tree creatures called “Spriggans”, the very human fascination with animal remains still exists!
Hunting trophies appear to be somewhat culturally ubiquitous, and can be found throughout the archaeological record. Although most discussion on trophies in the Prehistoric tends to focus on headhunting and human remains (see Armit 2012), we do have plausible evidence that some recovered animal remains from sites were most likely kept as hunting trophies.
Of course, animal remains were used quite often in Prehistoric life in ways that went beyond decor and trophies – modified bones reveal that it was common to create tools (needles, pins, combs, etc.) out of hunted animals. Another common interpretation for animal bones and other associated remains found in more “domestic” contexts is that they may have had some sort of ritual use – for example, there are many instances of animal bones deposited in pits and building foundations (Wilson 1999). Arguably some of the most famous examples of ritual use of animal bones are the Star Carr deer frontlets – these cranial fragments with the antlers still attached were possibly worn as headdress or masks during rituals, perhaps as a way of evoking a form of transformation by the wearer (Conneller 2004).
Hunting trophies as we understand them today were popular as far back as the medieval period, when hunting for sport resulted not only in trophies of animal remains but also in “living trophies”: big game and exotic animals captured and kept in menageries. The popularisation of natural history exhibits and taxidermy in the 19th and 20th centuries also brought with it a new wave of displaying hunted animals, both for education and for the sake of, well, showing off your hunting skills. However, this wasn’t the only way to display one’s hunted game – it was also quite popular to commission paintings of hunting trophies, a practice that would eventually evolve into the popularisation of photographing one’s kills (Kalof 2007).
Ultimately, if we look at the concept of “trophy animals” as a whole, what can we learn about human-animal interactions throughout history? The concept of a “trophy”, regardless of the method in which it is displayed, is centred around the objectification of the dead animal. It is also often a sign of power and a visual reminder of the sort of hierarchies in place in society – after all, trophy rooms and hunting for sport are often associated with masculinity and elite status. Unsurprisingly, there are also associations between hunting trophies and colonialism, with many photographs showcasing white men in pith helmets next to their “exotic” game in colonised regions of the world (Kalof and Fitzgerald 2003).
But here, in our fantasy video game, our trophies stand – perhaps problematic by nature of their real-life associations – but also as reminders of the system in which Skyrim runs, where I fondly remember how that one snow bear managed to kill me at Level 3 at least a dozen times. And now that snow bear is stuffed in my house. How the tables turn.
Armit, I. (2012) Headhunting and the Body in Iron Age Europe. Cambridge University Press.
Conneller, C. (2004) Becoming Deer: Corporeal Transformations at Star Carr. Archaeological Dialogues 11(1). pp. 37-55.
Kalof, L. and Fitzgerald, A. (2003) Reading the Trophy: Exploring the Display of Dead Animals in Hunting Magazines. Visual Studies 18(2). pp. 112-122.
Kalof, L. (2007) Looking at Animals in Human History. Reaktion Books Ltd.
Wilson, B. (1999) Displayed or Concealed? Cross Cultural Evidence for Symbolic and Ritual Activity Depositing Iron Age Animal Bones. Oxford Journal of Archaeology 18(3). pp. 297-305.
Note: I struggled with whether or not to write about this game due to the issues surrounding its development and the poor treatment of workers (for more information, please read this article from Jason Schreier). However, I think it marks an interesting development in the ever-growing world of virtual archaeologies, so I proceeded to write about it. That being said, please show support for the unionisation of game workers by visiting Game Workers Unite.
Red Dead Redemption 2 (Rockstar Studios 2018) has only been out for a short while, but many players have been praising the level of detail that has gone into the game. One of the most striking features, at least to me as an archaeologist, is the fact that bodies actually decay over time. That’s right, video game archaeologists – we now have some form of taphonomy in our virtual worlds!
But wait, what is “taphonomy“? Well, you may actually get a few slightly differing answers from archaeologists – we all mostly agree that taphonomy refers to the various processes that affect the physical properties of organic remains. However, it’s where the process begins and ends that has archaeologists in a bit of a debate. For the purposes of this blog post, I’m going to use a definition from Lyman (1994), which defines taphonomy as “the science of the laws of embedding or burial” – or, to put it another way, a series of processes that create the characteristics of an assemblage as recovered by archaeologists. This will include not only pre-mortem and post-mortem processes, but also processes that occur post-excavation, as identified by Clark and Kietzke (1967).
Let’s start with the pre-mortem processes, which are often ignored in discussions of overall taphonomy. First, we have biotic processes, which set up the actual conditions of who or what will be deposited in our final resulting assemblage – this can include seasonal characteristics of a particular region, which draw certain species to inhabit the area (O’Connor 2000), as well as cultural factors, such as exploitation and, unfortunately, colonisation/imperialism (Hesse and Wapnish 1985).
Now, let’s use some poor ol’ cowboys from Red Dead Redemption 2 as examples of post-mortem processes – Content Warning: Images of (digital) human remains in various stages of decay are about to follow, so caution before you read on!
With our biotic processes providing us with these cowboys who have moved West for a variety of reasons, we now need to determine our cause of death to continue with taphonomy. This falls under thanatic processes, which cause death and the primary deposition of the remains (O’Connor 2000). In our example above, we would probably be able to find osteological evidence of trauma due to the cowboy being shot to death.
In time, we see the work of taphic processes, or the chemical and physical processes that affect the remains – this is also sometimes referred to as “diagenesis” (O’Connor 2000). Much of what we consider to be “decay” when we think of decomposition falls under this category of processes. Sometimes this will also affect the structure and character of the bone that is eventually recovered.
Now, imagine we take this body and, as seen in the YouTube video from which these images come, toss it down a hill. Okay, this is a bit of an over-the-top example, but it showcases another category of processes known as perthotaxic processes. These processes cause movement and physical damage to the remains, through either cultural (butchery, etc.) or natural (weathering, gnawing, trampling, etc.) means. Similar to these processes are anataxic processes, which cause secondary deposition and further exposure of the remains to other natural factors that will further alter them (Hesse and Wapnish 1985).
The above image shows the remains of the cowboy finally reaching his secondary place of deposition after being tossed from the top of the hill and now drawing the attention of scavenger birds – this showcases an example of an anataxic process, as the body is being scavenged due to exposure from secondary deposition.
At this point, we begin to see how all of the aforementioned processes have affected our current archaeological assemblage-in-progress: we clearly have physical and chemical signs of decay, with physical alteration due to post-mortem trauma (tossing off of a hill) and exposure (including gnawing from other animals). This results in some elements going missing, some being modified, and others being made weaker and more likely to be absent by the time the body is recovered archaeologically.
Now, we also have two sets of processes that occur during and after archaeological excavation and that, again, often get overlooked: sullegic processes, which refer to the decisions made by archaeologists when selecting samples for further analysis (O’Connor 2000), and trephic processes, which refer to the factors that affect the recovered remains during post-excavation: curation, storage, recording, etc. These are often ignored as they don’t necessarily tell us much about the context surrounding the remains, but they are vital to consider if you are working with samples that you did not recover yourself or that have been archived for a long time prior to your work.
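Since these categories form an ordered pipeline from living animal to archived sample, they can be sketched as a simple data structure. This is just my own illustrative shorthand in Python – the stage names and descriptions follow the summaries above (after O’Connor 2000; Hesse and Wapnish 1985), but the code itself is not from any of those sources:

```python
# Taphonomic process categories, in the order presented in this post
# (after O'Connor 2000; Hesse and Wapnish 1985). Purely illustrative.
TAPHONOMIC_STAGES = [
    ("biotic", "conditions determining who or what ends up deposited"),
    ("thanatic", "death and primary deposition of the remains"),
    ("taphic", "chemical and physical alteration ('diagenesis')"),
    ("perthotaxic", "movement and damage: butchery, weathering, gnawing, trampling"),
    ("anataxic", "secondary deposition and re-exposure to natural factors"),
    ("sullegic", "archaeologists' decisions when selecting samples"),
    ("trephic", "post-excavation factors: curation, storage, recording"),
]

def stages_up_to(stage_name):
    """Return every process category that has already acted on the
    remains by the time the named stage is reached (inclusive)."""
    names = [name for name, _ in TAPHONOMIC_STAGES]
    return names[: names.index(stage_name) + 1]
```

So by the time our unlucky cowboy is scavenged after secondary deposition, `stages_up_to("anataxic")` would list every category that has already shaped the eventual assemblage.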
Environmental differences will also affect variation within the overall taphonomic process – for example, wet environments (say, the body of water seen in the image above) can cause the body to become waterlogged, which may speed up certain taphic processes and result in poorer preservation. More arid environments, like a desert, may lead to better preservation in some cases due to the lack of water that would otherwise damage the bones.
Although the game certainly speeds up these processes and streamlines them in a way that removes some of the variables you would see in real life, I’d argue that Red Dead Redemption 2 might currently be the most accurate depiction of taphonomy within a virtual world. It may also present new opportunities for developing models that could further our understanding of how remains decay under certain circumstances.
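To give a flavour of what such a model might look like at its very simplest, here is a toy sketch in Python of the same remains decaying under different conditions. The decay rates are completely invented for illustration – a real model would need experimentally derived values for specific burial environments:

```python
# Toy decay model: exponential loss at a constant yearly rate.
# The rates below are invented for illustration only and are NOT
# real taphonomic measurements.

def simulate_preservation(years, decay_rate_per_year):
    """Return the fraction of bone surviving after `years`,
    assuming a constant proportional loss each year."""
    surviving = 1.0
    for _ in range(years):
        surviving *= (1.0 - decay_rate_per_year)
    return surviving

# Hypothetical rates: a wet burial environment vs. an arid one.
wet = simulate_preservation(100, 0.02)    # faster loss when waterlogged
arid = simulate_preservation(100, 0.005)  # slower loss in a dry desert
```

Under these made-up rates, the arid burial retains a much larger fraction of bone after a century than the wet one – which is the kind of comparison a virtual world with built-in decay could, in principle, help calibrate.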
At the very least, it could make it easier and less smelly to do taphonomic experiments!