Note: Today’s blog post will be the last one for a bit – due to my workload piling up for my PhD, I’ve decided to take a month-long hiatus from blogging. Hopefully, having the entirety of February to catch up on my lab work (and also celebrate my birthday in my downtime, because I’m a February baby! Please send all birthday coffees here, thank you) will get me ahead of schedule so I can get back to weekly blog posts come March. Thanks again for all of your support, and hopefully I’ll see some of you when I get back – until then, remember you can also follow me on Twitter, where I’m sure I’ll occasionally pop in to complain about animal bones!
Okay…I know I said that I wouldn’t use that extremely bad, extremely old joke to introduce a blog post…but this one is basically a companion piece to the previous OM NOM NOM post on gnawing, so it doesn’t count…I think.
Well, I promise I won’t use it again after this, okay? Okay.
Anyway, let’s talk about butchery.
“Butchery” is basically what zooarchaeologists call any physical characteristics that may indicate a bone has been modified by humans. There can be many reasons why bones are modified, but most commonly it’s for consumption. Here’s a brief overview of three common butchery marks that can be found on faunal bone in the archaeological record:
Cut marks look like thin striations in the surface of the bone. They are mostly associated with activities like skinning and de-fleshing. Based on other characteristics, zooarchaeologists can determine whether a cut mark was made by a stone blade or a metal blade: stone blades create shallow, V-shaped marks with parallel striations (Potts and Shipman 1981), while metal blades make deeper, slightly angled V-shaped marks (Greenfield 1999).
Slightly different from cut marks are chop marks – these are marks that were made by blades that hit the bone at a perpendicular angle, causing a V-shape that’s much broader than a cut mark (Potts and Shipman 1981).
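The diagnostic traits above lend themselves to a little decision sketch. The attribute names, categories, and branching below are purely illustrative (not a real recording protocol), but they summarise the cut/chop distinctions from Potts and Shipman (1981) and Greenfield (1999):

```python
# Toy classifier for butchery marks, based on the diagnostic traits
# described above. Attribute names and categories are illustrative only,
# not a standard zooarchaeological recording protocol.

def classify_mark(cross_section, depth, parallel_striations, angle_to_bone):
    """Return a tentative butchery-mark interpretation.

    cross_section: "v-shaped" or something else
    depth: "shallow" or "deep"
    parallel_striations: bool, fine parallel lines inside the mark
    angle_to_bone: "oblique" (slicing) or "perpendicular" (chopping)
    """
    if cross_section != "v-shaped":
        return "indeterminate"
    if angle_to_bone == "perpendicular":
        return "chop mark (broad V, blade struck at a right angle)"
    if depth == "shallow" and parallel_striations:
        return "cut mark, likely stone blade"
    if depth == "deep":
        return "cut mark, likely metal blade"
    return "cut mark, tool type indeterminate"

print(classify_mark("v-shaped", "shallow", True, "oblique"))
# -> cut mark, likely stone blade
```

In practice, of course, these identifications are made by eye (and microscope), not by flowchart – real marks are messier than any neat set of rules.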
One very specific form of butchery that’s pretty easy to identify is marrow cracking or marrow extraction. Marrow is a valuable product that can be extracted from various bones simply by breaking into the shaft. We can recognise bones that have been cracked or butchered for marrow by the fractures and splintered fragments left behind (Outram 2001). Depending on the tool used to break the bone, “percussion notches” can also be found along the fractures.
Obviously there’s much more when it comes to butchery marks, but these three are arguably some of the most common forms of butchery that you’ll run into as a zooarchaeologist. To be honest, there’s something really wonderful about finding bits of butchery when you’re excavating – running your fingers along the striations in the bone, it’s amazing to think that hundreds or thousands of years ago, someone created these marks…probably with a stomach as hungry as mine, too.
I’m gonna be honest, I get so hungry when I work with animal bones sometimes…is that weird? It’s weird, right. Hm.
Greenfield, H.J. (1999) The Origins of Metallurgy: Distinguishing Stone from Metal Cut-marks on Bones from Archaeological Sites. Journal of Archaeological Science. pp. 797-808.
Outram, A.K. (2001) A New Approach to Identifying Bone Marrow and Grease Exploitation: Why the “Indeterminate” Fragments Should Not Be Ignored. Journal of Archaeological Science. pp. 401-410.
Potts, R. and Shipman, P. (1981) Cutmarks Made by Stone Tools on Bones from Olduvai Gorge, Tanzania. Nature. pp. 577-580.
At the time of writing this blog post, we are only three days into 2019. I’ll be honest – I’ve experienced 25 years on this planet and I still make New Year’s resolutions. The usual ones, of course: exercise more, consume less sugar, etc. And, of course, these resolutions usually make it until mid-February before I completely ditch them and continue to eat chocolate bars every day without touching my running shoes. I know New Year’s resolutions are silly gimmicks, marketed by gyms and health apps to make lots of money come January 1st. But I have always liked to utilise the New Year as a time for restarting my daily routines, renewing goals – I mean, I have an entire year ahead of me with so many possibilities, right?
So in honour of the New Year, let’s look at how we measure time in archaeology.
There are many ways that archaeologists create chronologies, and we often combine several methods to get a better idea of what a site’s timeline was like. Possibly the easiest way to “see” time across a site’s archaeological record is to look at the cross-section of a trench during excavation. The stratigraphy of an archaeological site can usually be seen as a series of “layers”, almost like a cake…if the cake was made out of various soils, organic material, and artefacts. These layers provide us with a general idea of the order in which materials were deposited – this includes both natural and anthropogenic materials. It may be easier to think of archaeological stratigraphy as a sort of “visual starting point” for further developing a chronology for the site (Harris 1989). In an ideal world, we could simply look at the layer on the bottom to determine the “beginning” of the site’s history…but of course, things are never that simple.
During post-excavation, there are numerous methods available to an archaeologist for further dating. Having a typology (read more on typologies here) of a certain artefact, such as pottery, can help an archaeologist get a general idea of what time period they are dealing with. Within archaeological science, there is also a variety of lab-based dating methods: radiocarbon, potassium-argon, uranium-series, etc.
Of course, these methodologies aren’t perfect, nor are they definite. In fact, archaeologists differentiate between absolute and relative chronologies. Absolute chronologies provide us with approximate dates, often from lab-based methods such as radiocarbon dating. On the other hand, relative chronologies (for example, using a typology to determine an approximate period of creation and use) can be used to determine general time periods using the relationship between a previously occupied site (and its material remains) and an overall culture (Fagan and Durrani 2016).
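To give a flavour of how an “absolute” date is actually produced, here’s a minimal sketch of the conventional radiocarbon age calculation. It uses the standard convention (based on the Libby half-life of 5,568 years, giving a mean-life of 8,033 years); note that real dates then still need to be calibrated against a calibration curve before they mean anything in calendar years:

```python
import math

# Conventional radiocarbon age: age = -8033 * ln(F), where F is the
# measured fraction of "modern" 14C remaining in the sample and 8033
# years is the Libby mean-life. Calibration is a separate, later step.

LIBBY_MEAN_LIFE = 8033  # years, from the Libby half-life of 5568 years

def conventional_radiocarbon_age(fraction_modern):
    """Uncalibrated age in radiocarbon years BP."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining exactly half its original 14C gives one Libby
# half-life's worth of age:
print(round(conventional_radiocarbon_age(0.5)))  # -> 5568
```

So a lab result of “5568 ± 30 BP” is really a statement about how much radiocarbon is left, converted through this formula – which is part of why even “absolute” dates are approximate.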
Additionally, there are many external factors that can affect the recovered context of a site, thereby complicating the timeline – for example, burrowing creatures may cause some artefacts to fall into the contexts of others. There have also been many cases of re-using older artefacts and spaces, which can complicate the timeline further (you can read more on recycling and re-using the past here).
Overall, however, archaeology has been a useful tool for conceptualising the beginnings of things – while we cannot establish with certainty the absolute start of agriculture or domestication, for example, we have been able to develop an approximation of how early humans were practising such concepts.
And let’s be real – time itself is a fascinating concept. While we have this sort of “standardised” method of calculating and measuring time today, we cannot truly account for past perspectives on time. Of course, we can find material evidence that may illustrate the physical act of “keeping time” in the past, but how did people in the past really experience time? Think about how quickly an hour can go by today, just by watching random videos on YouTube or Facebook on your smartphone. Remember how much longer an hour felt when we didn’t have access to the Internet at all times, prior to smartphones and other such devices? What about someone in the past who had a completely different mindset to us – how did they experience an hour?
…honestly, I could probably prattle on for hours and hours about this (and how would you experience that??).
Anyway, hope you all had an easy transfer from 2018 to 2019 this past New Year. Here’s to another year of writing incoherent, rambling posts that you hopefully find entertaining at the very least. And thank you all for supporting and reading my work last year, too – hope to see you all back again at the end of 2019!
Fagan, B.M. and Durrani, N. (2016) In the Beginning: an Introduction to Archaeology. Routledge.
Harris, E.C. (1989) Principles of Archaeological Stratigraphy. Academic Press.
Is there a “Perfect Pokemon”? Well, I guess technically there is the genetically engineered Mewtwo…but what about “naturally occurring” Pokemon? Can Trainers “breed” them for battle?
A form of “Pokemon breeding” has been a vital part of the competitive scene for years. Players took advantage of hidden stats known as “Individual Values”, or “IVs”, which influence a Pokemon’s proficiency in battle. These stats can be changed through training and by utilising certain items in-game. To have the most control over a Pokemon’s IVs, it is best if a Player breeds a Pokemon from the start by hatching it from an Egg, allowing for modification of stats from the very beginning. This is in contrast to using caught Pokemon, which are often above Level 1, meaning some of their important stats have already been changed “naturally” (Tapsell 2017).
But what about real-life animal breeding? More specifically, “selective breeding” – this refers to human-influenced or artificial breeding to maximise certain traits, such as better production of certain materials (for example, milk or wool) or better physicality for domestication (stronger builds for beasts of burden, etc.). This is in contrast to natural selection, in which the traits best suited to survival and adaptation are passed on through breeding, even though these traits may not be best suited for human use of the animal. Selective breeding is most likely as old as domestication itself, but it’s only recently (at least, within the past few centuries) that humans have more drastically modified animal genetics (Oldenbroek and van der Waaij 2015).
But can we see selective breeding archaeologically? For the most part, this sort of investigation requires a large amount of data – zooarchaeologists can see dramatic modifications to bred animals by examining large assemblages of animal remains over time. Arguably one of the best examples of this can be seen in looking at dog domestication and how breeding techniques have drastically changed aspects of canine anatomy (Morey 1994).
Zooarchaeological data can be supplemented by other sources of evidence, such as texts and material remains. Perhaps the most powerful innovation in archaeological science, however, is DNA analysis – using techniques such as ancient DNA (aDNA), we can examine specific genetic markers to further investigate exact points of change (MacKinnon 2001, 2010).
The most recent additions to the Pokemon video game franchise, Pokemon: Let’s Go Pikachu and Let’s Go Eevee, have not only streamlined gameplay but have also made the previously “invisible” stats more visible and trackable, to the chagrin of some seasoned Pokemon players. However, for new players this is undoubtedly a welcome change…now if only we could make things just as easy to see in real-life zooarchaeology!
MacKinnon, M. (2001) High on the Hog: Linking Zooarchaeological, Literary, and Artistic Data for Pig Breeds in Roman Italy. American Journal of Archaeology. 105(4). pp. 649-673.
MacKinnon, M. (2010) Cattle ‘Breed’ Variation and Improvement in Roman Italy: Connecting the Zooarchaeological and Ancient Textual Evidence. World Archaeology. 42(1). pp. 55-73.
Morey, D.F. (1994) The Early Evolution of the Domestic Dog. American Scientist. 82(4). pp. 336-347.
Note: I struggled about whether or not to write about this game due to the issues surrounding its development and the poor treatment of workers (for more information, please read this article from Jason Schreier). However, I think it marks an interesting development in the ever-growing world of virtual archaeologies, so I proceeded to write about it. That being said, please show support for the unionisation of game workers by visiting Game Workers Unite.
Red Dead Redemption 2 (Rockstar Studios 2018) has only been out for a short while, but many players have been praising the level of detail that has gone into the game. One of the most striking features, at least to me as an archaeologist, is the fact that bodies actually decay over time. That’s right, video game archaeologists – we now have some form of taphonomy in our virtual worlds!
But wait, what is “taphonomy”? Well, you may actually get a few slightly differing answers from archaeologists – we mostly agree that taphonomy refers to the various processes that affect the physical properties of organic remains. However, it’s where the process begins and ends that has archaeologists in a bit of a debate. For the purposes of this blog post, I’m gonna use a definition from Lyman (1994), which defines taphonomy as “the science of the laws of embedding or burial” – or, to put it another way, the series of processes that create the characteristics of an assemblage as recovered by archaeologists. This includes not only pre-mortem and post-mortem processes, but also processes that occur post-excavation, as identified by Clark and Kietzke (1967).
Let’s start with the pre-mortem processes, which are often ignored in discussions of overall taphonomy. First, we have biotic processes, which set up the conditions of who or what will end up deposited in our final assemblage. This can include seasonal characteristics of a particular region, which will draw certain species to inhabit the area (O’Connor 2000), as well as cultural factors, such as exploitation and, unfortunately, colonisation/imperialism (Hesse and Wapnish 1985).
Now, let’s use some poor ol’ cowboys from Red Dead Redemption 2 as examples of post-mortem processes – Content Warning: Images of (digital) human remains in various stages of decay follow, so be cautious before reading on!
With our biotic processes providing us with these cowboys, who have moved West for a variety of reasons, we now need to determine a cause of death to continue with taphonomy. This falls under thanatic processes, which cause death and the primary deposition of the remains (O’Connor 2000). In our example above, we would probably be able to find osteological evidence of trauma, as the cowboy was shot to death.
In time, we soon see the work of taphic processes, or the chemical and physical processes that affect the remains – this is also sometimes referred to as “diagenesis” (O’Connor 2000). Much of what we consider to be “decay” when we think of decomposition will fall under this category of processes. Sometimes this will also affect the remaining structure and character of bone that will eventually be recovered.
Now, imagine we take this body and, as seen in the YouTube video from which these images come, toss it down a hill. Okay, this is a bit of an over-the-top example, but it showcases another category of processes known as perthotaxic processes. These processes cause movement and physical damage to the remains, through either cultural (butchery, etc.) or natural (weathering, gnawing, trampling, etc.) means. Similar to these are anataxic processes, which cause secondary deposition and further exposure of the remains to other natural factors that will alter them further (Hesse and Wapnish 1985).
The above image shows the remains of the cowboy finally reaching his secondary place of deposition after being tossed from the top of the hill and now drawing the attention of scavenger birds – this showcases an example of an anataxic process, as the body is being scavenged due to exposure from secondary deposition.
At this point, we begin to see how all of the aforementioned processes have affected our current archaeological assemblage-in-progress: we clearly have physical and chemical signs of decay, with physical alteration due to post-mortem trauma (tossing off of a hill) and exposure (including gnawing from other animals). This results in some elements going missing, some being modified, and others being made weaker and more likely to be absent by the time the body is recovered archaeologically.
Now, we also have two processes that occur during and after archaeological excavation that, again, often get overlooked: sullegic processes, which refer to the decisions made by archaeologists when selecting samples for further analysis (O’Connor 2000), and trephic processes, which refer to the factors that affect the recovered remains during post-excavation: curation, storage, recording, etc. These are often ignored as they don’t necessarily tell us much about the context surrounding the remains, but they are vital to consider if you are working with samples that you did not recover yourself or that have been archived for a long time prior to your work.
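Since that’s a lot of similar-sounding Greek-derived terms, here’s the whole sequence gathered into one simple summary structure, in the order the post walks through them (following O’Connor 2000 and Hesse and Wapnish 1985). This is just a mnemonic aid, not a formal model – real taphonomic histories loop and overlap:

```python
# The taphonomic process categories discussed above, in rough order.
# A summary aid only; actual taphonomic pathways are rarely this linear.

TAPHONOMIC_STAGES = [
    ("biotic",      "conditions determining which organisms are present"),
    ("thanatic",    "death and primary deposition of the remains"),
    ("taphic",      "chemical/physical alteration of the remains (diagenesis)"),
    ("perthotaxic", "movement and damage: butchery, weathering, gnawing, trampling"),
    ("anataxic",    "secondary deposition and re-exposure to natural factors"),
    ("sullegic",    "archaeologists' sampling and recovery decisions"),
    ("trephic",     "post-excavation effects: curation, storage, recording"),
]

for name, description in TAPHONOMIC_STAGES:
    print(f"{name}: {description}")
```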
Environmental differences will also affect variation within the overall taphonomic process – for example, wet environments (say, like the body of water seen in the image above) will cause the body to become waterlogged, which may speed up certain taphic processes and lead to poorer preservation. More arid environments, like a desert, may result in better preservation in some cases, as there is less water to damage the bones.
Although the game certainly speeds up these processes and streamlines them in a way that removes some of the variables you would see in real life, I’d argue that Red Dead Redemption 2 might currently be the most accurate depiction of taphonomy within a virtual world. It may even present new opportunities for developing models that could further our understanding of how remains decay under certain circumstances.
At the very least, it could make taphonomic experiments easier and less smelly!
If you’ve been reading this blog for a while, you probably have a good idea of what zooarchaeology is (and if you’re new, feel free to read that post here). But it’s not just about looking at animal bones and identifying them…well, okay, it’s a lot of that. But there’s lots more to it than just that.
Let’s get scientific, shall we?
Back in the United States, I was introduced to archaeology as part of the humanities – my BA degree was in classical archaeology and anthropology, so I didn’t really get much training in the practical aspects of the discipline, let alone any of the scientific approaches to archaeological analysis.
Cut to a few years later and I’m desperately trying to relearn what an electron is! That’s not really an exaggeration, either – by the time I was in my MSc program for Archaeological Sciences, it had been probably five years since I had my last science class. It was definitely a struggle at times, but completely doable with an extra bit of studying and work towards understanding and grasping concepts that seemed so far out of my reach when I first began.
Even though I knew exactly what I was getting into, it was still a bit of a surprise to me that by the end of my MSc year, I was in the lab doing independent work for my dissertation research. I was investigating fishing activity in the Orkney Islands, using scanning electron microscopy (or SEM) to examine small fish vertebrae for evidence of consumption (digestion, burning, butchery), and stable isotope analysis of carbon and nitrogen to see whether these fish were locally caught and contributed significantly to the inhabitants’ diet. I spent most of my summer watching fish bones dissolve in the isotopes lab, extracting collagen, and using the biggest microscope I’ve ever used in my life – it was certainly a change of pace for someone who, just two years prior, was writing ethnographic pieces as part of an anthropology degree!
So, if you’re looking into archaeology as a career and feel as though you’re lacking in science training, fear not! For starters, archaeology is a vast discipline that draws from both the humanities and the sciences, so a science background isn’t strictly necessary, although it’s probably helpful for getting a more rounded idea of the field as a whole. But if you’re really interested in the science side and feel woefully ignorant, I’d like to believe that I’m an example of someone who was completely science illiterate and can now comfortably refer to themselves as an archaeological scientist. It’s totally possible!
To wrap-up, here are a couple of examples of utilising archaeological science for the purposes of zooarchaeology – of course, this isn’t an exhaustive list at all, but these are arguably the most popular scientific approaches to zooarchaeological research:
Stable Isotope Analysis
Stable isotope analysis isn’t a new method – its origins can be traced back to the 1970s – but it’s still a popular and useful tool for extracting further information from faunal remains. Isotopes of carbon, nitrogen, strontium, and oxygen can be measured through this method and used to investigate past diets, subsistence strategies, and the migration of both humans and animals in the archaeological record. To analyse stable isotope levels, collagen must be extracted from the bone and run through a mass spectrometer to measure the isotope ratios. This method is one of the best ways for zooarchaeologists to connect their faunal bones to the “bigger picture” of their site’s archaeological context; in particular, stable isotope analysis can reveal finer details regarding the relationship between humans and animals in the past.
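For the curious, those isotope ratios aren’t reported raw – they’re expressed in the standard “delta” notation, comparing the sample’s ratio to an international standard (VPDB for carbon, AIR for nitrogen) in parts per thousand (per mil). A minimal sketch, with a made-up measurement for illustration:

```python
# Delta notation for stable isotope ratios, in per mil (parts per
# thousand): delta = (R_sample / R_standard - 1) * 1000.
# The sample ratio below is invented purely for illustration.

def delta_value(r_sample, r_standard):
    """Convert an absolute isotope ratio to delta notation (per mil)."""
    return (r_sample / r_standard - 1) * 1000.0

VPDB_13C_12C = 0.0112372  # commonly cited 13C/12C ratio of the VPDB standard

# A hypothetical bone-collagen measurement:
sample_ratio = 0.0110124
print(round(delta_value(sample_ratio, VPDB_13C_12C), 1))  # -> -20.0
```

A δ13C value around -20 per mil is the sort of figure often associated with collagen from a terrestrial, C3-plant-based food web – which is how numbers like this get translated into statements about diet.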
Zooarchaeology By Mass Spectrometry (ZooMS)
ZooMS is arguably one of the most useful advancements in archaeological science, specifically for zooarchaeologists. This method allows for better identification of faunal bone, especially smaller, more fragmented pieces that may be utterly unidentifiable to the human eye. ZooMS works on the principle that each species has certain protein sequences that are specific to it. ZooMS allows these sequences to be isolated and measured, providing a sort of “code” that corresponds to a species and allows for identification. Although not perfect – the method is not always reliable when distinguishing between two closely related species (for example, differentiating a wild and domesticated version of the same animal – see: wild boar vs domesticated pig) – it’s still a huge improvement in making confident identifications in faunal bone analysis.
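The matching step behind that “code” can be sketched in a few lines: measured peptide masses from a bone fragment are compared against reference marker masses for candidate taxa, and the taxon with the most matches wins. All the masses and taxon names below are invented for illustration – real ZooMS uses published collagen marker series, and close relatives can share markers (which is exactly why the wild boar vs pig problem exists):

```python
# Toy sketch of the idea behind ZooMS (collagen peptide mass
# fingerprinting). Reference masses and taxon names are invented for
# illustration only; note the shared third marker, which is why closely
# related taxa can be hard to tell apart.

REFERENCE_MARKERS = {
    "sheep_like":  [1180.6, 2883.4, 3093.5],
    "cattle_like": [1192.6, 2853.3, 3093.5],
}

def best_match(measured_masses, tolerance=0.5):
    """Return the taxon whose markers match the most measured masses."""
    scores = {}
    for taxon, markers in REFERENCE_MARKERS.items():
        scores[taxon] = sum(
            any(abs(m - ref) <= tolerance for m in measured_masses)
            for ref in markers
        )
    return max(scores, key=scores.get)

print(best_match([1180.5, 2883.6, 3093.4]))  # -> sheep_like
```

In reality this is done with mass spectrometry software against curated marker databases, but the underlying logic – fuzzy matching of masses to species-specific markers – is the same.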
Ancient DNA (aDNA)
Ancient DNA is one of the more recent developments within archaeological science – by utilising DNA recovered from archaeological remains, archaeologists can examine how processes such as domestication affected the genetics of animals in the past. aDNA, often paired with morphological analysis, can provide archaeologists with clear patterns of genetic modification over time and track morphological variation, offering more detail on how animals adapt to their ever-changing environments. Given how new this method is, I’d argue we’ve only really scratched the surface of what zooarchaeologists can do with aDNA – be on the lookout for new breakthroughs and amazing research coming out of this field in the near future!