
Plate to Pixel: The infinite shoebox
Mine-Craft the Prequel: The Photographic Story of East Midlands Coal.

Paul Fillingham
November 2022
From magic lantern shows of D.H. Lawrence’s birthplace to social media images depicting the last days at Thoresby Colliery – Mine-Craft the Prequel features a selection of rare and outstanding photographs spanning over one hundred years of East Midlands coal mining history. With exclusive access to The Coal Authority archive of some 47,000 digital images, the project dispenses with nostalgic iconography and presents a realistic account of an increasingly mechanised industry, told through the lens of successive photographic technologies.
Compiled by historian David Amos and linguist Natalie Braber with contributions from archivist Helen Simpson and digital consultant Paul Fillingham, the publication was inspired by a Nottingham Trent University outreach project – capturing eyewitness accounts from former mineworkers to enhance the descriptions (metadata) of photographs stored in The Coal Authority media archive. Published by Thinkamigo Editions to accompany a photographic exhibition at the D.H. Lawrence Birthplace Museum, the book is divided into several themes and case studies.
Mine-Craft the Prequel
Natalie Braber and David Amos
Contributors: Paul Fillingham and Helen Simpson
52 pages, paperback
ISBN: 9781399926232
£8.50 (UK)

A digital camera captures the coalface in the final days at Thoresby Colliery, August 2014. Picture source: Anthony Kirby.
The following excerpt explores the development of imaging technology from the very earliest plate cameras through to mobile phones and considers the challenges presented by social media and artificial intelligence (AI).
In the early days of the internet, it was thought the web would collapse under increased demand for high-quality images and other rich media (audio/video). Moore’s Law (first postulated in 1965 and revised in 1975) was a benchmark for technological development, observing that computing power doubles roughly every two years while production costs fall. Memory capacity, improved sensors, and even the number and size of pixels in digital cameras and screens are strongly linked to Moore’s Law. The web did not collapse as feared – but the amount of processing power required to feed the world’s appetite for video and images is simply mind-boggling.
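As a rough sense of scale (a back-of-envelope sketch, not a measurement), the doubling rule compounds dramatically over the decades covered by this book:

```python
# Back-of-envelope illustration of Moore's Law: capacity doubling
# roughly every two years, compounded over several decades.
def moores_law_growth(years: float, doubling_period: float = 2.0) -> float:
    """Return the growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# From the 1975 revision of the law to the 2014 closure of Thoresby
# Colliery is 39 years -- roughly a 700,000-fold increase.
print(f"{moores_law_growth(2014 - 1975):,.0f}x")
```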
Only a few decades ago, photography was expensive and technically demanding – more so for the pioneering documentary photographers who chose to work in the hazardous, low-light conditions afforded by coal mines and industrial sites at the beginning of the 20th century. These barriers to entry mean that good quality images from the past possess a rarity that makes them very special.
Browsing, selecting and annotating pictures on a mobile device is fundamentally the same activity as sorting through family photographs stored in a shoebox. Anyone who has experienced viewing cherished photos with older family members will appreciate the sense of wonder as they recall stories and anecdotes. For historians, capturing this layer of information (metadata) is an essential part of understanding what is represented in an image.

Timeline showing imaging technology milestones, aligned with key chapters and case studies presented in the book.

In the late 1970s, Philips LaserDiscs were introduced by the National Coal Board to deliver technical training. This 30 cm (12 in) diameter optical videodisc format was later superseded by PC-based systems. Picture source: Paul Fillingham
At the time of the pit closures in the 1980s, photo-chemical processing was already being replaced by magnetic media, giving rise to a generation of reusable video formats and laying the groundwork for image capture, storage and transfer using digital devices. Coal mining history has been preserved using many types of photographic media and technology, each one purporting to be more robust than the last. Irrespective of the type of media used, the photographers of the time likely believed they were capturing images for posterity – when in fact film fades when exposed to light, videotapes oxidise, and digital media can be adversely affected by strong magnetic fields or even corrupted by computer viruses.
All photographic formats eventually reach a point of obsolescence, at which it becomes difficult to view the content in the way it was originally intended.
To mitigate the loss of historic images, organisations like the Media Archive for Central England (MACE) and the British Film Institute (BFI) continue to preserve film and video content.

TV news camera-crew at Bilsthorpe colliery on the first morning of picketing in the Notts coalfield, 12 March 1984. Picture source: Mansfield Chad

The Regent Cinema, Kirkby in Ashfield, likely taken in 1932, the year the Anglo-German musical ‘Happy Ever After’ was released – as suggested by the signage and film posters. Picture source: Kirkby Heritage Centre.

Photo-sharing service Flickr extracts useful metadata and allows users to tag images with keywords to help index the content, thus making it more discoverable via the platform’s image-search feature.
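Tagging makes images discoverable because the platform can build an inverted index from keyword to image. A minimal sketch of the idea in Python – the filenames and tags below are invented for illustration, loosely echoing photographs in this article:

```python
from collections import defaultdict

# Hypothetical image records with user-supplied tags, in the style
# of a photo-sharing platform's keyword metadata.
photos = {
    "thoresby_coalface.jpg": ["coal", "thoresby", "colliery", "2014"],
    "bilsthorpe_picket.jpg": ["coal", "bilsthorpe", "strike", "1984"],
    "regent_cinema.jpg": ["kirkby", "cinema", "1932"],
}

# Build an inverted index: each tag maps to the set of images carrying it.
index = defaultdict(set)
for filename, tags in photos.items():
    for tag in tags:
        index[tag].add(filename)

# Keyword search then reduces to a dictionary lookup.
print(sorted(index["coal"]))
# ['bilsthorpe_picket.jpg', 'thoresby_coalface.jpg']
```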

Vannevar Bush’s ‘Memex’ (1945) proposed a method of harnessing metadata utilising the photo-mechanical technology that was available at the time.
In his essay ‘As We May Think’ (The Atlantic magazine, 1945), Bush described a desktop memory machine he called the ‘Memex’ – incorporating microfilm scanning, document annotation, sorting and retrieval.
Internet pioneers Doug Engelbart (credited with developing the graphical user interface in 1962 and inventing the computer mouse in 1963) and Ted Nelson (who coined the term ‘hypertext’ in 1965) were influenced by Bush’s ideas and made efforts to include metadata as a foundation of the World Wide Web. However, when the web entered mainstream use in the mid-1990s, the semantic structure of its pages – written in Hyper-Text Markup Language (HTML) – did not follow any strict protocol for representing metadata. Meaningful image descriptions on the web have therefore remained incomplete and elusive for nearly three decades.
Even though there is currently no single agreed structure for representing metadata on the web, several emerging standards seek to resolve the issue of cataloguing content. The web’s dominant search engine, Google, encourages content creators to publish more accurate metadata by rewarding them with better visibility in search results. Search Engine Optimisation (SEO) experts devote much of their time to improving the text density and relevance of commercial web pages in order to maintain a competitive advantage.
For many years, Google could only read text when indexing a web page. Indexing images relied on programmers adding a short image description to their HTML code. This tagging method (alt-tags) was sometimes abused by users who would add irrelevant keywords in order to skew Google search results to their advantage – a practice known as ‘keyword stuffing’. Whilst alt-tags are still an important form of metadata, Google has since developed artificial intelligence (AI) that can ‘see’, identify, and group image details into meaningful categories. Automatically curated digital photo albums are a popular feature of commercial services from Apple and Google that make use of this technology.
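Because an image with no alt text is effectively invisible to text-based indexing, auditing a page for untagged images is straightforward. A small sketch using only Python’s standard library (the filenames are invented):

```python
from html.parser import HTMLParser

# A small audit script: flag <img> elements whose alt text is
# missing or empty, since those images cannot be indexed by text.
class AltTextAudit(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                print("Missing alt text:", attrs.get("src", "?"))

html = """
<img src="thoresby.jpg" alt="Coalface at Thoresby Colliery, 2014">
<img src="untagged.jpg">
"""
AltTextAudit().feed(html)  # prints: Missing alt text: untagged.jpg
```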
Another form of metadata is the Open Graph protocol (OG), a web standard introduced by Facebook alongside the addictive ‘like’ button on social media feeds. With the necessary OG metadata in place, sharing a photograph from a website (to Twitter or Facebook, for example) allows the social media post to be automatically populated with descriptions and keywords in one click, ensuring consistency and helping to maintain a single source of truth with little effort required from the user.
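A minimal sketch of how such tags might be generated for one archive photograph – the URLs and record values below are invented, though the og: property names are the standard basics:

```python
from html import escape

# Hypothetical OG metadata for a single archive photograph.
photo = {
    "og:title": "Coalface at Thoresby Colliery, August 2014",
    "og:description": "A digital camera captures the coalface in the "
                      "final days at Thoresby Colliery.",
    "og:image": "https://example.org/images/thoresby-coalface.jpg",
    "og:url": "https://example.org/archive/thoresby-coalface",
}

# Emit the <meta> tags a social platform reads when the page is
# shared, so the post is populated automatically.
for prop, content in photo.items():
    print(f'<meta property="{prop}" content="{escape(content)}" />')
```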
The quest for a common metadata standard has been ongoing for several decades. One such initiative is the Dublin Core Metadata Element Set, which originated from a knowledge transfer conference in Dublin, Ohio in 1995. Dublin Core seeks to describe content using fifteen elements that are broad, generic, and suitable for describing a wide range of resources and content. It contains all of the necessary attributes for indexing media resources such as sounds and images, but ultimately depends upon reliable information being recorded by the content creator. The full element set is listed below, followed by a short illustrative sketch.
Dublin Core Metadata Element Set
1) Title – A name given to the resource.
2) Subject – The topic of the resource.
3) Description – An account of the resource.
4) Creator – An entity primarily responsible for making the resource.
5) Publisher – An entity responsible for making the resource available.
6) Contributor – An entity responsible for making contributions to the resource.
7) Date – A point or period of time associated with an event in the lifecycle of the resource.
8) Type – The nature or genre of the resource.
9) Format – The file format, physical medium, or dimensions of the resource.
10) Identifier – An unambiguous reference to the resource within a given context.
11) Source – A related resource from which the described resource is derived.
12) Language – A language of the resource.
13) Relation – A related resource.
14) Coverage – The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant.
15) Rights – Information about rights held in and over the resource.
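As an illustrative sketch (not taken from the book or The Coal Authority archive), a Dublin Core record for a single photograph can be serialised using the standard dc: namespace in a few lines of Python; the record values here are invented:

```python
import xml.etree.ElementTree as ET

# The official Dublin Core element namespace.
DC_NS = "http://purl.org/dc/elements/1.1/"

# A hypothetical record using a subset of the fifteen elements.
record = {
    "title": "TV news camera-crew at Bilsthorpe Colliery",
    "creator": "Mansfield Chad",
    "date": "1984-03-12",
    "type": "Image",
    "format": "image/jpeg",
    "coverage": "Bilsthorpe, Nottinghamshire",
    "rights": "Courtesy of the Mansfield Chad",
}

# Serialise the record as XML with dc:-prefixed elements.
root = ET.Element("metadata", {"xmlns:dc": DC_NS})
for element, value in record.items():
    ET.SubElement(root, f"dc:{element}").text = value

print(ET.tostring(root, encoding="unicode"))
```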
The much-quoted maxim ‘the photo never lies’ has always been open to challenge. Photo-manipulation has a long history, from the hand-tinted postcards of yesteryear to modern-day image retouching afforded by tools like Adobe Photoshop (1989) and mobile apps like Instagram (2010). Though perceived as enhancements, edits such as adjusting the position of an object, repairing damage to physical media, or compensating for poor exposure can frustrate the efforts of historians by altering the original content. In recent years, artificial intelligence has raised the bar on what we consider to be an authentic image, enabling the creation of so-called ‘deepfake’ images and even video that is indistinguishable from reality. One such technique is the Generative Adversarial Network (GAN), in which two neural networks are trained against each other – a generator that synthesises images and a discriminator that learns to tell real from fake – each forcing the other to improve. Microchip manufacturer Nvidia’s StyleGAN models use this approach to blend photographic details from many sources into hyper-realistic images. The experimental website thispersondoesnotexist.com demonstrates the technique, generating pictures of human faces from a StyleGAN trained on thousands of portraits shared on the photo-sharing platform Flickr.
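The adversarial mechanism itself fits in a few lines. Below is a toy sketch, assuming PyTorch is installed, with a simple one-dimensional distribution standing in for photographs:

```python
import torch
import torch.nn as nn

# Toy GAN: the "real data" is samples from N(4, 0.5) rather than images.
torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 4.0   # real samples
    fake = generator(torch.randn(64, 8))    # generated samples

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The mean of generated samples should drift toward 4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

Scaled up to convolutional networks and many thousands of portraits, the same tug-of-war is what yields photorealistic faces.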
‘Artificial Intelligence has raised the bar on what we consider to be an authentic image’
Originally developed to facilitate applications involving biometric scanning and facial recognition, GAN applications can also simulate the appearance of landscape features and man-made objects. AI imaging is so pervasive that it is now used to improve the quality of mobile snapshots. The skills of the photographer are not only built into the mobile phone; we have reached the stage where the device can automatically fill in missing details or remove unwanted artefacts ‘on the fly’.

Images created by Generative Adversarial Networks can be indistinguishable from reality. Simply refreshing the thispersondoesnotexist.com webpage generates another face. Only a small number can be discerned as fake.
As the boundaries between what is real and what is manufactured become increasingly blurred, metadata adds a degree of provenance to archives and knowledge exchange. Metadata describes the essential attributes of an image – its origin, subject-matter, history and ownership – and makes these accessible to future generations.
Paul Fillingham
Digital Consultant
Links
- GAN people www.thispersondoesnotexist.com
- Mining Heritage in the East Midlands www.miningheritage.co.uk
- Professor Natalie Braber Nottingham Trent University Profile