Timecode metadata are the critical link between textual content and audio or video in digital environments. Different architectures for timecode deployment have evolved independently in the creation of digital oral history collections, and all significantly increase digital accessibility. With many models now on the table, it is an appropriate time to take inventory of the available approaches, closely evaluate the relationships between these models, understand the range of textual data they are linked to, and elucidate the current “state of the art” to find common ground for future developments.
Timecodes are being put to use in two broad ways: 1) as transcription timecodes, enhancing full-text transcriptions with cross-references to time points in the source audio or video, and 2) as A/V timecodes, metadata enhancing a longer audio or video file. Within A/V timecodes, two basic models are emerging. In one, timecodes point to a single moment in the digital file, allowing the user to play forward from that point; we might call these indexing point timecodes. In the other, which we might call passage timecodes, timecodes are defined as in-points and out-points, giving meaningful content within a longer digital file its own beginning, middle, and ending.
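As an illustrative sketch only (the class names and fields below are hypothetical and not drawn from any existing standard), the three kinds of timecode described above might be modeled as simple data structures:

```python
from dataclasses import dataclass


@dataclass
class TranscriptionTimecode:
    """Cross-reference from a position in a transcript to a time point."""
    seconds: float      # offset into the source audio or video
    char_offset: int    # position in the transcript text


@dataclass
class IndexingPointTimecode:
    """A single point in time; playback proceeds forward from here."""
    seconds: float
    label: str


@dataclass
class PassageTimecode:
    """An in-point and out-point giving a passage its own beginning and end."""
    in_seconds: float
    out_seconds: float
    label: str

    @property
    def duration(self) -> float:
        return self.out_seconds - self.in_seconds
```

The essential contrast is visible in the fields: an indexing point carries one time value, while a passage carries two and therefore has an intrinsic duration.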
The latter model of defining passage timecodes can operate in database environments where the in/out points are simply references that move the listener digitally (hypertextually) to the passage of interest. In other contexts, practitioners manage oral histories by hard-editing passages permanently, creating segments or clips from the full-length digital source file.
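A minimal sketch of the hard-editing approach, assuming an uncompressed WAV source file; the function name and parameters are hypothetical, and real collections would more likely use a dedicated A/V tool than Python's standard-library wave module:

```python
import wave


def extract_passage(src_path: str, dst_path: str,
                    in_seconds: float, out_seconds: float) -> None:
    """Write the passage between in/out points to a new, standalone clip."""
    with wave.open(src_path, "rb") as src:
        rate = src.getframerate()
        # Seek to the in-point, then read only the frames inside the passage.
        src.setpos(int(in_seconds * rate))
        frames = src.readframes(int((out_seconds - in_seconds) * rate))
        with wave.open(dst_path, "wb") as dst:
            dst.setparams(src.getparams())  # same channels, width, rate
            dst.writeframes(frames)
```

The database model would instead store only the two numbers (in-point and out-point) and leave the source file untouched, which is the trade-off at issue: hard-edited clips are portable and self-contained, while referential passages remain revisable.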
All timecode deployments require choices to be made: how frequently transcription or indexing point timecodes occur, and how long and comprehensive passage timecodes should be. No standards govern how these choices are made, and the different approaches have distinct strengths and weaknesses. I hope to have the opportunity to compare notes with others using the various models, determine the trade-offs between them, establish what can and cannot be standardized, and allow digital oral history stewards to make better-informed future investments in software.