Posted: December 1st, 2010 | Author: Tabitha Hart | Filed under: transcribing
If anyone is looking for fast and reliable transcribers who charge affordable rates, please contact me — I have a couple of transcribers to recommend.
In the meantime, has anyone tried using callgraph.biz’s transcription services? Any reports and/or reviews would be greatly appreciated.
Posted: July 30th, 2010 | Author: Tabitha Hart | Filed under: articles & books, research tools, transcribing
Anyone in the business of analyzing talk knows that with every interview, focus group, or interaction comes the laborious task of transcribing it. When I’m really speedy I can transcribe 15 minutes of talk in about one hour, but that’s only a rough cut that doesn’t include Jeffersonian notations. When I’m adding those in, it nearly doubles the transcription time.
(Note: The Jeffersonian Notation system, developed by the late Gail Jefferson, who was an acclaimed Conversation Analyst, is a set of notations/markers that can be used to preserve phatic and other paralinguistic qualities of speech. See this Glossary of Transcript Symbols by Gail Jefferson herself.)
Is there anything to make transcription easier, short of paying someone else to do it for you? This week in the NY Times, David Pogue wrote an enthusiastic review of Dragon NaturallySpeaking for Windows.
Dragon NaturallySpeaking is a newly revamped and (according to Pogue) much improved voice recognition software package. I don’t have a copy of it myself, but it sounds like it might be a great tool for generating good (not perfect) rough cuts of recorded talk. Even better, the professional, premium, and home packages all have multiple language capabilities, including English, Dutch, French, German, Italian, and Spanish. The downside is that Dragon NaturallySpeaking is only available for PC. However, Nuance, the company behind Dragon NaturallySpeaking, does offer a software package called MacSpeech Dictate for us Mackies.
If I paid the $100-200 for the home or premium versions and had my transcription time greatly reduced, I’d think it well worth the price.
Any insight on this?
Posted: June 24th, 2010 | Author: Tabitha Hart | Filed under: research tools, transcribing
If you collect audio or video data for research purposes, then you’ve certainly had to deal with questions of transcription: how much of the data to transcribe, what transcription convention (if any) to follow, how to present transcribed data to the reader, etc. Philosophically speaking, the act of transcribing talk is much weightier than one might imagine, since it involves interpretation of the data, for researcher and reader alike. When I was preparing for my general exams, my advisor assigned me to read:
Lapadat, J. C., & Lindsay, A. C. (1999). Transcription in research and practice: From standardization of technique to interpretive positionings. Qualitative Inquiry, 5(1), 64-86.
It’s an article that I highly recommend, since it offers keen insight, as well as guiding questions, on transcribing talk. The key point that Lapadat and Lindsay raise is that although many (most?) research articles don’t typically include much detail on transcription choices and procedures, they should, since “each researcher makes choices about whether to transcribe, what to transcribe, and how to represent the record in text” (p. 66). These choices are not obvious, and they impact the interpretation of the data.
Some of the transcription choices listed by Lapadat and Lindsay are:
- How should one organize the page?
- How could transcript preparation procedures be designed to balance between competing demands of efficiency and accuracy?
- Should orthographic or phonetic transcription or a modified orthographic approach reflecting pronunciation be used?
- What paralinguistic and nonverbal information should be included, and what conventions should be used to symbolize or present it?
- What should constitute basic units in the transcript—utterances, turns, tone units, or something else?
(Note that the questions above are directly quoted from p. 67.)
Other questions raised in Lapadat and Lindsay’s article are:
- What do we include in our transcripts, and what do we leave out? For example:
- descriptions of the setting
- descriptions of the interlocutors, or other contextual factors
- descriptions of interlocutors’ roles
- facial expressions
- tone of voice
- seating/standing configuration
- other activity on the scene
- “unintelligible utterances” (p. 79), etc.
- How do we account or compensate for the data that we do not include in our transcripts?
- When (if ever) and how should we go about checking/proving the reliability and/or validity of our transcripts?
What Lapadat and Lindsay stress is that the transcriptions we produce, regardless of (or because of) our choices, are not value-free or “neutral” (p. 69) and shouldn’t be regarded as such. We shouldn’t assume that a transcript provides us with an objective, one-to-one match with reality. Because of this, Lapadat and Lindsay believe that it’s important for researchers to be able to account for the “…influences of theory and transcription methodology and their implications for interpretation” (p. 76). We have to
“…make reasoned decisions about what part transcription will play in the methodology. This includes whether to include transcription as a step, how to ensure rigor in the transcription process and reporting of results, and heuristics and cautions for analyzing and drawing interpretations from the taped and transcribed data.” (p. 81)
It’s clear that Lapadat and Lindsay feel strongly that our transcription choices should be reported in our research articles. While this may not always be prioritized or even possible (especially considering the tough restrictions on content and length in academic journals), we should at least be reflective about how we transcribe talk, and we should be able to explain and justify the transcription choices that we make. As we go about transcribing our own data, it’s our task to make thoughtful, informed decisions about the process.