Posted: July 15th, 2010 | Author: Tabitha Hart | Filed under: research tools | Comments Off on Ethnography app for iPhone
Has anyone tried the ethnography-themed “EverydayLives” app for the iPhone? It purports to help researchers with in-the-field collection of still images and video. It seems to have tagging, note-taking, and archiving functionalities. I believe you can also easily share the data with other team members. There’s a short video demo of it on YouTube. So far it’s difficult to find any substantive reviews of it. It’s USD 12.00, so not a huge investment, but it would be nice to hear what other users have to say.
Posted: July 2nd, 2010 | Author: Tabitha Hart | Filed under: research tools, TAMS | 11 Comments »
Getting started with TAMS Analyzer
I’m updating my notes on TAMS as I get better at it. This should help you get started with the first-level coding of your data. As I learn more I’ll continue to share steps and tips.
- Currently TAMS only works with data in rtf format, although I understand that the upcoming version will also accommodate pdf. In the meantime, you’ll need to convert your data to rtf before you import it. (See user manual page 8.)
- I recommend creating a basic init file right away. (See user manual pages 35 & 95.) This file will save you a lot of time as you code your data, since it tells the program how to treat certain variables and/or contextual data that you mark up in your texts. Note that you have to TELL the program which file to treat as the init file. Once you’ve created it (call it “init file”), go to the file list in the workbench. Highlight the init file in the list of files, and then click the “init file” button. Now in the bottom left corner of the workbench you’ll see “Init file: name of the file you selected.” This confirms which file the system “sees” as the init file. These are the codes I put in my init file; a sample init file combining them follows the list:
- {!universal datatype=””}
- {!context role}
- {!context speaker}
- {!button speaker}
- {!button “{!end}”}
- You can also do “if” coding, like {!if speaker=”Jane”=>role=”trainer”}
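Put together, a minimal init file along these lines might look like the sketch below. (The “if” lines are just placeholders based on my own speakers and roles; adapt the names and values to your own project.)
{!universal datatype=””}
{!context role}
{!context speaker}
{!button speaker}
{!button “{!end}”}
{!if speaker=”Jane”=>role=”trainer”}
{!if speaker=”John”=>role=”student”}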
- Now, consistent with the init file, you’ll include some basic codes in each and every file you work with. Think of these as basic, structural codes that you’ve already decided on, which are linked to the init file. These are the particular ones that I’m using:
{!universal datatype=”Interview”} (or fieldnotes, or forum posts, etc.)
{role}{/role} (I code the role of the person in question, so it looks like this: {role}student{/role})
{speaker}{/speaker} (I code the name of the speaker, so it looks like this: {speaker}John{/speaker})
The benefit of steps one and two above is that in my search results I now have columns for contextual information like the type of text (interview, fieldnotes, forum posts, etc.), the speaker in question (Jane, Bob, James, etc.), and their role (student, teacher, staff member, etc.).
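To make this concrete, the top of one of my interview files might start out roughly like the sketch below (the exchange is made up, and placing the {role} tag right after the {speaker} tag is just my own habit, not something the program requires):
{!universal datatype=”Interview”}
{speaker}Barry{/speaker}{role}teacher{/role} When are you going to turn in that assignment? {!end}
{speaker}Ralph{/speaker}{role}student{/role} I’m not sure. Probably next week. {!end}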
- Other notes on the information above:
- The code {!button speaker} in my init file creates a shortcut “button” on each of my files for the {speaker}{/speaker} code. Clicking the “speaker” button is a nice shortcut for me when I code the data, since I use this particular code a lot.
- The code {!button “{!end}”} in my init file creates a shortcut “button” on each of my files for the {!end} code, which is a context code. Without the shortcut button I’d need to either type this in by hand or use the menu option Metatags>Structure>{!end}. This way, I can insert the {!end} tag with just one click. More about {!end} below.
- In my project, I’m using the context code {speaker}{/speaker} because it’s important to me to be able to link statements with a source (i.e. the person who said it). Given my large interview sample, having the capability to easily link statements/data to people is great. When I’m coding, I use the {speaker}{/speaker} code each time somebody takes a turn. The corollary to this is that I need to tell TAMS when that person’s turn of speech ends. To do this, I use the metatag {!end}. A passage of coded data would therefore look like this: {speaker}Barry{/speaker} When are you going to turn in that assignment? {!end} {speaker}Ralph{/speaker} I’m not sure. Probably next week. {!end}
- TIP (1) be careful to mark all the speakers, or you will think the wrong people are saying the things you are finding.
- TIP (2) put in {!end} whenever the value of speaker changes, or you will be misled as to who is speaking.
- Now we get to the regular data codes. As indicated above, TAMS uses squiggly brackets { } to denote coded data. The codes go on either side of the passage. The end code contains a slash: {code}piece of text here{/code}.
Code names can contain letters, numbers, and underscore characters. No spaces permitted.
Passages of text can have multiple codes; codes can be nested and can overlap.
Create a new code by entering its name into the field, then press “new”
As you create codes you’ll use the “definition” button to define them.
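As a made-up illustration of nesting (the code names here are hypothetical, not from my actual codebook), one code can sit inside another:
{frustration}I’ve asked about this twice already, and {deadline}the assignment is still due Friday{/deadline}{/frustration}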
That sums up where I am right now in my first-level coding. I’ll report back with more information as I progress.
Posted: June 24th, 2010 | Author: Tabitha Hart | Filed under: research tools, transcribing | Comments Off on Transcribing talk
If you collect audio or video data for research purposes, then you’ve certainly had to deal with questions of transcription: how much of the data to transcribe, what transcription convention (if any) to follow, how to present transcribed data to the reader, etc. Philosophically speaking, the act of transcribing talk is much weightier than one might imagine, since it involves interpretation of the data, for researcher and reader alike. When I was preparing for my general exams, my advisor assigned me to read:
Lapadat, J. C., & Lindsay, A. C. (1999). Transcription in research and practice: From standardization of technique to interpretive positionings. Qualitative Inquiry, 5(1), 64-86.
It’s an article that I highly recommend, since it offers keen insight, as well as guiding questions, on transcribing talk. The key point that Lapadat and Lindsay raise is that although many (most?) research articles don’t typically include much detail on transcription choices and procedures, they should, since “each researcher makes choices about whether to transcribe, what to transcribe, and how to represent the record in text.” (p. 66) These choices are not obvious, and they impact the interpretation of the data.
Some of the transcription choices listed by Lapadat and Lindsay are:
- How should one organize the page?
- How could transcript preparation procedures be designed to balance between competing demands of efficiency and accuracy?
- Should orthographic or phonetic transcription or a modified orthographic approach reflecting pronunciation be used?
- What paralinguistic and nonverbal information should be included, and what conventions should be used to symbolize or present it?
- What should constitute basic units in the transcript—utterances, turns, tone units, or something else?
(Note that the questions above are directly quoted from p. 67.)
Other questions raised in Lapadat and Lindsay’s article are:
- What do we include in our transcripts, and what do we leave out? For example:
- descriptions of the setting
- descriptions of the interlocutors, or other contextual factors
- descriptions of interlocutors’ roles
- gestures
- facial expressions
- tone of voice
- seating/standing configuration
- other activity on the scene
- misunderstandings
- “unintelligible utterances” (79) etc.
- How do we account or compensate for the data that we do not include in our transcripts?
- When (if ever) and how should we go about checking/proving the reliability and/or validity of our transcripts?
What Lapadat and Lindsay stress is that the transcriptions that we produce, regardless of (or because of) our choices, are not value-free or “neutral” (p. 69) and shouldn’t be regarded as such. We shouldn’t assume that a transcript provides us with an objective, one-to-one match with reality. Because of this, Lapadat and Lindsay believe that it’s important for researchers to be able to account for the “…influences of theory and transcription methodology and their implications for interpretation” (p. 76). We have to
“…make reasoned decisions about what part transcription will play in the methodology. This includes whether to include transcription as a step, how to ensure rigor in the transcription process and reporting of results, and heuristics and cautions for analyzing and drawing interpretations from the taped and transcribed data.” (81)
It’s clear that Lapadat and Lindsay strongly feel that our transcription choices should be reported in our research articles. While this may not always be prioritized or even possible (especially considering the tough restrictions on content and length in academic journals), we should at least be reflective about how we transcribe talk, and we should be able to explain and justify the transcription choices that we make. As we go about transcribing our own data, it’s our task to make thoughtful, informed decisions about the process.
Posted: June 18th, 2010 | Author: Tabitha Hart | Filed under: articles & books | Comments Off on Gender and ethnographic research
Warren, C.A.B. and Hackney, J.K. (2000). Gender Issues in Ethnography. Thousand Oaks, CA: Sage.
Warren and Hackney’s interesting book treats how gender influences, shapes, affects, and is part of ethnographic data collection, especially a researcher’s experiences in the field. Below are some of my notes on Warren & Hackney’s key ideas & concepts.
Gender DOES influence ethnography and the fieldwork experience
- “Gender is built into the social structure of…social orders, across time and space, permeating other hierarchies of race or status. Living within a society or visiting one as a fieldworker presupposes gendered performances, interactions, conversations, and interpretations on the part of both the researcher and respondents.” (1)
- “…gender shapes the interactions in our settings; it shapes entrée, trust, research roles and relationships…” (3) It shapes the fieldwork experience as well as the knowledge produced in the fieldwork setting.
- Ethnography has stages, including: “entrée into the setting, finding a place, fieldwork roles and relationships, research bargains, trust and rapport, and leaving the field. Gender both frames these stages…” (3)
- As ethnographers, how we enter a fieldsite and how we are received/perceived by people in that fieldsite is affected by gender, as well as: “marital status, age, physical appearance, presence and number of children, social class, and ethnic racial or national differences…” (5)
- In terms of access, the authors differentiate between physical access and access to actual meanings (see p. 6) and note that gender will affect both of these factors.
- When we enter a culture, we essentially become part of its “landscape of contemporary life” (11) and who we are is established in part through the “existing cultural stock of knowledge and action available” (12) in that context. This includes the cultural-historical attitudes towards and beliefs about gender, of course.
- Your place in the fieldsite probably won’t be static. These authors believe that our “roles and relationships” in the fieldsite are constantly being negotiated and renegotiated – we shouldn’t take them for granted or assume that they won’t change. (14)
- Gender (yours, informants’) will affect interviews in some way, whether in what people wish to disclose to you, or in how they communicate with you, or how they view you, etc.
- Even if you connect with informants through shared gender, your rapport/connection will be further affected by perceptions of/evaluations of other factors like “education, marital status, and social class” (39)
Practical Considerations for ethnographers, especially female ethnographers
- Marital status/Parental status: What is a “legitimate” woman in the cultural context where you are collecting data? These authors note that in some societies married women and mothers are regarded as fuller, more legitimate social members than single or childless women. Factors like this can affect how you are viewed and treated by informants. (see pp. 8-11) Shared characteristics (marriage, parenthood, others) can possibly help you to establish a connection with your informants. How might differences affect your connections with informants?
- Clothes: “Sometimes the research task is facilitated by wearing clothes that are the same as one’s hosts, sometimes not.” (22) Dress and hairstyles: “may be adopted to fit into the culture’s gender roles, to disassociate oneself from those roles for some particular purpose, or to satisfy other demands based on age or social class.” (23) What are you expected to wear, based on your identity, or your preferred identity in that setting?
- Body norms: “researchers own conformity or nonconformity with [body norms] [has] research consequences.” (25)
- Consider etiquette, boundaries, norms, meanings, etc. related to gender in that cultural context. What are the expectations about your behavior as a (female) ethnographer? What are the rules in regards to talking with informants?
Other reflective questions
See excerpt from Krieger on p. 58: How do informants expect us to behave? How do informants expect each other to behave? What ideas are we bringing with us into the site? What gender-related expectations are flexible, breakable vs. inflexible, required? How can gender hinder or assist us in the data collection?
Other interesting concepts
- Fictive kin (14-15) – as a stranger/ethnographer establishing your place in a fieldsite, you might be adopted as a family’s child, or people’s sister, etc.
- Cross-gender behavior (15-) – to what extent are you permitted to flout or cross gender lines as a stranger/ethnographer in the fieldsite?
- Permitted deviance: “ways in which norms differ from behavior or norms for foreigners differ from norms for natives.” (58)
My comments/questions
- Assumptions about and expectations related to gender will vary from culture to culture. How do our assumptions as a research team, or our personal assumptions, compare to those of our informants, or to those of the informants’ cultures-at-large? We should be aware of this as we conduct our fieldwork.
- We should always have some background information on gender norms in the settings we study, and be aware of how they compare to gender norms in our home settings.
- One message of this book is to consider what aspects of informants’ worlds you are/are not seeing as an ethnographer. What percentage of those worlds do you actually have access to? Be aware of the limits of the information that you are collecting.
- What roles are we as researchers being “cast in” (35) because of gender (ours, our informants’)? What control do we have over that? How can we negotiate that?
- In writing up fieldwork for articles, conferences, etc. what do researchers put in/leave out in regards to gender in their methodology sections? To what extent should their gender and the gender of their informants be treated, discussed? How much of the gender element is necessary/important to expound on to explain findings, to make sense of the study? (see p. 39 onwards)
- As discussed by these authors (see p. 49 onwards) fieldnotes are not neutral, timeless documents, but are definitely rooted in the socio-cultural attitudes of the time/place/period in which they are written. Furthermore, they are rooted in our individual characters, beliefs, assumptions, etc. Our fieldnotes say as much about us – the writers – and our times as they do about the people we observe and interview. How do you deal with this fact in analyzing and writing up your fieldwork?
- Perhaps we should incorporate an ecological analysis into our work to be more aware of the different contextual layers that influence meaning, meaning making, and our own roles as ethnographers in the fieldsite? For example, include some cultural-historical analysis at macro and micro levels?
- To what extent do we modify our behavior to fit in with the gender norms/expectations where we collect data? Our attitudes, plans, results vis-à-vis modifying our behavior should be documented, discussed.
- How does gender (as well as other gender-related characteristics covered in this book) make it easier or more difficult to gain trust, to establish rapport, to get deep information? How do these characteristics influence our interviews, our observations, our interactions?
Posted: June 11th, 2010 | Author: Tabitha Hart | Filed under: research tools, TAMS | Comments Off on TAMS Analyzer
To analyze the data for my project on online intercultural communication I have decided to use TAMS Analyzer. TAMS is an open source data analysis tool written for Mac OS X, and it’s free. Yes, free. From what I have learned so far it supports complex qualitative coding. You can also use it to generate different types of reports, such as counts and, of course, lists of sorted codes/coded passages. The documentation was helpful up to a point, and now I’m simply learning by doing. I’ll attach a short summary of my notes here, which might be useful to those just picking TAMS up. The page numbers refer to the “TAMS Analyzer User Guide,” a pdf document which comes bundled with the software itself. The entire package can be downloaded here.
Using TAMS
- Material has to be rtf. Recreate the docs as rtf files and import into TAMS
- You need to manually save these windows all the time.
- init file: create this to tell the program how to code contextual data (p. 35, 95)
- Have context codes here for “role” (student, trainer, staff)
- Have file types (fieldnotes, interaction, interview, forum etc.)
- Have “person” or “name” as contextual data
- Have “topic” variable
- You can also do “if” coding, like if person = Bob then role = trainer
- Each “document” that you import will have a name – be consistent with your naming scheme (best to match it with original documents in your archive)
- Use “universal codes” (metatags) to note, for example, what type of document it is (interview, fieldnotes, etc.) (p. 19)
- universal codes are generated for every results window record and hold their value through the whole document.
- At the top of the document, put {!universal datatype=”Interview”}
- This will produce one column in your output called “dataType” and for records from this document it will fill it with “Interview”.
- Note that the “horizon” (or scope) of universals is the end of file (eof)
- “context codes” mark distinctive attributes for a section of a document (marked by {!end} or {!endsection}). Typical repeat codes include speaker, time, question – all of which you would want to be attached to a passage of text you have coded. (See also variable tags, p.35)
- To make these (for example, to denote speaker) create the “heads up” tag like {!context speaker} at the top of the file. You’ll then insert the context tags in the file where applicable, like {speaker}John{/speaker}: {food>parsley}I hate parsley.{/food>parsley}{!end}
- Where you have more than one speaker it’s a good idea to make the document structured, i.e. to put in “sections” pertaining to the context codes. To do this:
- put the metatag {!struct} in your init file or on top of each interview if you don’t have an init file. Now you can show switches in speakers, roles, etc.
- have a context code in the file like the one above.
- At the end of the section (i.e. the end of the speaker’s turn) put in {!endsection}. With this command, context values get carried forward, but the system knows that particular section has ended. (There’s another command to wipe clear the context values, if you want.)
- TIPS: (1) be careful to mark all the speakers, or you will think the wrong people are saying the things you are finding. (2) put in an {!endsection} whenever the value of speaker changes, or you will be misled as to who is speaking.
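Putting the structured-document pieces together, a file might look something like the sketch below. It assumes {!struct} is set in the init file, and the speakers, roles, and the {workload} data code are all invented for illustration:
{!universal datatype=”Interview”}
{speaker}Anna{/speaker}{role}trainer{/role} How did your first week of the course go? {!endsection}
{speaker}Carl{/speaker}{role}student{/role} {workload}Honestly, it was a lot more reading than I expected.{/workload} {!endsection}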
- Data codes are marked with {code}interesting passage{/code}.
- Code names consist of letters, numbers, and underscore characters. No spaces permitted.
- Passages of text can have multiple codes; codes can be nested and can overlap.
- As you create codes you’ll use the “definition” button to define them.
- Coding Level 2 – there’s a “reanalysis” phase in which you re-configure codes that you’re working with. You have to set the software to “reanalysis mode” to preserve original information. You can then refine your codes.
- You can export reports from this level, too
- The > symbol shows subtype
- {sound>pig}oink, oink{/sound>pig} means that “oink, oink” is an example of sound subtype pig.
- You can have multiple levels of subtype (see the sketch below)
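For instance (this is my own made-up extension of the manual’s pig example, not something from the documentation), a two-level subtype might look like this:
{sound>animal>pig}oink, oink{/sound>animal>pig}
Here “oink, oink” would be an example of sound, subtype animal, sub-subtype pig.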
- Coding Level 3– you can identify code families (minus the “no spaces” restriction – you can use full sentences here)
- TAMS calls these “code sets”
- The “no spaces” restriction doesn’t apply here – you can use full sentences for code set names
Memos/comments can be included with a coded passage – you add them by hand inside the closing tag, after the code name and separated by a space. It looks like this: {food>parsley}I hate parsley.{/food>parsley This guy’s crazy!!!}
Posted: June 3rd, 2010 | Author: Tabitha Hart | Filed under: research tools | 1 Comment »
Here’s an updated list of qualitative data analysis (QDA) programs, courtesy of Leah R:
Freeware:
AnSWR
CDC EZ-Text
ELAN
Ethno 2
RDQA
TAMS Analyzer
The Coding Analysis Toolkit (CAT) (currently free while in beta)
Weft QDA
Not free (but may have free trial versions):
AccuLine
Atlas.ti
BEST
C-I-SAID Version 1.0
Ethnograph version 5.0
Ethnovision 2.3
HyperRESEARCH 2.8.2
HyperTRANSCRIBE 1.5 (transcription to support analysis)
InterClipper Professional v1.1.3
INTERACT
INTEXT 4.1
Kwalitan 5.0
MacSHAPA 1.03
MaxQDA
NVivo 8
OCS Tools 3.5
PolyAnalyst 4.5
QDA Miner
Qualitative Media Analyzer
Qualrus
Sign Stream Version 2.0
Survey Logix
TACT version 2.1
Transana
WordSmith 4.0
WordStat
XSight
Posted: June 3rd, 2010 | Author: Tabitha Hart | Filed under: equipment | Comments Off on Audio recording equipment
When you are in the field what equipment do you like to use for making audio recordings of interviews and interactions, and why?
My all-time favorite piece of equipment was a Sony Electret condenser microphone, which I used in combination with a Sony MP3 player. I liked the Sony Electret condenser microphone so much that I’d almost go out of my way to use it now. (Almost, but not quite. Other factors, such as simplifying and minimizing my equipment, win out these days.) The Sony Electret was compact, lightweight, totally reliable, and produced excellent sound quality every time. It was also durable and traveled easily from continent to continent without me worrying about it breaking.
The next recording device that I used was an Olympus DM-10 digital recorder, which came with me to India and Turkey. The Olympus is small and portable, and it’s very easy to transfer your digital recordings to your laptop. (You simply hook it up using the USB cable that comes with it.) I liked the quality of recordings that the Olympus produced, and it traveled well. What I didn’t like about the Olympus was its peculiar folder system for sorting recordings, and its limited memory. However, the real catalyst to the end of my working relationship with the Olympus was that it’s made to work with Microsoft programs, and is not easily paired with a Mac. You can get around this with additional tools (MPlayer is one), but I eventually opted to go with a more Mac-compatible recording device.
Now when I go into the field I take along an iPod Touch paired with a MityMic external microphone. I use the iPod’s Voice Memos functionality to record. To transfer the files to my laptop, I hook up the iPod and sync it with iTunes. The voice memos appear under the “Playlists” menu.
So far I’ve had good results with this setup. The sound quality has been very good, and I’ve captured clear interview recordings, even when those interviews took place in noisy settings, such as crowded cafes in Beijing. The Voice Memos software itself is very easy to understand, and the display shows you a sweet little VU meter to indicate your recording signal level. (Whether it’s accurate or not, I have no idea.) Since now all my devices are made to be compatible with one another, there is no hassle in transferring files. There are, however, some limitations. One is that there’s only one jack on an iPod touch, which means that you can either connect the MityMic or your headphones, but not both at the same time. Because of this you can’t monitor your recordings at the moment that they are collected. Instead, you have to stop the recording, unplug the mic, plug in the headphones or earbuds, and then play it back. Furthermore, I’ve done most of my work in stable settings, with me sitting across from my interviewees at a table, with the device laid out in between us. I haven’t yet tested how this equipment would perform on the fly if I was doing participant observations or on-the-spot interviews.
Posted: May 28th, 2010 | Author: Tabitha Hart | Filed under: research tools | 1 Comment »
Loving to write does not make it any less of an arduous task, and writing good fieldnotes is, I think, a true labor of love. The best fieldnotes, i.e. the ones that will most help you in your data analysis and write-up, are those that are most thoroughly detailed and descriptive, and it is no easy task to produce these. One of the best guides I’ve found on this process is “Writing Ethnographic Fieldnotes,” by Emerson, Fretz, and Shaw.
When I first began writing ethnographic fieldnotes I was a student researcher at UCSD’s Laboratory of Comparative Human Cognition where I worked on a project about bilingual afterschool education. For that project a group of us tutored local children at an afterschool computer club. After each session, we spent long hours at our computers, writing up pages and pages of our observations and experiences. I still remember being amazed at how long it took.
Nowadays I enter the field with better-formed plans and strategies in mind. One such strategy is “bracketing,” as described by Bruce L. Berg in his excellent book, “Qualitative Research Methods for the Social Sciences.” Bracketing entails selecting “certain subgroups of inhabitants [of a social setting] and observing them during specific times, in certain locations, and during the course of particular events and/or routines.” (Berg, 2001, p. 153) In other words, you think strategically about who exactly you need to observe, doing what, where, and when. It’s also important to carefully consider what your observational procedure will be once you enter the site.
In terms of deciding what to write down, Emerson, Fretz, and Shaw advise that we first take note of and describe our initial impressions of the scene, and then move on to describing “key events or incidents” (1995, p.27). Another key point is that:
“In writing fieldnotes, the field researcher should give special attention to the indigenous meanings and concerns of the people studied. …fieldnotes should detail the social and interactional processes that make up people’s everyday lives and activities…. Ethnographers should attempt to write fieldnotes in ways that capture and preserve indigenous meanings.” (Emerson, Fretz, & Shaw, 1995, p. 12)
In other words, writing fieldnotes is an excellent way of understanding your participants’ worlds from their perspectives, including the meanings that they attach to their actions and interactions.
Aside from guides and strategies, the key thing about fieldnotes is to write them up as quickly as possible, since the longer you wait the less you’ll remember. Ideally, you’re sitting at your computer, typing away, no later than a few hours after each observation session.
Posted: May 20th, 2010 | Author: Tabitha Hart | Filed under: research tools | Comments Off on Survey use in ethnographic studies
Are you a qualitative or quantitative researcher? In academia we are typically expected to align ourselves with one or the other of these two camps, and in so doing we get swept up in the contentious debate as to which approach is best. The practical researcher should become skilled in both qualitative and quantitative approaches, recognizing that each has its strengths and weaknesses. In fact, combining qualitative and quantitative methods on a project can yield rich results.
My research project on Berlin Starbucks cafes made use of mixed methods. For that project my goal was to understand how the Starbucks baristas in Berlin, Germany made sense of, utilized, and modified the U.S. American-style customer service protocols that they were required to use. I carried out the research in three distinct phases. I began with non-participant observation in two of the Berlin Starbucks cafes. Next, I did in-depth, one-to-one interviews with a selection of the Berlin baristas and managers. Finally, I created a survey and distributed it to all of the Starbucks baristas in Berlin. (At the time of this project, there were only seven Starbucks cafes in Berlin, employing about 80 baristas. Now I believe there are 25 cafes, and I can only guess that they must have at least a couple hundred baristas.)
While I considered my project to be primarily ethnographic in nature, the survey component of the research helped me to test the nature and distribution of the themes and concepts that I had identified in the first two phases. Specifically, my surveys measured to what extent baristas felt comfortable with the customer service procedures that had been described to me in the interviews as especially difficult, stressful, or American/non-German.
Besides the results of the study, which were very interesting (if I do say so myself), I gleaned some practical knowledge and tips from this experience:
- Because the surveys were filled out by the baristas during their breaks, they had to be short enough to complete in 15 minutes. (How much time will your respondents realistically have to complete your survey?)
- My original survey draft was written in English, translated into German, and then back-translated by a second party into English. This was done to check the accuracy of the first translation and to fix any incorrect or ambiguous concepts, grammatical constructions and/or vocabulary. (Do the concepts that you are testing translate into the target language and/or culture? Do your questions make sense and/or mean what you intend them to mean in the target language and/or culture? )
- I did trial runs of the German survey with 10 baristas, who kindly gave both written and oral feedback on the questions, critiquing clarity, vocabulary, jargon, etc. More modifications to the survey were made based on this feedback. (Do the concepts that you are testing make sense to the target community? Are you using the correct terms and/or jargon for experts in that community?)
- The Starbucks store managers distributed the surveys to the baristas. The baristas were asked to fill the surveys out on the premises, as I thought this would increase the likelihood of their being completed. However, this could undoubtedly have influenced the baristas’ answers, since their privacy at work was limited. (Ideally, what sort of environment should respondents be in when they are answering the questions? Will respondents experience greater anonymity and/or privacy with a paper copy of the survey, or an electronic one?)
For more articles on mixed methods work done in the social sciences and other fields see the Journal of Mixed Methods Research.
Posted: May 13th, 2010 | Author: Tabitha Hart | Filed under: research tools | Comments Off on Qualitative data analysis software
Having finished the data collection for my current project, I’m ready to begin the next phase of focused analysis. This puts me squarely in the market for qualitative data analysis (QDA) software. In past projects I used SuperHyperQual, which was a good starter tool, easy to learn and effective for straightforward coding & tagging in a small data set. Now I’m looking for something a bit more powerful. I need a tool that accommodates a wide variety of data formats, since my data set includes audio recordings, field notes, interview transcripts, news articles, opinion posts, manuals, and screenshots of online interactions. The main functionality I need is organizational; the tool should be able to help me archive, code, and link my materials. It should allow for multiple tags and codes attached to the same piece of data, and if it could generate summaries/reports in multiple formats that would be a big plus.
And if it’s not asking too much, it should be relatively easy to learn, and (here’s the deal-clincher) Mac-compatible.
Given this, what qualitative data analysis software package should I invest in?
This site has a nice overview of a number of popular QDA programs, some of which can be used on a Mac. See this site, too. Once I’ve tested some of them out, I’ll report back here.
In the meantime, what QDA software do you recommend, and why?