TeamViewer for remote access & desktop sharing

Posted: February 28th, 2011 | Author: | Filed under: research tools | Comments Off on TeamViewer for remote access & desktop sharing

My new favorite tool for remote access and desktop sharing is TeamViewer.  It’s free, it’s easy to use, the UI is nice, and it works like a charm.  There are versions for Mac, Windows, Linux, the iPhone, the iPad, and Android.  So far I’ve only used it on my MacBook, and I was very happy with it.

Why would you want such a tool?  I can think of several scenarios for researchers in which TeamViewer would come in handy.

  • You are remote-teaching a colleague how to use a computer-based tool (like TAMS, for data analysis).  As you speak together (on the phone, via Skype, etc.) you can watch your colleague’s movements on their desktop, and thus better assist them in the learning process.  I did this with a colleague of mine recently, and being able to see her desktop while explaining the program made things much easier.
  • You want to observe a research participant’s use of a computer-based tool, but you can’t be there in person.
  • You want to engage in collaborative coding of data with a colleague in real time, but can’t be there in person.

Have you used TeamViewer?  If so, what did you use it for?

More tools for recording phone interviews

Posted: February 21st, 2011 | Author: | Filed under: research tools | Comments Off on More tools for recording phone interviews

I stand by my earlier review of using a combination of Skype + AudioHijack Pro for recording phone interviews.  If, however, you’re looking for alternate solutions, here are a couple that have recently been discussed on listservs like medianthro and anthrodesign.  I have not used any of these solutions myself.  If anyone out there has comments and/or feedback on them, please share.

Hardware Solutions

These devices get plugged into phone jacks and/or telephones:

Mini Recorder Control

THAT-1 Telephone Handset Audio Tap

Software Plug-In Solutions

I’ve mentioned Call Graph previously when discussing transcription services.  Call Graph offers a free software plug-in that you can use to digitally record Skype conversations.   The free version has ads on it, but there’s also the option to pay for a premier version that has no ads.  I imagine that Call Graph’s main motivation in offering this tool is to entice you to send your transcription work to them. You can see a tutorial on their plug-in here.

My software of choice is still AudioHijack Pro.

Platforms is a subscription-based platform that supports conference calls.  As part of the service you can have the calls recorded and archived (for a limited time) through the platform.

Google Voice is a relatively new (and mostly free) service, and it seems that it supports recording at any point during a call.  The only limitation is that you may only record calls that you receive, not calls that you initiate.  More information here.

Web-based collaborative qualitative data analysis tool: Dedoose

Posted: February 2nd, 2011 | Author: | Filed under: research tools | 4 Comments »

I’ve just recently learned about Dedoose, a web-based tool designed for collaborative qualitative data analysis.

Through browsing their website I found that Dedoose supports both qualitative and quantitative analyses.  Because it’s web-based it has certain advantages: analyses are updated in real time, there is no need to download any software to your computer, and all of your material is stored (safely?) in the cloud.  Provided you have Internet access, you and your team members can work anytime, anywhere.

I can’t speak to the user interface because I haven’t tried it out yet; however, Dedoose has received positive reviews on a few of the mailing lists that I subscribe to.  Like SurveyMonkey, Dedoose operates on a subscription model — you pay a monthly fee to use it.  You can try before you buy, though:  according to their website anyone can try Dedoose out for free for one month.

I’ll get to that *after* this dissertation is done.  In the meantime, if you have reviews of Dedoose please share them.

Online survey tool: SurveyMonkey

Posted: January 27th, 2011 | Author: | Filed under: research tools | Comments Off on Online survey tool: SurveyMonkey

If you are looking for a comprehensive, easy-to-use tool for running an online survey, I recommend SurveyMonkey.  SurveyMonkey makes it easy for you to set up and launch a survey, as well as collect, monitor, and process the results.  If you are doing a small-scale survey (10 questions or fewer, 100 respondents or fewer) you can even use it for free.  For larger surveys you can purchase a monthly membership, available at three reasonably priced levels.

If your respondents don’t have Internet access, I believe there are easy options to print out pdf versions of your survey, which you can then distribute.

What are your favorite survey tools?

Coding Analysis Toolkit (CAT)

Posted: December 1st, 2010 | Author: | Filed under: research tools | 4 Comments »

Today I attended a data collection + analysis workshop led by UMass Amherst professor Stuart Shulman.  The workshop focused on two web-based tools developed by Dr. Shulman – an old one (CAT) and a new one (DiscoverText).

Here’s a thumbnail sketch of CAT and some of its potential uses.

Coding Analysis Toolkit (CAT)

CAT is a web-based system into which you can upload text files for team-based qualitative analysis.  It is intended primarily for Atlas.ti users who are working on collaborative projects involving a number of coders.  The idea is that you upload your Atlas.ti HUs (“hermeneutic units,” which is just a complicated name for “projects”) into CAT, and then run reliability tests on your coders’ work.  CAT users are probably asking questions like these:

  • Is there consistency in how my project’s coders are labeling, categorizing, or otherwise coding particular data?
  • Is there consistency across codes?
  • Is there consistency across coded excerpts?
  • What are the codes or excerpts for which there is a strong pattern of disagreement?

If you do not have Atlas.ti data, but are looking for a platform for team/collaborative coding, CAT could also be useful to you.  The key thing here is that CAT is well suited to quickly coding very large batches of texts that are short and highly consistent.

Let me explain.

In a program like Atlas.ti you highlight and code small bits of text (for example, a word, sentence, paragraph, exchange, etc.) that are contained within a larger piece of text (such as an interview, an interaction transcript, an article or news story, etc.).  You are constantly highlighting and “tagging” contextualized data.  You are also able to see visual representations of the codes present in whichever file you are working on.

CAT, on the other hand, seeks to do away with the clicks and drags of selecting, highlighting, and tagging data with your keyboard and mouse.  Instead, CAT allows you to import broken up (or “demarcated”) data, which then gets separated into “pages” on the UI.  That is, instead of seeing one long interview transcript on the screen, I see just the first paragraph on the UI.  In one open field I type in a code (or multiple codes) for that paragraph.  Alternately, I can select a code from my list.  Once this paragraph is coded, I do a simple click to get to the next paragraph.  Again, instead of seeing the whole interview (or article, or transcript, etc.) I just see one piece of it at a time.  It’s like flipping through a book in which each “page” is a small piece of data.

This sort of approach would be well suited to examining archives of Twitter posts, Facebook status updates, memos, interviews – any type of text that is limited in size OR can easily be broken up into smaller pieces, and which has a consistent format.

As you can imagine, this would probably not be the ideal tool for you if you needed or wanted to keep your data embedded in its larger context as you were analyzing it.  CAT seems to be less suited to fine-grained analysis than Atlas.ti or other similar programs, but I can certainly see how it would be useful for doing concerted, rapid, first-run group analyses of very large data sets.

Using CAT is free, but you do need to create accounts for yourself as well as your coders.  You can also assign the coders on your project various permissions.

For more, see this introduction and this overview.

Next post:  DiscoverText

Oral consent for distal interviews

Posted: August 19th, 2010 | Author: | Filed under: ethics, research tools | Comments Off on Oral consent for distal interviews

One thing that I’m interested in learning more about is the set of rules and expectations regarding informed consent at universities and organizations outside the United States.  Obtaining informed consent for interviews is a must when you are doing research at a U.S. American university, whether you are a student or a faculty member.  U.S. American universities also have official Human Subjects Committees that vet your project and grant you permission to proceed.

As I’ve blogged about before, I’ve had to conduct distal interviews for several of my projects.  Obtaining informed consent when you don’t get to meet your interviewees face-to-face is not difficult, and can be done orally.  Today I’m posting the approved script for getting oral informed consent that I used on my latest project.  This might come in handy if you need to prepare a similar script for your own university’s HS Committee:

Oral Consent Script for Recorded Phone or Skype Interviews

Hello, this is (your name), the researcher from the department of ( ) at (University ABC).  Thank you for scheduling this call with me.

As you know from the email you received, I want to better understand (topic of research). I am interviewing people who ( ).  I hope the results of this study will help us ( ). You may not directly benefit from taking part in this research study.

If you choose to be in this study, I would like to interview you about ( ). The interview will last about XX minutes and will focus on ( ).  For example, I will ask you about A, B, and C.  You do not have to answer every question.

Some people feel that providing information for research is an invasion of privacy.

Taking part in this study is voluntary. You can stop at any time.  I will audio-record the interview with your permission.  I will transcribe (write down the words from) the interview and destroy the recording by (date). [Here you should describe what you’ll be doing with the data, how you’ll protect each person’s identity, whether or not the data will be anonymous, how the data will be stored, etc.]

Do you have any questions?

Do you give your permission for me to interview you?

May I record the interview?

If you have questions later, you can reach me by phone at ( ); on Skype at ( ); or by email at ( ). Although I keep Skype messages and e-mails private, I cannot guarantee the confidentiality of information sent online through those channels.

Transcription made easier?

Posted: July 30th, 2010 | Author: | Filed under: articles & books, research tools, transcribing | 1 Comment »

Anyone in the business of analyzing talk knows that with every interview, focus group, or interaction comes the laborious task of transcribing it.  When I’m really speedy I can transcribe 15 minutes of talk in about one hour, but that’s only a rough cut that doesn’t include Jeffersonian notations.  When I’m adding those in, it nearly doubles the transcription time.

(Note:  The Jeffersonian Notation system, developed by the late Gail Jefferson, who was an acclaimed Conversation Analyst, is a set of notations/markers that can be used to preserve phatic and other paralinguistic qualities of speech.  See this Glossary of Transcript Symbols by Gail Jefferson herself.)
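To give a rough sense of what Jeffersonian markup looks like, here is a short invented exchange (the turns and timings are made up for illustration): square brackets mark overlapping talk, numbers in parentheses are timed pauses in seconds, colons mark stretched sounds, capitals mark louder talk, and degree signs mark quieter talk.

A:  I was thinking we could [move the-
B:                          [ri::ght
A:  (0.4) move the MEETing to °thursday°
B:  okay

Even this tiny sample shows why adding the notation doubles my transcription time: every overlap and pause has to be located and measured, not just heard.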

Is there anything to make transcription easier, short of paying someone else to do it for you?  This week in the NY Times, David Pogue wrote an enthusiastic review of  Dragon NaturallySpeaking for Windows.

Dragon NaturallySpeaking is a newly revamped and (according to Pogue) much improved voice recognition software package.  I don’t have a copy of it myself, but it sounds like it might be a great tool for generating good (not perfect) rough cuts of recorded talk.  Even better, the professional, premium, and home packages all have multiple language capabilities, including English, Dutch, French, German, Italian and Spanish.  The downside is that Dragon NaturallySpeaking is only available for PC.  However, Nuance, the company behind Dragon NaturallySpeaking, does offer a software package called MacSpeech Dictate for us Mackies.

If I paid the $100-200 for the home or premium versions and had my transcription time greatly reduced, I’d think it well worth the price.

Any insight on this?

Ethnography app for iPhone

Posted: July 15th, 2010 | Author: | Filed under: research tools | Comments Off on Ethnography app for iPhone

Has anyone tried the ethnography-themed “EverydayLives” app for the iPhone?  It purports to help researchers with in-the-field data collection of still and movie images.  It seems to have tagging, note-taking, and archiving functionalities.  I believe you can also easily share the data with other team members.  There’s a short video demo of it on YouTube.  So far it’s difficult to find any substantive reviews of it.  It’s USD 12.00, so not a huge investment, but it would be nice to hear what other users have to say.

Getting started with TAMS Analyzer (first-level coding)

Posted: July 2nd, 2010 | Author: | Filed under: research tools, TAMS | 11 Comments »

Getting started with TAMS Analyzer

I’m updating my notes on TAMS as I get better at it.  This should help you get started with the first-level coding of your data.  As I learn more I’ll continue to share steps and tips.

  1. Currently TAMS only works with data in rtf format, although I understand that the upcoming version will also accommodate pdf.  In the meantime, you’ll need to convert your data to rtf before you import it.  (See user manual page 8.)
  2. I recommend creating a basic init file right away. (See user manual pages 35 & 95.) This file will save you a lot of time when you code your data, as it tells the program how to treat certain variables and/or contextual data that you mark up in your texts.  Note that you have to TELL the program which file to treat as the init file.  Once you’ve created it (call it “init file”), go to the file list in the workbench.  Highlight the init file in the list of files, and then click the “init file” button.  Now in the bottom left corner of the workbench you’ll see “Init file: name of the file you selected.”  This confirms which file the system “sees” as the init file.  These are the codes I put in my init file:
    1. {!universal datatype=""}
    2. {!context role}
    3. {!context speaker}
    4. {!button speaker}
    5. {!button "{!end}"}
    6. You can also do “if” coding, like {!if speaker="Jane"=>role="trainer"}
  3. Now, consistent with the init file, you’ll include some basic codes in each and every file you work with.  Think of these as basic, structural codes that you’ve already decided on, which are linked to the init file.  These are the particular ones that I’m using:

{!universal datatype="Interview"} (or fieldnotes, or forum posts, etc.)
{role}{/role} (I code the role of the person in question, so it looks like this: {role}student{/role})

{speaker}{/speaker} (I code the name of the speaker, so it looks like this: {speaker}John{/speaker})

The benefit of the steps above is that in my search results I now have columns for contextual information like the type of text (interview, fieldnotes, forum posts, etc.), the speaker in question (Jane, Bob, James, etc.), and their role (student, teacher, staff member, etc.).

  4. Other notes on the information above:
    1. The code {!button speaker} in my init file creates a shortcut “button” on each of my files for the {speaker}{/speaker} code.  Clicking the “speaker” button is a nice shortcut for me when I code the data, since I use this particular code a lot.
    2. The code {!button "{!end}"} in my init file creates a shortcut “button” on each of my files for the {!end} code, which is a context code.  Without the shortcut button I’d need to either type this in by hand or use the menu option Metatags>Structure>{!end}.  This way, I can insert the {!end} tag with just one click.  More about {!end} below.
    3. In my project, I’m using the context code {speaker}{/speaker} because it’s important to me to be able to link statements with a source (i.e. the person who said it).  Given my large interview sample, having the capability to easily link statements/data to people is great.  When I’m coding, I use the {speaker}{/speaker} code each time somebody takes a turn.  The corollary to this is that I need to tell TAMS when that person’s turn of speech ends.  To do this, I use the metatag {!end}.  A passage of coded data would therefore look like this:  {speaker}Barry{/speaker} When are you going to turn in that assignment? {!end} {speaker}Ralph{/speaker} I’m not sure.  Probably next week.  {!end}
      1. TIP (1) be careful to mark all the speakers, or you will think the wrong people are saying the things you are finding.
      2. TIP (2) put in {!end} whenever the value of speaker changes, or you will be misled as to who is speaking.
  5. Now we get to the regular data codes.  As indicated above, TAMS uses squiggly brackets { } to denote coded data.  The codes go on either side of the passage.  The end code contains a slash: {code}piece of text here{/code}.

  • Code names can have numbers and underscore characters; no spaces are permitted.
  • Passages of text can have multiple codes; codes can be nested and can overlap.
  • Create a new code by entering its name into the field, then press “new.”
  • As you create codes, use the “definition” button to define them.
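Putting the pieces together, a first-level-coded snippet might look something like this (the data codes “motivation” and “deadline” are invented here just for illustration; note how the nested code sits inside the outer one, and how the structural codes from the init file frame the passage):

{!universal datatype="Interview"}
{speaker}Barry{/speaker} {motivation}I signed up because {deadline}the thesis deadline was looming{/deadline} and I needed some structure.{/motivation} {!end}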

That sums up where I am right now in my first-level coding.  I’ll report back with more information as I progress.

Transcribing talk

Posted: June 24th, 2010 | Author: | Filed under: research tools, transcribing | Comments Off on Transcribing talk

If you collect audio or video data for research purposes, then you’ve certainly had to deal with questions of transcription:  how much of the data to transcribe, what transcription convention (if any) to follow, how to present transcribed data to the reader, etc.  Philosophically speaking, the act of transcribing talk is much weightier than one might imagine, since it involves interpretation of the data, for researcher and reader alike.  When I was preparing for my general exams, my advisor assigned me to read:

Lapadat, J. C., & Lindsay, A. C. (1999). Transcription in research and practice: From standardization of technique to interpretive positionings. Qualitative Inquiry, 5(1), 64-86.

It’s an article that I highly recommend, since it offers keen insight, as well as guiding questions, on transcribing talk.  The key point that Lapadat and Lindsay raise is that although many (most?) research articles don’t typically include much detail on transcription choices and procedures, they should, since “each researcher makes choices about whether to transcribe, what to transcribe, and how to represent the record in text” (p. 66).  These choices are not obvious, and they impact the interpretation of the data.

Some of the transcription choices listed by Lapadat and Lindsay are:

  • How should one organize the page?
  • How could transcript preparation procedures be designed to balance between competing demands of efficiency and accuracy?
  • Should orthographic or phonetic transcription or a modified orthographic approach reflecting pronunciation be used?
  • What paralinguistic and nonverbal information should be included, and what conventions should be used to symbolize or present it?
  • What should constitute basic units in the transcript—utterances, turns, tone units, or something else?

(Note that the questions above are directly quoted from p. 67.)

Other questions raised in Lapadat and Lindsay’s article are:

  • What do we include in our transcripts, and what do we leave out?  For example:
    • descriptions of the setting
    • descriptions of the interlocutors, or other contextual factors
    • descriptions of interlocutors’ roles
    • gestures
    • facial expressions
    • tone of voice
    • seating/standing configuration
    • other activity on the scene
    • misunderstandings
    • “unintelligible utterances” (p. 79), etc.
  • How do we account or compensate for the data that we do not include in our transcripts?
  • When (if ever) and how should we go about checking/proving the reliability and/or validity of our transcripts?

What Lapadat and Lindsay stress is that the transcriptions that we produce, regardless of (or because of) our choices, are not value-free or “neutral” (p. 69) and shouldn’t be regarded as such. We shouldn’t assume that a transcript provides us with an objective, one-to-one match with reality. Because of this, Lapadat and Lindsay believe that it’s important for researchers to be able to account for the “…influences of theory and transcription methodology and their implications for interpretation” (p. 76). We have to

“…make reasoned decisions about what part transcription will play in the methodology. This includes whether to include transcription as a step, how to ensure rigor in the transcription process and reporting of results, and heuristics and cautions for analyzing and drawing interpretations from the taped and transcribed data.” (p. 81)

It’s clear that Lapadat and Lindsay strongly feel that our transcription choices should be reported in our research articles.  While this may not always be prioritized or even possible (especially considering the tough restrictions on content and length in academic journals), we should at least be reflective about how we transcribe talk, and we should be able to explain and justify the transcription choices that we make.  As we go about transcribing our own data, it’s our task to make thoughtful, informed decisions about the process.