Posted: December 9th, 2011 | Author: Tabitha Hart | Filed under: articles & books, theory | Comments Off on Free materials on Social Computing
As I have blogged about before, I am very interested in theories of social technology as well as research on the ways in which technology and communication are mutually constitutive. How are technologies strategically designed to shape communication? How can such designs be improved? How can developers take social interactions into account as they plan, develop, and execute designs for technological interfaces? If you’re interested in these questions, and you’d like to explore their answers from a developer’s/technologist’s viewpoint, then you’ll want to take a look at these free materials on social computing developed by Tom Erickson of IBM Research Labs. These materials are part of the growing collection being developed by Interaction-Design.org, a venture driven by scholars and thinkers who seem wholly dedicated to sharing information and supporting conversations.
Posted: November 15th, 2011 | Author: Tabitha Hart | Filed under: articles & books | Comments Off on A glimpse into the work of corporate anthropologists
Today’s post is a very short one as I’m swamped with work and getting ready for NCA. I’d like to share the link to an article that was mentioned on the Media Anthropology mailing list. The article, which was published by the BBC a few months ago, is on applied research in organizational settings. Specifically, it’s a tantalizing glimpse into the world of corporate anthropology, written up by Genevieve Bell, a researcher at Intel.
Posted: November 2nd, 2011 | Author: Tabitha Hart | Filed under: research tools | Comments Off on Dedoose update
Dedoose, which I’ve written about before, is a web-based platform for qualitative data analysis. I recently had another look at it when two of my colleagues elected to try it out for their current research project. (Thanks for the inspiration, Colin and Damon!)
Dedoose works on a subscription model, where you pay a monthly fee to use the services. Once you have an account you upload your data onto their cloud-based system, and then you code, organize, and analyze much the way you would with computer-based tools such as TAMS and Atlas.ti.
So why would you use Dedoose rather than another software package? Well, the unique strength of Dedoose is that it facilitates collaborative coding and analysis.
With Dedoose you can share projects and/or data sets with a few simple clicks. What this means is that you can easily bring your research colleagues on board for the coding and analysis. Because all of the material is web-based, you can see the coding work that your teammates are doing in real time. There’s no need to ever merge coded documents or email updates back and forth, since everyone can access all the materials, all the time, in their real time state. For collaborative projects, a tool like Dedoose is really ideal.
What are the drawbacks? First, from what I know Dedoose currently cannot work with audio, video, or pdfs. It can handle images, but only if they are embedded in text files. If your data set consists largely of audio, video, pdf, or image files, you will probably want to use another software package. (Tip: I’ve heard rave reviews about Transana, which is geared towards the analysis of video and audio data.)
Second, files can’t be edited once they are uploaded. This isn’t necessarily a big deal, but it does mean that you need to have your data clean before you upload and start coding, since you won’t be able to do this on the fly. Also, Dedoose undoubtedly does a great deal to protect its clients’ data, but for safety’s sake you’ll want to anonymize everything before uploading.
Finally, while the subscription model has certain advantages (you only have to pay for it as long as you need it), remember that when you finish your subscription you lose access to the tool, so there’ll be no going back and coding extra data unless you extend your membership.
I’m sure there are other pros and cons I’ve missed. Have you used Dedoose? Let me know if you have — I’d be interested in hearing about your experience with it.
Posted: October 28th, 2011 | Author: Tabitha Hart | Filed under: research work, theory, writing | Comments Off on Strategies for ensuring validity and reliability in ethnographies of communication
When you are engaged in doing an ethnography of communication, how do you ensure that you are assessing your key concepts accurately? How do you make certain that your readings of the data are correct? There are a number of strategies that can be used to test the validity and reliability of work done under the aegis of the ethnography of communication.
First, does the researcher make it a point to use key terms, concepts, descriptions, and explanations used and/or provided by the people under study? (Philipsen, 1982, p. 49) One excellent model for this approach is an article on an Osage community by Pratt and Wieder (1993). In this article, Pratt and Wieder provide readers with a step-by-step description of the speech events under analysis, including detailed information on who (gender, age, experience, role) can speak for others (and why); who participates in these speech events (speaking for others) and how; how these events start, proceed, and end; how they are regulated; how/where people sit/arrange themselves; how stages of the events are ordered; the underlying reasons for the events; how people prepare for the events; what expectations govern the events; what may and may not be said; how listeners comport themselves; and the delivery (eye contact, gaze, volume, tone) of speakers. In this way Pratt and Wieder use informants’ terms and also describe very carefully, down to the smallest details, how informants see these speech events playing out.
Second, does the report expound on something that the people under study would actually acknowledge as a facet of their lives? In other words, would community members recognize the findings as something real and true about their world? (Philipsen, 1982, p. 49) One great example of this is Manning’s (2008) analysis of online forums for Starbucks baristas. In the forums the baristas let off steam about “SCOWs” (stupid customer of the week), a local concept. Manning uses the baristas’ own words to elucidate what, exactly, stupid customers are (what they do, say, etc.).
Third, does the researcher produce an analysis that actually helps people from the community under study to “better to understand [their] own social world?” (Philipsen, 1982, p. 49) When sharing your work with your informants, do they report back that it helps them to analyze, understand, or deal with the issues better, more effectively, or more successfully?
Fourth, (how) does the researcher seek out checks and/or validation of the findings from members of the community under study? (Lindlof & Taylor, 2002) Such checks can do a great deal to validate the accuracy of the findings, because ultimately an ethnographer of communication seeks to discover the meanings and understandings of the informants themselves. Two model studies of this are Baxter (1993) and Bailey (1997). In both of these cases, the researchers test their findings by sharing them with informants, asking them if they got things right, and having them provide further explanations where necessary.
Fifth, having multiple researchers on the project can function to ensure validity. Pratt and Wieder (1993) are a good example of this, with Pratt’s inside knowledge as a member of the Osage community working in combination with the experience of Wieder.
Sixth, another good strategy is to have comparative data at hand. Bailey’s (1997) article is a good model for this because he shares transcripts of typical service interactions from multiple perspectives.
Finally, intercoder reliability checks can be a very effective way of ensuring reliability in the data analysis. To do this, one must engage a second (or third, etc.) person to look over the data not only to make sure that the codes and categories seem logical, but also to test whether or not they code it consistently with the primary researcher.
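For readers who want to quantify such a check, two common measures are simple percent agreement and Cohen’s kappa, which corrects agreement for chance. The sketch below is a minimal illustration with made-up coder labels (the category names and data are hypothetical, not drawn from any of the studies above):

```python
# Hypothetical example: two coders' labels for the same ten excerpts.
coder_a = ["praise", "complaint", "complaint", "request", "praise",
           "complaint", "request", "praise", "complaint", "request"]
coder_b = ["praise", "complaint", "request", "request", "praise",
           "complaint", "request", "complaint", "complaint", "request"]

def percent_agreement(a, b):
    """Share of items the two coders labeled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the agreement expected by chance alone."""
    n = len(a)
    observed = percent_agreement(a, b)
    categories = set(a) | set(b)
    # Chance agreement: for each category, the product of the two coders'
    # marginal proportions, summed over all categories.
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

agreement = percent_agreement(coder_a, coder_b)
kappa = cohens_kappa(coder_a, coder_b)
```

A kappa near 1 indicates strong agreement beyond chance; values much lower suggest the codebook needs clearer category definitions before coding continues.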
What methods do you use to ensure the validity and reliability of your qualitative work?
References
Bailey, B. (1997). Communication of respect in interethnic service encounters. Language in Society, 26(3), 327-356.
Baxter, L. (1993). “Talking things through” and “putting it in writing”: Two codes of communication in an academic institution. Journal of Applied Communication Research, 21, 313-326.
Lindlof, T. R., & Taylor, B. C. (2002). Qualitative communication research methods (2nd ed.). Thousand Oaks, CA: Sage.
Manning, P. (2008). Barista rants about stupid customers at Starbucks: What imaginary conversations can teach us about real ones. Language & Communication, 28, 101–126.
Philipsen, G. (1982). Linearity of research design in ethnographic studies of speaking. Communication Quarterly, 25(3), 42-50.
Pratt, S., & Wieder, D. L. (1993). The case of saying a few words and talking for another among the Osage people: ‘Public speaking’ as an object of ethnography. Research on Language and Social Interaction, 26(4), 353-408.
Posted: October 21st, 2011 | Author: Tabitha Hart | Filed under: Mac, research tools | 3 Comments »
Just as ethnographers working offline produce sketches or photographs of the people, places, and artifacts that they study, so too do ethnographers of online communities. When I was collecting data for my most recent online ethnography, I decided to try capturing images of the online places & spaces I was studying, as well as the activities that I observed and engaged in there. These visual records proved to be valuable data for analyzing and making sense of the online community I studied. They have also been incredibly useful in writing up the results, since they help readers see and understand the places and phenomena being described.
Since I did my online ethnography on a MacBook Pro, I used the free native Mac functionalities (certain keyboard combinations) and apps (Grab) that I had available, which worked out very well.
Mac OSX keyboard combinations for screen captures
Using simple keyboard combinations you can quickly and easily take screenshots of your full screen, a selected area, or an open window. The images will be saved either to your desktop or the clipboard, depending on which combinations you use. I learned about these keyboard combinations through this MacRumors:Guides webpage. The basic ones listed on the page are:
Command-Shift-3: Take a screenshot of the screen, and save it as a file on the desktop
Command-Shift-4, then select an area: Take a screenshot of an area and save it as a file on the desktop
Command-Shift-4, then space, then click a window: Take a screenshot of a window and save it as a file on the desktop
Command-Control-Shift-3: Take a screenshot of the screen, and save it to the clipboard
Command-Control-Shift-4, then select an area: Take a screenshot of an area and save it to the clipboard
Command-Control-Shift-4, then space, then click a window: Take a screenshot of a window and save it to the clipboard
With my version of Mac OSX the images were saved as .png files. (The file type that they get saved as depends on which version you have, as the article notes.)
Once I had these images I renamed them and archived them with the rest of the data (fieldnotes, interviews, transcripts, etc.) that I collected. Later, when I was writing up the results, I imported them into my Word documents using Insert > Picture > From File…. It couldn’t have been easier.
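For anyone facing a large backlog of screenshots, the rename-and-archive step can also be scripted. Here is a minimal sketch (the folder names and the `session_label` naming scheme are my own assumptions for illustration, not a standard):

```python
import shutil
from datetime import date
from pathlib import Path

def archive_screenshots(desktop, archive_root, session_label):
    """Move the .png screenshots from one observation session off the
    desktop into a dated archive folder, renaming them consistently."""
    desktop = Path(desktop)
    session_dir = Path(archive_root) / f"{date.today():%Y-%m-%d}_{session_label}"
    session_dir.mkdir(parents=True, exist_ok=True)
    moved = []
    # Sort filenames so the numbering roughly follows the order in which
    # the shots were taken (Mac screenshot filenames sort that way).
    for i, shot in enumerate(sorted(desktop.glob("*.png")), start=1):
        target = session_dir / f"{session_label}_{i:03d}.png"
        shutil.move(str(shot), str(target))
        moved.append(target.name)
    return moved
```

Run once at the end of a session, this clears the desktop and leaves a dated folder of sequentially named images ready to be logged alongside fieldnotes and transcripts.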
Grab
Grab is a screenshot application that comes bundled with Mac OSX. Find it by opening Preview and clicking File > Grab, or simply by scrolling through your applications. Using Grab you can take pictures of your full screen, an open window, or a selection determined by you. You can also do timed shots, enabling you to take shots of things (menus, for one) that have to be activated. Once Grab is running simply click Capture on the main menu, select the type of screenshot you want to do, and follow the prompts. Images will be saved as tiff files to whatever location (desktop, folder, etc.) you select.
Third party applications
MacRumors lists several third party applications for taking screenshots, including applications that can produce moving images, i.e. screen recordings of the activities happening on your screen. Some of these are free and some you have to pay for. I haven’t tried these out myself. If you have, please write in and let me know how they worked for you.
Which option is best for you?
Deciding which tool is best for you depends, of course, in large part on what type of images you are trying to capture. In my case, I only wanted simple snapshots, so the keyboard combos and Grab were well suited to my needs. The decision also rests on how you work when you are doing your participant observations. Ultimately, I found the keyboard combos to be the most useful, because when I was “on site” doing participant observation, it was easy and convenient to hit the keys without breaking stride in my interactions. This was preferable to fiddling around with menu options, which distracted me from the activities that I was participating in. At the end of my participant observation sessions I’d have a huge stack of images on my desktop, which I’d then sort through, name, and archive. Not all of them proved to be good images, but since I erred on the side of caution by taking a lot of shots, I always ended up with enough of what I needed.
Posted: October 16th, 2011 | Author: Tabitha Hart | Filed under: articles & books | 1 Comment »
If you study online learning environments, use Hymes’ Ethnography of Communication or Philipsen’s Speech Codes Theory, or if you are learning about online ethnography, then you might be interested in a book chapter of mine that has just been published by IGI Global, titled “Speech Codes Theory as a Framework for Analyzing Communication in Online Educational Settings.” Here’s the abstract:
Knowing how best to assess and evaluate the communication that takes place in online educational settings can be a challenge, especially when the features of educational platforms continue to develop in their complexity. This chapter will discuss Speech Codes Theory, which is grounded in the Ethnography of Communication, as a theoretical and methodological framework for conducting qualitative, interpretive research. It will show how Speech Codes Theory can potentially be used to analyze and understand communication in a range of online educational settings.
The chapter can be purchased through the IGI Global website.
I’m always interested in reading more on how the Ethnography of Communication and/or Speech Codes Theory get applied to online settings, so if you have any articles, books, or other resources to recommend, do let me know.
Posted: October 12th, 2011 | Author: Tabitha Hart | Filed under: research tools | Comments Off on Now is the time to get a student license for Atlas.ti
If you are a student interested in purchasing a license for Atlas.ti, now seems like a good time to do it. Through the end of this month (October 2011) a student license will only be $79, and that price will include a free upgrade to version 7. Go here for more information.
Posted: October 11th, 2011 | Author: Tabitha Hart | Filed under: events, research work | Comments Off on AoIR taking place in Seattle this week
The annual Association of Internet Researchers (AoIR) conference is in Seattle this week, and promises an exciting line-up of panels, keynote speakers, and events. See this link for the conference program and this one to learn more about AoIR in general. If you can’t make it to the conference you can always follow the AoIR Tweets, which are tagged with #ir12.
This morning I gave a talk on online labor, a topic that I have mentioned in previous posts. My talk dealt with the pros and cons of being an online laborer in one particular community that I have studied. The pros I discussed included enhanced feelings of freedom and flexibility vis-a-vis the work; better choices for jobs and new opportunities for professional development; and the chance to engage in meaningful communication with clients and colleagues alike. The cons that I included in my talk were increased surveillance and monitoring; the control of employee communication through the use of service scripts; and tensions between increasing the scale of communication services and having to (de)personalize service communication.
This is a topic that I am increasingly interested in, so please do contact me if you’d like to learn more about this research.
What books, articles, and/or websites on the subject of online labor do you recommend?
Posted: October 7th, 2011 | Author: Tabitha Hart | Filed under: events | 1 Comment »
Who was Ada Lovelace? (Bonus points to you if you already know the answer.)
Ada Lovelace was an early pioneer in computing. She collaborated with Charles Babbage (the mathematician who conceived of programmable computers) and she is reputed to be the world’s first computer programmer. That’s right — the world’s first computer programmer. Talk about (geek) grrl power.
Today is the third annual Ada Lovelace Day, an occasion for celebrating the work and the achievements of women in the fields of science, technology, engineering and mathematics. In honor of this I’d like to express my heartfelt gratitude to the two women who introduced me to the use of computing in qualitative data collection and analysis: Dr. Olga Vasquez (UCSD) and Dr. Lynda Stone (CSUS). Any inclination I had towards embracing computing in communications research can be traced back to the work that I did with them.
Which women in science, technology, engineering and/or mathematics have inspired you? Share your stories and spread the word.
Posted: September 30th, 2011 | Author: Tabitha Hart | Filed under: research tools | Comments Off on Collecting Tweets: TwapperKeeper
I’ve recently become interested in tools for collecting and analyzing Tweets. I know that DiscoverText, which I’ve mentioned before, can be used for these purposes, and I’ve just begun experimenting with TwapperKeeper. TwapperKeeper is, as far as I can tell, a free tool that allows you to create archives of Tweet data associated with hashtags, keywords, or @people of your choosing. Unfortunately, TwapperKeeper can no longer be used to export the data it archives, since this apparently violates Twitter’s Terms of Service, as I learned through Mark Sample’s ProfHacker blog.
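For those who want to roll their own archive, Twitter’s public search endpoint (the JSON search service available at the time of writing) can be queried directly. The sketch below shows the general shape of the task: build a query URL and pull the archivable fields out of a response. The field names (`from_user`, `created_at`, etc.) reflect the current search API and may change, and the sample payload here is entirely made up; note too that Twitter’s Terms of Service restrict redistribution of archived Tweet data.

```python
import json
from urllib.parse import urlencode

def search_url(query, results_per_page=100):
    """Build a query URL for Twitter's public JSON search endpoint
    (endpoint and parameter names as documented at time of writing)."""
    params = urlencode({"q": query, "rpp": results_per_page})
    return "http://search.twitter.com/search.json?" + params

def extract_tweets(payload):
    """Pull the fields worth archiving out of a parsed search response."""
    return [{"id": r["id"], "user": r["from_user"], "text": r["text"],
             "created_at": r["created_at"]}
            for r in payload.get("results", [])]

# A made-up response in the shape the endpoint returns, for illustration only.
sample = json.loads("""{"results": [
    {"id": 1, "from_user": "alice", "text": "Heading to #ir12!",
     "created_at": "Mon, 10 Oct 2011 17:00:00 +0000"}
]}""")
```

The extracted rows could then be written to a CSV or spreadsheet for coding, keeping in mind the export restrictions mentioned above.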
What alternatives are out there for collecting and exporting Tweet data? Any suggestions and/or recommendations?