Mile End Data Session

The Mile End Data Session is an initiative of the Cognitive Science research group, but encourages researchers from all over Queen Mary (and elsewhere) to come and present interactional data for group discussion and analysis.

Date / Time / Location

  • The data session is held on alternate Wednesdays. Please check the schedule for upcoming dates.
  • It is based at the Mile End campus of Queen Mary University of London, in room CS414.
  • The session usually runs from 2pm to 3:30pm.

Announcements

Announcements are sent to the MEDS mailing list; subscribe there for updates.

Remote participation

Where possible, remote participation can be arranged via Google Hangout.

Notes

Notes on each session will be uploaded to this wiki and linked, where available, from the topic column in the timetable below.

Presenting

If you would like to present at a Mile End Data Session, please contact Saul Albert or Shauna Concannon.

What is interactional data?

Broadly, any recordings (or even just data logs) of natural or spontaneously occurring human interaction, rather than scripted interviews or structured dialogues. Interactions can be local, remote, mediated or face-to-face. NB: this kind of data might include interviews or interactions in institutional/formal contexts, but in that case what the data session examines is the interview or formal context itself, and how it structures or is structured by the interaction, rather than the issues raised in the 'content' of the interview or in pre-prepared questions. In general, unless the interview situation itself is the subject of study, this kind of data may not be appropriate for a data session.

How a data session works

You bring some video or audio data, and preferably some transcripts, in any format. Everyone looks at the data, probably multiple times, and then discusses it. Data sessions are a very open format for the exchange of ideas and discussion: more a way of generating ideas than a seminar, so if you don't have everything figured out, so much the better.

Timetable

No upcoming sessions are planned at the moment.


Date | Name of Researcher | Topic
9th July 2014, 2pm-5pm, CS414 | Moira McGregor and Clare Nicholson | Clare is presenting some data focussing on a person with severe-profound learning difficulties and a care staff member interacting. The person with severe-profound learning difficulties is non-verbal and uses other, often idiosyncratic, methods to communicate. Moira has some interactional and phone-screen-capture data from people using (and recording their interactions with) their phones in social contexts.

Previous sessions

Date | Name of Researcher | Topic
21st May 2014, 4pm-5pm, CS414 | Gibson Okechukwu Ikoro | Gibson will be presenting some online chat data with which he is developing a method for automated sequence and turn identification. The data session will involve an open-minded look at the data, collaboration on qualitative analysis, and then discussion and feedback on Gibson's proposed methods. (A toy baseline for this kind of turn segmentation is sketched after this table.)
22nd January 2014 | Toby Harris | Comedy Lab: Instrumented Audiences. The Comedy Lab performance experiments were in part an exercise in instrumenting an audience. In this data session we will be looking at the resulting dataset, which includes audio-visual recordings of performer and audience, motion capture of head and wrist position for performer and audience members, and measures of breathing and display of facial affect for the audience members. We have performed some specific analyses with promising results; however, the aim of this session is to explore the challenges and opportunities of this cross-modal dataset as a whole.
29th January 2014 | Dirk vom Lehn and Saul Albert | Dance in a Day Data. The data presented here is drawn from recordings made by Saul Albert and Dirk vom Lehn of a day-long beginners' partner dance workshop. It was filmed from three camera angles, using two wireless lavalier microphones on six different leader students and on-camera barrel mics to capture environmental sound. Partner dances such as the Lindy Hop nominally involve a 'lead' partner initiating, and a 'follow' partner responding to, a spontaneously combined set of more or less conventionalized bodily movements around a dance floor, often in conjunction with rhythmic instructions, counting or music.
5th February 2014 | Louis McCallum | TBC
12th February 2014 | Rose McCabe and Jemima Dooley | NB: MOVED TO THE ITL. Acknowledgement tokens: little words that matter. This week we will be looking at a variety of naturally occurring conversational data, focussing on objects like 'Okay', 'Oh', 'yeah', 'mhmm' and other acknowledgement tokens, especially 'Okay'. There is a wealth of conversation analytic literature on this issue; one of the first examples (and the recommended reading for the session) is Beach, W. (1995). Conversation Analysis: 'Okay' as a Clue for Understanding Consequentiality. The Consequentiality of Communication, 317–348. (pdf available here)
19th February 2014 | CANCELLED
26th February 2014 | Saul Albert | Response cries. Goffman's hilarious paper on this topic introduced 'response cries' as a bucket term to describe and catalogue the squeaks, burps, giggles and fits of impassioned swearing that form a significant part of human interaction. Goffman's survey incorporated everything from the 'strain grunt' to the 'orgasmic moan', but the criterion for inclusion in the bucket was that the cry should be an apparently unmediated expression of an internal state. Looking to the later development of CA as an empirical method of investigating Goffman's phenomena, a perfect example is John Heritage's famous analysis of 'Oh' as an informational change-of-state token. More recently, Kitzinger and Wilkinson have taken some more of these terms out of Goffman's bucket (specifically those dealing with surprise) and have analysed them as an interactional achievement, grounding their analysis in relation to Darwinian theories of emotional expression. Setting aside any of these theories (so you don't really have to read any of these papers, though they're all great fun), in this data session we are going to be looking at 'oohs', 'aahs', sardonic snorts and other kinds of noises people make while watching a performance artwork at the Tate.
5th March 2014 | Kavin Narasimhan | Statistical measures of the spatial manifestation of conversational clusters. The aim of this data session will be to look at ways of obtaining statistical measures of the spatial manifestation of conversational clusters. We have three datasets: Dataset 1 comprises videos of naturally occurring human conversational clusters filmed during drinks-reception parties following seminars at QMUL; Datasets 2 and 3 are screen recordings of agent clusters produced by two different computational models. The aim is to measure specific spatial attributes and features of both people and agent clusters (e.g. the shape and size of the clusters) and then to compare the measurements with one another. No background reading is required. So far we have used various techniques to annotate and measure the spatial features of people and agent clusters: hand-coding, OpenCV algorithms, and MATLAB for image processing and transformation. Each of these methods has yielded different outcomes and intermediate outcomes. Alongside gathering feedback on our existing approaches, I also look forward to brainstorming objective ways to annotate and analyse the spatial features of conversational clusters. (A minimal OpenCV sketch along these lines follows this table.)
12th March 2014 | TBC | TBC
19th March 2014 | TBC | TBC
26th March 2014 | TBC | TBC
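
The turn-identification problem from the 21st May session can be made concrete with a toy baseline. The sketch below is not Gibson's method, just a minimal, hypothetical starting point: it merges consecutive messages from the same chat participant into a single 'turn', the kind of naive segmentation an automated method would need to improve on. The input format and names are invented for illustration.

  from dataclasses import dataclass, field

  @dataclass
  class Turn:
      speaker: str
      messages: list = field(default_factory=list)

  def merge_into_turns(chat_log):
      """chat_log: list of (speaker, message) pairs in posting order."""
      turns = []
      for speaker, message in chat_log:
          # Naive rule: a new turn starts whenever the speaker changes.
          if not turns or turns[-1].speaker != speaker:
              turns.append(Turn(speaker=speaker))
          turns[-1].messages.append(message)
      return turns

  log = [("A", "hi all"), ("A", "anyone around?"), ("B", "yep"), ("A", "great")]
  for turn in merge_into_turns(log):
      print(turn.speaker, "|", " / ".join(turn.messages))

A real chat log complicates this immediately (overlapping threads, addressee ambiguity, long silences), which is exactly what the qualitative look at the data in the session is for.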
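
For the 5th March cluster-measurement session, here is a minimal sketch of the kind of OpenCV measurement mentioned in the abstract. It assumes an overhead frame in which people or agents appear as dark blobs on a light background; the file name, threshold, and noise cutoff are placeholder values, and real footage would need calibration and tracking on top of this (OpenCV 4 API).

  import cv2

  frame = cv2.imread("party_frame.png")            # hypothetical input frame
  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

  # Binarise so that candidate clusters become white blobs on black.
  _, binary = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)

  contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)

  for i, contour in enumerate(contours):
      area = cv2.contourArea(contour)
      if area < 500:                               # skip small noise blobs
          continue
      hull_area = cv2.contourArea(cv2.convexHull(contour))
      solidity = area / hull_area if hull_area else 0   # shape compactness
      (x, y), radius = cv2.minEnclosingCircle(contour)  # size proxy
      print(f"cluster {i}: area={area:.0f}px^2, radius={radius:.1f}px, "
            f"solidity={solidity:.2f}")
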
Date | Name of Researcher | Topic
25th September 2013 | Nicola Plant | Descriptions of felt experiences in dyadic interactions. The felt experiences discussed by the participants range from painful experiences, such as a headache, to pleasant experiences, such as a yawn. We'll be looking at video and speech data for a selection of different items, with a particular interest in the nonverbal interaction, such as posture, expressions and gesture.
9th October 2013 | Sara Heitlinger | TBC
23rd October 2013 | Pollie Barden | Research video of runners and older people from the Good Gym project (http://qmat.net/project/goodgym/) working on a tablet together.
6th November 2013 | Stavros Orfanos | Group therapy / body movement therapy video data.
20th November 2013 | No session | TBC
4th December 2013 | Vincent Akkermans | The data under scrutiny are Blender video tutorials and their transcriptions. In these tutorial videos an expert user explains how to make something using particular techniques and features of the software. I'm interested in how the experts talk about their actions: for example, is there a structure to the way they talk about the what, how and why of what they do? What types of detail do they leave out? The motivation for this study is to produce a framework that can inform the development of a system that produces summarisations of interactions with Blender. (A toy illustration of cue-based utterance tagging follows this table.)
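
For the 4th December Blender-tutorial session, a toy illustration of what structure in the 'what, how and why' of expert talk might look like computationally. This is not Vincent's framework: it is a hypothetical rule-based tagger with invented cue lists, included only to make the idea concrete.

  WHY_CUES = ("because", "so that", "the reason", "otherwise")
  WHAT_CUES = ("we're going to", "i'm going to", "now we", "the next step")

  def tag_utterance(utterance):
      """Return a rough 'what' / 'how' / 'why' tag for one transcript line."""
      text = utterance.lower()
      if any(cue in text for cue in WHY_CUES):
          return "why"     # rationale for the action
      if any(cue in text for cue in WHAT_CUES):
          return "what"    # announcing the action
      return "how"         # default: describing how it is carried out

  transcript = [
      "Now we're going to extrude the top face.",
      "Press E and drag upward.",
      "I scale it down because the roof should taper.",
  ]
  for line in transcript:
      print(tag_utterance(line).ljust(4), "|", line)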