I’m feeling bloody-minded and have been re-reading Don Norman’s “Design of Everyday Things.” I got on this kick because I was annoyed at Norman’s article in the latest Interactions: in discussing “social signals” (indicators of status left by human activity; for example, an empty train platform indicating you’ve missed your train), Norman has unkind words for “affordances,” a cherished interaction-design concept.

Affordances basically suggest how one might interact with an object. Say, the holes in scissor handles suggest your fingers go there, or the beveling of a UI button suggests that it is something to be clicked on. While psychologist James J. Gibson coined the idea in 1977, Norman borrowed the concept for “Design of Everyday Things,” cited it continually, and basically grafted the idea into the heads of a generation of interaction designers (like me).

Now, Norman is suggesting we “forget affordances” and concentrate on social signals. Okay, Norman’s a bomb-thrower, and he probably doesn’t really mean it. There’s nothing wrong with weighing a range of concepts and theories when considering how best to design a set of interactions around an object or service.

BUT: Norman’s casual murder of affordances got under my skin, and I started re-reading “Design of Everyday Things.” I’m only two chapters in, but one of the things I’m struck by, now that I’m out of grad school and less of a usability zealot, is Norman’s apparent belief that any object should be immediately understandable and usable and, by implication, his rejection of tools tuned for experts. Secretaries should not need to be trained to use advanced phone functions; film machines should be grasped by any ol’ schmo.

I may go to user-centered design hell for it, but: I don’t agree. There is a place for experts using machines tuned to their trained and developed sensitivities. Call me elitist, but I say: YouTube’s singular impact is to prove that most people shouldn’t be allowed anywhere near a video camera.

This all made me think of Steve Albini’s (engineer of Nirvana’s “In Utero,” dontcha know) famous mid-’90s rant “The Problem with Music.” In the midst of a wide-ranging screed against the music industry, Albini complains that untrained “crappy engineers” were recording and producing music with (then) new, easy-to-use DAT machines, saying: “Tape machines ought to be big and cumbersome and difficult to use, if only to keep the riff-raff out. DAT machines make it possible for morons to make a living and do damage to the music we all have to listen to.”

I agree. Not everything should be user-friendly.


Moderator: “Nothing says 8:30 a.m. on the last day of the conference like a math- and theory-heavy session.”

Can Markets Help: Applying Market Mechanisms to Improve Synchronous Communication

CMU People. Natch.

Basic upshot: the sender does not know how costly an interruption is to the receivers, and the receivers do not know how important the communication is to the sender.

Possible solutions:

  • Full disclosure: disclose all information in communication. Problems: privacy issues; people may not use the disclosed information.
  • Payment markets (and the focus of this session): can we utilize markets to improve communication efficiency, and how elaborate must the market be to derive benefits?

So, how about if you put, say, $5 on the value of the sent communication? Then receivers can filter out communication requests on price.
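To make that concrete, here’s a minimal sketch of a price-based message filter. The names, prices, and Message structure are my own invention for illustration, not the authors’ system:

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        text: str
        price: float  # sender's declared value of the interruption, in dollars

    def deliver(messages, interruption_cost):
        """Return messages whose price clears the receiver's threshold,
        plus the compensation the receiver collects for being interrupted."""
        delivered = [m for m in messages if m.price >= interruption_cost]
        compensation = sum(m.price for m in delivered)
        return delivered, compensation

    inbox = [Message("alice", "quick question", 1.0),
             Message("bob", "urgent: need help now", 5.0)]
    seen, earned = deliver(inbox, interruption_cost=2.0)  # only bob's clears the bar

In the variable-price version the sender picks a price per message; the fixed-price version pins every message to the same price, which (per the results below) turns out to be easier on everyone.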

Benefits of market:

  • Privacy is preserved
  • Important and urgent communication will have higher payments
  • Receivers are compensated for interruption.

Prior work has shown that markets can increase gains for senders and receivers. But, in real life, people aren’t very good at estimating value.

So, they did a lab study. Participants come in and earn money by solving puzzles. They can ask for help from other participants. And communication (asking for help) is run on a no-price market, a variable-price market, or a fixed-price market.

Results

  • Market participants earned about a dollar more than no-market participants
  • The fixed-price market seemed to perform the best.
  • The variable-price market performed worse, and was also more prone to error and higher cognitive costs (assessing a number of prices)
  • The variable-price market had significantly higher time-to-decision compared to the fixed-price market

Interesting stuff.

Network Structure, Position, Ties, and ICT Use in Distributed Knowledge-Intensive Work

Competing models/theories to explain technology use:

  • Technology Acceptance Model (TAM)
  • Social Influence Model
  • The significance of social structure, position, and ties is not well accounted for in the above models

So, what social structures affect ICT use?

Weak ties! Hearing a lot about that this conference.

And now, a completely impenetrable explanation of method. Oy, I am not smart.

Long Title: Investigating the Effect of Discussion Forum Affordances on Conversational Structures

Looking at the thread structure of bulletin boards. Like, explicit or implicit threads.

Question: Does the way the discussion is presented have an effect on how people internalize the discussion? “Determine whether these differences affect how people ‘talk.’”

Investigate the contribution of “Time” and “Similarity”.

Basically they’re looking at “flat” (blog-like) thread structure vs. threaded structure, to see if that has an impact on the language that arises.

What they found: people drop cues to other text in threaded discussions, and use them heavily in non-threaded discussions. SHOCKER!


Coordinating High Interdependency Tasks in Asymmetric Distributed Teams

Petra Saskia Bayerl and Kristina Lauche, Delft University of Technology

Challenges of remote teams

  • Coordination of tasks and processes
  • Technology restrictions
  • Process and motivation losses
  • Conflicts and trust

Whoah, they looked at offshore oil production teams and the control people onshore.

Study aim: coordination for highly interdependent tasks in distributed teams in offshore oil and gas production. Implications for technology support.

Cutting Into Collaboration: Understanding Coordination in Distributed Medical Work

Collaboration is important; when it’s distributed, it gets more difficult.

Loose collaboration structure can make coordination difficult.

Goals:

  • understand work practices and coordination in research
  • tie practice to CSCW issues such as awareness and informal interaction

Research context:

  • Hospital in a big city, biomedical engineering dept in a small town, 240 miles apart.
  • Surgery dept: define requirements, apply materials, conduct animal studies, analyze patient data
  • BME side: develop and design materials, develop models, build materials and devices.

Conflicts

  • Different views of time: surgeons go to the OR all the time; engineers are timely and pissed off when surgeons miss meetings.
  • Communication: surgeons had different views of how they should be communicated to.

Research goal conflicts

  • Surgeons: clinical studies, clear application, quick adoption
  • BME: innovation and new materials, patents, try several attempts.

So, projects do sometimes succeed. How did they overcome the challenges?

  • Use human mediators
  • Opportunistic schedule adjustment (but not always aware of remote colleagues)
  • Optimize joint retreats and one-day trips

Implications for design

  • Flexible calendaring
  • Improved activity awareness

Future work

  • what makes a project succeed: how much does coordination matter, and how do these strategies work longer term?
  • What does low-level coordination look like? Who and when do collaborators communicate? What do they share?

Summary

  • Key difficulties in coordination: perceptions of hierarchy; priorities, scheduling, and work locations; research goals
  • Strategies to overcome: human mediators; maximizing face to face contact; opportunistic schedule adjustment and ad hoc communication

Linguistic Mimicry and Trust in Text-Based CMC

Lauren Scissors et al, Northwestern University

In face to face settings, people establish rapport through behavior mimicry, to get people to like them.

We lack this in text. Is there linguistic mimicry instead?

Previous research indicates that in face-to-face speech, people tend to adopt their partner’s speech patterns.

Also, research on trust in CMC environments. Takes longer to develop trust in CMC.

Hypothesis: in a text-chat environment, high levels of linguistic mimicry are associated with higher levels of trust, and lower mimicry with lower levels of trust. Hmmm, I’m skeptical.

Description of method… they examined mimicry using lexical mimicry (noun or noun phrase), text-chat abbreviation mimicry (like “u didn’t do it”), syntactic mimicry, and emotion-related character mimicry (emoticons).

Hmm, deeply skeptical of abbreviation mimicry.
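For what it’s worth, here’s a rough sketch of how the lexical flavor of mimicry might be scored; this is my guess at the general shape of such a measure, not the authors’ actual coding scheme:

    import re

    def tokens(message):
        # crude word tokenizer; good enough for a sketch
        return set(re.findall(r"[a-z']+", message.lower()))

    def mimicry_score(a_msgs, b_msgs):
        """Fraction of B's words that already appeared in A's turns so far."""
        seen_from_a = set()
        reused = total = 0
        for a_msg, b_msg in zip(a_msgs, b_msgs):
            seen_from_a |= tokens(a_msg)  # A speaks first in each round
            for word in tokens(b_msg):
                total += 1
                reused += word in seen_from_a
        return reused / total if total else 0.0

    a = ["I think the budget proposal is solid", "we should trim the travel line"]
    b = ["agreed, the budget proposal looks solid", "ok"]
    print(mimicry_score(a, b))  # B reuses A's nouns, so the score runs high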

Within-session mimicry led to higher trust; across-session mimicry, lower trust. Hm, not sure I buy it. Many approach the microphone for some of the ol’ rip-n-tear action.

Mind Your Ps and Qs: The Impact of Politeness and Rudeness in Online Communities

Moira Burke and Bob Kraut, CMU

Goals

  • determine the impact of polite or rude language in online community interaction (newcomer integration, more efficient groupwork, death by monster (?))
  • Build machine learning tools to automatically detect polite language
  • Extend linguistic politeness theory to social interactions between strangers online

Method: survey to rate the politeness of messages in a discussion group. Code messages for the presence or absence of specific politeness strategies.

Linguistic politeness theory, Brown and Levinson: 15 positive strategies to increase a person’s positive social value, and 10 negative strategies to decrease imposition. Interesting, should read that sometime.

Generally, they found that rude behavior in a politics group “helps” (that is, gets replies), whereas in technical groups it tends not to get responses. Well, okay. Keep in mind that previous research points out that getting responses is what fuels future participation.

Next steps: train a machine to detect the language. A “politeness checker,” like a grammar checker. Hm, not sure I like that. Good writing is the avoidance of cliché, not the repetition of patterns observed elsewhere.
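Here’s the kind of thing I imagine they mean: a minimal bag-of-words politeness classifier, assuming scikit-learn and a hand-labeled corpus. The training messages below are invented; a real system would presumably train on their survey-rated messages:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled corpus (invented for illustration).
    messages = ["Could you please take a look when you get a chance?",
                "Thanks so much for the detailed answer!",
                "RTFM. This has been answered a hundred times.",
                "Why would you even post this garbage?"]
    labels = ["polite", "polite", "rude", "rude"]

    # Bag-of-words plus logistic regression: crude, but it's the shape of
    # a "grammar checker for politeness."
    checker = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    checker.fit(messages, labels)

    print(checker.predict(["please be patient with newcomers"]))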

I am waiting – Timing and responsiveness in semi-synchronous communication

Within synchronous communication, lack of responsiveness is immediately problematic. But what is the effect in asynchronous communication?

In IM, users can choose whether and when to respond to EVERY point of the conversation. And users typically multitask in IM communications.

Objective of the study: a deeper understanding of the factors that affect responsiveness. They do this with a survey. Hmm. This seems kind of obvious: responsiveness is a function of how busy I am and the perceived importance of the message. Perceived importance could be bucketed into a few things (who sent it, subject matter, provocation, etc.).

Their list:

  • Identity of the buddy
  • relationship with the buddy
  • time since last message
  • whether a message window already existed
  • whether the message window was in focus

Results Highlights and Implications

Relationship category did not have a significant impact on responsiveness. Hm, that’s surprising. But there were significant differences between individuals.

Work fragmentation is a strong indicator of faster responsiveness:

  • More keyboard activity
  • More mouse activity
  • More app-window switches.
  • Hmm, that’s interesting too.

Also: faster responsiveness if the IM window was already open. And a significant effect from whether the window is covered or not. Shocker!

  • Longer messages got faster responses
  • Questions got faster responses
  • URLs got slower responses
  • Emoticons got marginally slower responses.

Microstructures of Social Tagging

Need to get name of presenter…University of Illinois

What are microstructures?

Relatively invariant behavioral patterns that emerge from user-environment interactions.

At a functional level, cognitive processes tend to be stable across individuals.

Why do we care?

Provide explanations that cut across levels of activity: social levels (minutes, hours, weeks), cognitive levels (seconds, minutes), embodiment level (ms, seconds). Whoah.

Distributed cognition

Arguing that social tagging is a distributed cognitive system, where the internal representations of individual users interact with the “same” external representations of other users (tags)

Tagging is a form of knowledge exchange (your representation, via a tag, is interpreted by another person)

Exploratory Search

Exploratory information search characteristics

  • Lack of specific information goals
  • Info goals are defined through a series of search-and-comprehend activities
  • Claim: Mental concepts are utilized (and critical) for evaluation of info content
  • Okay, what does this have to do with tagging?
  • Claim: social tags augment the evaluation process and thus facilitate exploratory search

How do people form and use mental categories?

People naturally categorize concepts. Concept formation is a rational response to information reduction.

The study (“to show you I’m not just hallucinating”)

Follow 4 users across 8 weeks. Engage in exploratory info tasks. Use del.icio.us to collect information and prepare for a talk. Create tags for themselves and others.

Results

Somewhat impenetrable. Upshot seems to be arguing that tags are not just “metadata” but actually directly influence knowledge structures.

Influences on tag choices on Del.icio.us

Emilee Rader, Univ. of Michigan


Folksonomy: Potential for the emergence of collective meaning.

Why do people choose some tags over others?

Social: tag choices are influenced by the system. Non-social: tag choices are idiosyncratic. Which is true?

Found: users’ future tag choices are heavily influenced by tag choices they have previously made. Shocker. (And the UI specifically encourages that.)

Social hypothesis: users’ tag choices are influenced by tags applied by other people.

Organizing hypothesis: users’ tag choices are personal and idiosyncratic, NOT influenced by others’ tag choices.

So we set out to look for a connection between the small scale (individual tag choices) and the large scale (aggregate patterns). Dataset: 30 pages, hundreds of thousands of tags, thousands of users.

Final hypotheses

  • Imitation: users imitate tags that previous users have used
  • Organizing: users re-use tags they have previously used. Their study indicates this is most important. (But SK says: yeah, but the UI supports this one most visibly. You’re just verifying the UI effect, right? Am I missing something? Okay, yeah, a guy comes up and asks.)
  • Recommended: users choose suggested tags from del.icio.us (a toy sketch teasing these apart follows this list)
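To show what I mean about the UI effect, here’s a toy sketch of how you might attribute each tag application to a candidate source; this is my construction, not the paper’s model, and all the names are made up:

    def classify_tag_choice(tag, users_prior_tags, others_tags_on_page, recommended):
        """Attribute one tag application to its plausible sources.

        users_prior_tags:     tags this user has applied before (organizing)
        others_tags_on_page:  tags other users applied to this URL (imitation)
        recommended:          tags the del.icio.us UI suggested (recommended)
        """
        sources = []
        if tag in users_prior_tags:
            sources.append("organizing")
        if tag in others_tags_on_page:
            sources.append("imitation")
        if tag in recommended:
            sources.append("recommended")
        return sources or ["novel"]

    print(classify_tag_choice("python", {"python", "code"}, {"programming"}, {"python"}))
    # ['organizing', 'recommended'] -- ambiguous attribution

The rub: since the UI surfaces your own prior tags and its recommendations together, “organizing” and “recommended” are confounded by design.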



I’m taking notes as the sessions go…

Mopping up: Modeling Wikipedia promotion decisions

Moira Burke and Robert Kraut – CMU (Bob is a fairly big figure in CSCW and CHI)

How are promotion decisions made?

  • Large groups of strangers collaborate to choose caretakers known as administrators
  • We model successful candidates based on simple metrics that can be computed quickly in real time
  • How does the community use evidence to build consensus, and are there opportunities for tools to support decision-making?

Policy capture theory:

  • compare an organization’s stated criteria for making decisions with actual behaviors
  • typically: a disconnect (now, imagine that)
  • because of:
  • difficulty finding information
  • cognitive overload
  • weighting simple things too heavily
  • process-blocking or bandwagon effects in the collective process.

Method: Looked at all Admin-approvals.

  • Categorized previous contributions based on RFA guide
  • Modeled promotion success based on contribution history (a guessed-at sketch follows this list)
  • Several criteria listed from RFA
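As a guess at what “model promotion success from contribution history” looks like in practice, here’s a sketch using logistic regression; the feature names and numbers are invented (the paper’s actual criteria come from the RFA guide):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is a candidate: [article edits, talk-page edits, months active, reverts]
    X = np.array([[4200,  900, 18,  50],
                  [ 300,   40,  4,   2],
                  [8000, 2500, 30, 120],
                  [ 150,   10,  2,   0]])
    y = np.array([1, 0, 1, 0])  # 1 = promoted to admin

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[2000, 600, 12, 30]])[0, 1])  # P(promotion)

The appeal is that everything here is cheap to compute in real time, which is what makes the dashboard and bot ideas below plausible.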

Hard to measure criteria:

  • trustworthiness
  • quality of edits

How to use: apply their model of admin-effectiveness to:

  • voter dashboard or self-evaluation tool
  • Admin finder bot
  • A similar model for decision-making in other online environments, like WoW

Harnessing the wisdom of crowds in Wikipedia: Quality through coordination

Aniket Kittur and Bob Kraut, CMU

Online collective intelligence:

  • Predicting
  • Filtering
  • Organizing
  • Recommending (Netflix)

Assumptions:

  • people are making independent judgments
  • and you can automatically aggregate these judgments

But that doesn’t really work for complex information processing.

Need to coordinate, collaborate. So: how do we harness the wisdom of crowds for complex things? Just throwing people together won’t work. “Adding manpower to a late software project makes it later.”

Previous research indicates that more work / more people on wikipedia leads to better articles (“Feature articles”).

Interested in coordination among authors and editors:

  • Explicit coordination (direct communication between authors / editors)
  • Implicit coordination (structuring work so it is concentrated in a core group; a leadership role in setting scope and direction; see the sketch after this list)
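One plausible way to quantify that “concentration in a core group” is a Gini coefficient over per-editor edit counts; this is my stand-in metric, not necessarily what the authors computed. 0 means everyone edits equally; values near 1 mean a small core does nearly everything:

    def gini(edit_counts):
        """Gini coefficient over a list of per-editor edit counts."""
        counts = sorted(edit_counts)
        n, total = len(counts), sum(counts)
        if n == 0 or total == 0:
            return 0.0
        # Standard sorted-values form of the Gini index.
        cum = sum((i + 1) * c for i, c in enumerate(counts))
        return (2 * cum) / (n * total) - (n + 1) / n

    print(gini([100, 90, 5, 3, 2]))    # concentrated core: ~0.57
    print(gini([20, 20, 20, 20, 20]))  # evenly spread: 0.0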

How to measure the quality of articles? Why, with the Wikipedia 1.0 Quality Assessment Scale!

Findings:

  • Increasing the # of editors produced NO INCREASE in quality
  • Increasing coordination in communication and concentration resulted in higher quality.
  • Communication does not scale to the crowd. High communication with few editors leads to quality. But scale up the editors, and quality goes down.

Interesting stuff.

Articulations of WikiWork. (Good paper!)

Mass collaborations, a la wikis, are going to become more important to society.

We want to know how this work is sustained, so we studied barnstars on Wikipedia, to figure out how work is valued there.

They review the breakdown of barnstars on Wikipedia:

  • Editing work: 27%
  • Social and community support actions: 25%
  • Border patrol: 11%
  • Administrative actions: 9%
  • Collaborative actions and dispositions (collaboration on pages, like mediating conflicts): 8%
  • Meta-content work, the creation of tools, templates, etc.: 5%
  • And, undifferentiated: 14%

So note that editing only accounts for 27%. Reputation systems must reward more than simple production, right?

Need reputation frameworks for complex cooperative work. Very interesting.


I spent the day yesterday in a roomful of supersmart people discussing Social Networking in the Organization. Below you’ll find a not-especially-coherent splash of notes on the whole thing. Big thanks to the organizers of the workshop, who made it an interesting day and patiently tolerated my industry-skewed blathering.

Some Key Themes Discussed / Questions Raised

Going from memory here…

  • Goals and needs that populations tend to bring to social networking software. What are they?
  • Jonathan Grudin talked about the classic CSCW paper from McGrath which plots the work that goes on in groups and teams on an axis. (Haven’t read this paper…need to.) Usually people pay attention to the production portion of that plotting, but perhaps with social software we should be looking a lot more at how SNS affects the team building and member support activities of team work. Interesting.
  • A great deal of discussion on how to measure SNS activity and assign it a value. I’d been jabbering a lot, so I did not bring up Rob Cross’s Social Network Analysis work, but Cross has an interesting angle on it.
  • Design for sales. I mentioned that occasionally we’ve built features that our clients sometimes don’t actually use, but which must be there in order to make the sale. Millen mentioned there might be a paper in there somewhere.
  • What constitutes “inappropriate” content…the kind of content that an enterprise theoretically would want to control. Profanity? Sure. Thoughtful criticism of the sponsoring company’s strategic goals? Perhaps. A survey study of different organizations to find out just what constitutes “inappropriate” would be pretty interesting.
  • Plenty more…but space / attention is limited…

Finally, I railroaded a good part of the final discussion into consideration of how moderational / monitoring controls impact population activity / contributions. Patricia Romeo from Deloitte, who has led the development of their internal social networking app, was astonished at how much monitoring / moderational control is built into the SelectMinds application (and if Patricia was astonished, the IBM people were aghast…apparently anything goes on their internal SNS).

Fair enough. There’s no doubt that more moderational / monitoring intervention = less activity, less robust network. My challenge was: can we actually study and quantify the impact of different moderational / monitoring approaches on the robustness of the community? That would allow me to present clients with a cost-benefit framework around moderational control. Maybe that’s my next paper.

Some interesting topics that came up, and links

danah boyd – PhD researcher concentrating on “faceted identity” online. Related to my last few blog posts; will need to check these out.

HCI Remixed – Includes Grudin’s essay on McGrath’s older paper. Will need to buy this.



