Friday, January 24: The final version of our paper on Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos (to appear at Learning at Scale conference) is now available.
Curio and Lab in the Wild are featured in an article on popular science in the current issue of Harvard Magazine.
Friday, January 17: The final versions of our CHI'14 papers are now available:
- Quantifying Visual Preferences Around the World (the data set and other additional resources will be available online soon)
- Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos
Friday, January 17: The final versions of our IUI'14 papers are now available:
- Adaptive Click-and-Cross: Adapting to Both Abilities and Task Improves Performance of Users With Impaired Dexterity
- Active Learning of Intuitive Control Knobs for Synthesizers Using Gaussian Processes
Monday, December 9: Two of our papers got accepted to CHI: "Quantifying Visual Preferences Around the World" led by Katharina Reinecke and "Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos" led by Juho Kim.
Thursday, December 5: Our two papers got accepted to ACM IUI: "Adaptive Click-and-Cross: Adapting to Both Abilities and Task Improves Performance of Users With Impaired Dexterity" led by Louis Li and "Active Learning of High-Level Knobs for Synthesis with Gaussian Processes" led by Anna Huang.
Thursday, November 14: Today we ran a panel on Taking Research Out Into the Wild. Main message: engaging broader publics over the internet (either as participants or as collaborators) makes it possible to answer entirely new kinds of questions, and it does not require superhuman abilities or resources.
Sunday, October 27: In collaboration with the Center for Research on Computation and Society (CRCS), we are now accepting applications for 1- or 2-year Postdoctoral Fellowships. Consider applying even if you are currently seeking a faculty position: many schools will let you defer a faculty position for a year, and a Fellowship at CRCS is a great way to develop your research agenda and to expand your research network.
Friday, October 25: Our team (led by Mary Regan at UMD School of Nursing) received an R01 NIH grant to study the behavioral and nutritional factors impacting pre-term birth. A key technical enabler of this project is a mechanism, based on our PlateMate system, for scalable nutritional analysis, which will make it possible to track the nutritional intake of 400 pregnant women for several months each.
Monday, October 21: Louis Li presented a poster on his work on Adaptive Click-and-Cross at the ACM ASSETS conference. Adaptive Click-and-Cross combines several adaptive mechanisms (which were previously studied in isolation) to improve the efficiency of computer access for people with impaired dexterity.
Wednesday, October 16: Our paper evaluating filesystem provenance visualization tools was presented today at IEEE InfoVis.
Saturday, October 12: The first results from the age guessing experiment: 17-year-olds are the most efficient clickers. Past the age of 25, we all get slower at a steady rate for the rest of our lives. Read more...
Sunday, Sept 22: Reminder: CrowdCamp applications are due on Sept 25! The next CrowdCamp (a two-day hackathon for prototyping novel crowd-powered ideas) will take place at HCOMP'13 and will be led by a great team: Lydia Chilton (UW), Juho Kim (MIT), and Pao Siangliulue (Harvard).
Sunday, September 8: At HCOMP 2013, we will present a demo of Curio, a crowdsourcing platform that connects interested citizens with researchers to help answer important questions in the sciences and humanities. Read the abstract.
Saturday, July 20: At long last, we have published a data set to accompany our 2011 PlateMate paper. The data set contains 16 of the 18 images we used to evaluate PlateMate's accuracy, along with the ground-truth nutritional information for each photograph, expert estimates, and PlateMate's estimates.
Monday, June 24: We are gearing up to launch Curio, a crowdsourcing platform that connects interested citizens with researchers to help answer important questions in the sciences and humanities. Sign up now to receive an early access invitation!
Friday, May 31: More than 500,000 people have participated in experiments on Lab in the Wild.
Friday, April 19: More than 100,000 people have participated in experiments on Lab in the Wild.
Sunday, March 10: Our SPRWeb paper will receive a best paper award at CHI 2013 and our paper on predicting first impressions of web site aesthetics will get an honorable mention. Both will be presented in the Aesthetics and the Web session on Wednesday morning.
Tuesday, March 5: A few days ago at CrowdCamp, we experimented with new ways to elicit creative ideas from crowds by combining techniques from Design, Improv Theater, Crowdsourcing, and AI. Here's our story.
Saturday, January 26: The final versions of our CHI'13 papers are now available.
About The Group
The Intelligent Interactive Systems Group at Harvard was founded in September of 2009. We are interested in how intelligent technologies can enable novel ways of interacting with computation, and in the new challenges that human abilities, limitations, and preferences create for machine learning algorithms embedded in interactive systems.
About Intelligent Interactive Systems
Intelligent interactive systems are fundamentally hard to design: they require intelligent technology that is well suited to people's abilities, limitations, and preferences, and they require entirely novel interactions that give the user a predictable and reliable experience even though the underlying technology is inherently proactive, unpredictable, and occasionally wrong. Designing successful intelligent interactive systems therefore demands intimate knowledge of, and the ability to innovate in, two very disparate areas: human-computer interaction and artificial intelligence or machine learning.
What We Do
Our projects span the full range from formal user studies to statistical machine learning. We have worked on developing new intelligent technologies to enable novel interactions (e.g., the SUPPLE system) and on understanding the principles underlying how people interact with intelligent systems (e.g., the project on exploring the design space of adaptive user interfaces). Our Brain-Computer Interface project aims at developing a new set of interactions for efficiently controlling complex applications, and we are also interested in building and studying complete applications. One particular area of interest is ability-based user interfaces -- an approach for adapting interactions to the individual abilities of people with impairments or of able-bodied people in unusual situations.