SUPPLE: Automatically Generating Personalized User Interfaces


 


Overview


Higher-quality QuickTime versions of this video are also available: 480 × 270 (6 MB), 640 × 360 (7 MB), or high definition (12 MB).

SUPPLE uses decision-theoretic optimization to automatically generate user interfaces adapted to a person's abilities, devices, preferences, and tasks. As part of our larger effort to develop ability-based user interfaces, we have used SUPPLE to generate user interfaces for people with motor and vision impairments. The results of our laboratory experiments show that these automatically generated, ability-based user interfaces significantly improve the speed, accuracy, and satisfaction of users with motor impairments compared to manufacturers' default interfaces.

We have also developed ARNAULD, a system that elicits and models people's aesthetic preferences, allowing SUPPLE to generate novel user interfaces that accurately reflect a person's preferences. In addition, we have used SUPPLE to generate interfaces for a variety of physical and software platforms, such as desktop computers, touch panels, web browsers, PDAs, and WAP cell phones.

Despite solving a computationally hard problem, SUPPLE can in many situations generate new interfaces in under a second, making it practical as a tool for providing personalized interfaces on demand. To quickly learn more about the project, see the video or read our AAAI '08 NECTAR paper summarizing the project.
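To make the optimization idea concrete, here is a minimal sketch (not SUPPLE's actual algorithm or cost model) of interface generation cast as cost-minimizing search: each interface element has candidate widgets with a hypothetical size and expected-cost score, and a branch-and-bound search picks the assignment that minimizes total expected cost while fitting a size budget. All names, sizes, and costs below are illustrative assumptions.

```python
# Hypothetical example: each element maps to candidate (widget, size, cost)
# triples; lower expected cost is better. Sizes are abstract screen units.
SIZE_BUDGET = 12

CANDIDATES = {
    "volume":  [("slider", 4, 1.0), ("spinner", 2, 2.5)],
    "channel": [("list", 6, 1.2), ("combo_box", 2, 2.0)],
    "power":   [("toggle", 1, 0.5), ("two_buttons", 2, 0.4)],
}

def generate(elements, budget):
    """Return (cost, {element: widget}) minimizing total expected cost."""
    best = [float("inf"), None]  # incumbent cost and assignment

    def search(i, used, cost, assignment):
        if cost >= best[0]:
            return  # prune: this branch cannot beat the incumbent
        if i == len(elements):
            best[0], best[1] = cost, dict(assignment)
            return
        name = elements[i]
        for widget, size, c in CANDIDATES[name]:
            if used + size <= budget:  # respect the device's size constraint
                assignment[name] = widget
                search(i + 1, used + size, cost + c, assignment)
                del assignment[name]

    search(0, 0, 0.0, {})
    return best[0], best[1]

cost, layout = generate(list(CANDIDATES), SIZE_BUDGET)
```

Shrinking the budget forces the search toward smaller (often costlier) widgets, which is the intuition behind adapting one functional specification to many devices.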

Some frequently asked questions

Automatically generated user interfaces are not as good as those created by human designers. What is the value of systems like SUPPLE?

Automatically generated user interfaces are typically perceived as being less aesthetically pleasing than those created by human designers. Indeed, we do believe that hand-crafted user interfaces, which reflect designers' creativity and understanding of applications' semantics, will---for typical users in typical situations---result in more desirable interfaces than those created by automated tools. SUPPLE, therefore, is not intended to replace or compete with human designers. Instead, SUPPLE offers alternative user interfaces for those users whose devices, tasks, preferences, and abilities are not sufficiently addressed by mainstream hand-crafted designs.

Because there exist a myriad of distinct individuals, each with his or her own devices, tasks, preferences, and abilities, the problem of providing each person with the most appropriate interface is simply one of scale: there are not enough human experts to provide each user with an interface reflecting that person's context. The results of our user study demonstrate that people with motor impairments perform better with and strongly prefer interfaces generated by SUPPLE compared to the manually designed default interfaces.

Our approach stands in contrast to the majority of prior work on model-based user interface design, where the automated design tools were used primarily as a means to incrementally improve existing design processes.

Creation of model-based user interfaces requires a large amount of upfront effort in creating the models. This model creation is incompatible with the current design practice.

Indeed, nearly all model-based user interface toolkits require that users begin the UI design process by creating abstract models of the tasks or data (or both). Even if a system provides a graphical environment for designing such models, this is still inconsistent with the current design practice, which stresses the importance of exploring the space of concrete (even if low fidelity) designs from the very beginning of the design process.

This high up-front cost has been identified as an important barrier to adoption of automatic user interface generation technology and it turns user interface design into an abstract programming-like task, which is not our intention.

Instead, we believe that the interfaces for typical users in typical situations should continue to be created by expert designers using current design methods. The abstract interface model should be automatically inferred as the designer creates and explores the concrete designs for the typical user. Indeed, this approach has been attempted in a recent system called Gummy. Gummy observes the designer as he or she creates the default version of a user interface, and it then automatically suggests designs for alternative platforms. We intend to develop such a design tool, which---through only a small amount of additional interaction with the designer---will capture his or her rationale and design preferences, so that they can be reflected in the automatically generated alternatives.

Alternatively, the specification can be obtained by automatically reverse engineering a concrete user interface. The feasibility of this approach has been recently demonstrated by two different groups. While some manual intervention will be required to refine such automatically extracted specifications, this approach may significantly reduce the barrier to automatically generating alternative user interfaces for existing applications.

Are systems like SUPPLE practical?

The most important limitation to the practical deployment of systems like SUPPLE is current software engineering practice, which makes user interface code inseparable from the application logic. HTML-based web applications are a notable exception.

It is therefore our intention to deploy our technology first in the web context, most likely as a JavaScript library that can be included with existing pages and applications to enable a rich set of client-side adaptations and customizations.

Is SUPPLE's approach limited to dialog box-like interfaces?

Our approach of casting user interface design as a discrete combinatorial problem is particularly well suited for dialog box-like user interfaces because there is a well-established vocabulary of interactions used for designing such interfaces. The approach is not limited to such interfaces, however. Most canvas-based interfaces (e.g., word processors, image manipulation programs) also rely on just a few classes of operations: continuous 2D positioning, building a trajectory in 2D, selection of discrete objects, mode switching, and command choice. While each application seems unique, the core interaction vocabulary for performing these operations is well established, making canvas-based operations amenable to our approach.

What about documentation and tech support?

If systems like SUPPLE were to be widely adopted, what would happen to our ability to share expertise via documentation or other technical support mechanisms? For documentation, the answer is easy: the screen shots and the specific instructions regarding the sequence of UI operations can be trivially generated (and illustrated) automatically. For remote technical support, where screen sharing is currently used, a "model sharing" approach could be used instead: the user's and the technician's versions of the software would be linked not at the level of pixels but at the level of the underlying model. The technician and the user might see different surface presentations of the application, but both would be operating on identical functionality. If the technician, for example, sets a combo box to a particular value, the same operation could be visualized on the user's screen regardless of how this functionality is rendered there.
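One way to picture model sharing is as an observer pattern over a shared abstract model: both clients subscribe to the same model, and each renders changes with its own widgets. The sketch below is a hypothetical illustration (class and widget names are our own assumptions, not part of SUPPLE):

```python
class AbstractModel:
    """Shared abstract state; notifies every subscribed view of changes."""
    def __init__(self):
        self.state = {}
        self.observers = []

    def set(self, element, value):
        self.state[element] = value
        for notify in self.observers:
            notify(element, value)

class View:
    """One client's presentation: maps each element to its own widget."""
    def __init__(self, model, widget_for):
        self.widget_for = widget_for  # element -> widget name on this device
        self.log = []                 # rendered updates, in order
        model.observers.append(self.update)

    def update(self, element, value):
        self.log.append(f"{self.widget_for[element]} shows {value!r}")

model = AbstractModel()
technician = View(model, {"encoding": "combo_box"})
user = View(model, {"encoding": "radio_buttons"})  # same element, other widget

# One model-level change propagates to both presentations at once.
model.set("encoding", "UTF-8")
```

Because the link is at the model level, neither side needs to know how the other renders the element, which is exactly what pixel-level screen sharing cannot offer.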

Can I have it?

SUPPLE is still a research prototype. We do hope to create a public version of the system within the next couple of years. You should expect to see it deployed on the web first, giving people personalized access to web-based applications such as email, social networking, and others.

A 2005 version of SUPPLE (the basic interface generation engine) is available for download. We haven't packaged the latest version of the code for distribution yet.

Contributors

The project was developed by Krzysztof Gajos, Daniel Weld (UW CSE), and Jacob Wobbrock (UW iSchool) with contributions from David Christianson, Kiera Henning, Raphael Hoffmann, Jing Jing Long, and Anthony Wu.

Selected Media Mentions

Pimp My Program, a TV spot by Ivanhoe Discoveries and Breakthroughs Inside Science. March 2009.

Special GUI for Your Eyes Only by Anuradha Menon, The Future of Things. November 24, 2008.

An interface for your eyes only by Lee Bruno, The Guardian. August 28, 2008.

Tweaking user interfaces to match abilities, disabilities by Yun Xie, Ars Technica. July 17, 2008.

Every User Deserves a Personalized Interface by Maria José Viñas, The Chronicle of Higher Education. July 16, 2008.

For your eyes only: Custom interfaces make computer clicking faster, easier, press release by Hannah Hickey, UW News Office. July 15, 2008.

Publications

Krzysztof Z. Gajos, Daniel S. Weld, and Jacob O. Wobbrock. Automatically generating personalized user interfaces with Supple. Artificial Intelligence, 174:910-950, 2010.
[Abstract, BibTeX, etc.]

Krzysztof Z. Gajos, Jacob O. Wobbrock, and Daniel S. Weld. Improving the performance of motor-impaired users with automatically-generated, ability-based interfaces. In CHI '08: Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, pages 1257-1266, New York, NY, USA, 2008. ACM.   Best Paper Award  
[Abstract, BibTeX, Video, etc.]

Krzysztof Z. Gajos. Automatically Generating Personalized User Interfaces. PhD thesis, University of Washington, Seattle, WA, USA, 2008.
[Abstract, BibTeX, etc.]

Krzysztof Z. Gajos, Daniel S. Weld, and Jacob O. Wobbrock. Decision-Theoretic User Interface Generation. In AAAI'08, pages 1532-1536. AAAI Press, 2008.
[Abstract, BibTeX, etc.]

Krzysztof Z. Gajos, Jacob O. Wobbrock, and Daniel S. Weld. Automatically generating user interfaces adapted to users' motor and vision capabilities. In UIST '07: Proceedings of the 20th annual ACM symposium on User interface software and technology, pages 231-240, New York, NY, USA, 2007. ACM Press.
[Abstract, BibTeX, Slides, Video, etc.]

Krzysztof Z. Gajos, Jing J. Long, and Daniel S. Weld. Automatically generating custom user interfaces for users with physical disabilities. In Assets '06: Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility, pages 243-244, New York, NY, USA, 2006. ACM Press.

Krzysztof Gajos, David Christianson, Raphael Hoffmann, Tal Shaked, Kiera Henning, Jing J. Long, and Daniel S. Weld. Fast and Robust Interface Generation for Ubiquitous Applications. In UbiComp 2005: Ubiquitous Computing, volume 3660 of Lecture Notes in Computer Science, pages 37-55, Berlin / Heidelberg, 2005. Springer.
[Abstract, BibTeX, Slides, etc.]

Krzysztof Gajos and Daniel S. Weld. Preference elicitation for interface optimization. In UIST '05: Proceedings of the 18th annual ACM symposium on User interface software and technology, pages 173-182, New York, NY, USA, 2005. ACM Press.
[Abstract, BibTeX, Slides, etc.]

Krzysztof Gajos, Anthony Wu, and Daniel S. Weld. Cross-Device Consistency in Automatically Generated User Interfaces. In Proceedings of Workshop on Multi-User and Ubiquitous User Interfaces (MU3I'05), 2005.

Krzysztof Gajos and Daniel S. Weld. SUPPLE: automatically generating user interfaces. In IUI '04: Proceedings of the 9th international conference on Intelligent user interfaces, pages 93-100, New York, NY, USA, 2004. ACM Press.
[Abstract, BibTeX, etc.]

Krzysztof Gajos, Raphael Hoffmann, and Daniel S. Weld. Improving User Interface Personalization. In Supplementary Proceedings of UIST'04, Santa Fe, NM, 2004.

Daniel S. Weld, Corin Anderson, Pedro Domingos, Oren Etzioni, Krzysztof Gajos, Tessa Lau, and Steve Wolfman. Automatically Personalizing User Interfaces. In IJCAI03, Acapulco, Mexico, August 2003. Invited paper.
[Abstract, BibTeX, etc.]

This page was last modified on Sunday, 19-Jan-2014 11:05:06 EST.