User Interface Types

 

Direct manipulation interface is the name of a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond at least loosely to the physical world.

Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor.

Web-based user interfaces or web user interfaces (WUI) accept input and provide output by generating web pages, which are transmitted via the Internet and viewed by the user in a web browser.

Touchscreens are displays that accept input by the touch of fingers or a stylus. They are used in a growing number of mobile devices and in many point-of-sale terminals, industrial processes and machines, self-service kiosks, etc.

Command-line interfaces, where the user provides input by typing a command string with the computer keyboard and the system provides output by printing text on the computer monitor.
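The typed-command/printed-output cycle described above can be sketched as a small read-evaluate-print loop. The command verbs here (`echo`, `date`, `quit`) are illustrative assumptions, not drawn from the text:

```python
# Minimal sketch of a command-line interface: the user types a command
# string, the program dispatches on the command verb, and the result is
# printed as text. Command names are made up for illustration.
import datetime

def handle_command(line: str) -> str:
    """Parse one command string and return the text to print."""
    verb, _, args = line.strip().partition(" ")
    if verb == "echo":
        return args
    if verb == "date":
        return datetime.date.today().isoformat()
    return f"unknown command: {verb}"

def repl():
    # Read-evaluate-print loop: prompt, read a line, print the result,
    # until the user types "quit".
    while (line := input("> ")) != "quit":
        print(handle_command(line))
```

Separating parsing (`handle_command`) from the loop keeps the command logic testable without a terminal attached.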

Touch user interfaces are graphical user interfaces that use a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods, and are used in computerized simulators, etc.

Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings to give, and the level of detail of the messages presented.

Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance of processing and receives the output when all the processing is done. The computer does not prompt for further input after processing has started.
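The batch model above can be sketched as a function that takes a fully specified job list up front and returns all results only when the whole run finishes; the job format (`op`/`values` dictionaries) is an illustrative assumption:

```python
# Sketch of a batch-style interface: every detail of each job is supplied
# in advance, the program runs with no prompting, and output becomes
# available only after the entire batch completes.

def run_batch(jobs):
    """Process every job without further user interaction; return all results."""
    results = []
    for job in jobs:
        # Each job carries its full specification; nothing is asked of the user.
        op, values = job["op"], job["values"]
        if op == "sum":
            results.append(sum(values))
        elif op == "max":
            results.append(max(values))
    return results  # delivered only once all processing is done

print(run_batch([{"op": "sum", "values": [1, 2, 3]},
                 {"op": "max", "values": [4, 7, 5]}]))
```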

Conversational Interface Agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft’s Clippy the paperclip), and present interactions in a conversational form.

Crossing-based interfaces are graphical user interfaces in which the primary task consists of crossing boundaries rather than pointing.

Gesture interfaces are graphical user interfaces which accept input in the form of hand gestures, or of mouse gestures sketched with a computer mouse or a stylus.

Intelligent user interfaces are human-machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).

Motion tracking interfaces monitor the user's body motions and translate them into commands; such interfaces are currently being developed by Apple.[1]

Multi-screen interfaces employ multiple displays to provide more flexible interaction. This is often used in computer games, both in commercial arcades and, more recently, in the handheld market.

Non-command user interfaces, which observe the user to infer his or her needs and intentions, without requiring explicit commands.

Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.

Reflexive user interfaces, where users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically this is only possible with very rich graphical user interfaces.

Tangible user interfaces, which place a greater emphasis on touch and on the physical environment or its elements.

Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.

Text-based user interfaces are user interfaces that output text. TUIs can contain either a command-line interface or a text-based WIMP environment.

Voice user interfaces, which accept input and provide output by generating voice prompts. The user provides input by pressing keys or buttons, or by responding verbally to the interface.

Natural-language interfaces, used for search engines and on web pages; the user types a question and waits for a response.

Zero-input interfaces get their input from a set of sensors instead of querying the user with input dialogs.
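The sensors-instead-of-dialogs idea can be sketched as a function that maps sensor readings straight to an action, with no prompt ever shown. The sensor names, thresholds, and action strings are made up for illustration:

```python
# Sketch of a zero-input interface: rather than opening an input dialog,
# the system reads its sensors and decides what to do on its own.
# All sensor names and thresholds here are illustrative assumptions.

def decide_action(readings):
    """Map a dict of sensor readings to an action string, with no user prompt."""
    if readings.get("motion") and readings.get("light_lux", 0) < 10:
        return "turn-on-lights"   # someone is present and the room is dark
    if not readings.get("motion"):
        return "idle"             # nobody around; do nothing
    return "no-op"                # present, but the room is already lit

print(decide_action({"motion": True, "light_lux": 3}))
```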

Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.
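At the core of a zooming interface is a transform from the world coordinates of the information objects to screen coordinates at the current scale; this minimal sketch (function name and parameters are my own) shows how raising the zoom factor reveals the same view center at greater detail:

```python
# Sketch of the view transform in a zooming user interface: objects live
# in world coordinates, and the view maps them to the screen at a given
# zoom level, relative to the point (cx, cy) the view is centered on.

def world_to_screen(wx, wy, zoom, cx, cy):
    """Map a world point to screen coordinates for a view at scale `zoom`."""
    return ((wx - cx) * zoom, (wy - cy) * zoom)
```

Doubling `zoom` doubles the on-screen distance between objects, which is why zooming in shows fewer objects in more detail.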

 

Reference: Wikipedia
