Apple 2020
Published by Apple Worldwide Developers
Conference, 1990 May 9.
SUMMARY: This is a vision video produced by Apple. The
list below indicates many of the novel computer science
innovations that must be realized for this vision to become
real. Some are currently available, while others are many
years in the future.
Virtual laboratory:
- simulations of physical processes (such as chemical
experiments)
- hand gestures tied to moving screen artifacts e.g., test
tubes
- key events in the laboratory trigger outside action e.g.,
notify the instructor
Groupware:
- on-line video phone, switching, and access control
- remote sketching e.g., writing "Chapter 1" by
hand gestures on the other person's screen
Interfaces for the disabled:
- glasses for the deaf that display text of the other
  person's speech
- system can understand crude gestures and can recognize
  difficult speech (for the handicapped boy)
- cooking instructions presented by the agent are paced for
  the mentally disabled (for the girl)
Hardware:
- no keyboard or mouse!
- screen is light-weight, high-resolution, flat, color,
very fast
- video camera: hidden (in-screen?)
- microphone: hidden (in-screen?)
- gesture recognizer: hidden (in-screen?)
- ordinary glasses as display (deaf person), with display
on lower half and hidden speech input device
Multi-modal input:
- speech
- gestures for control of the interface
- pointing to screen objects
Speech/language recognition:
- continuous voice
- disambiguates speech from:
  - noise (such as the passing train)
  - outside communication (talking to kids)
  - asides (talking to self)
- language translation (French to English)
- untrained speech recognition (restaurant scene: can
  understand the voices of the woman's client and the waiter)
- speech to text (for dictation, for deaf woman's glasses)
- context understanding (errors corrected in context)
- real English text produced (true natural language
understanding)
Voice output:
- completely natural, including inflection
- tied to items on the display (highlighting, motion)
Hand gesture recognition:
- controls screen objects
- deixis: speech understood as references to screen
elements e.g., "Show me this one"
Sound output:
- everyday sounds for feedback (e.g., experiment blowing
up)
- abstract sounds for feedback (e.g., tones indicate
events)
Remote control:
- computer senses and controls kitchen appliances
- e.g., scale with cup, microwave oven
Multimedia:
- on-screen stills, video & audio
- novel scrolling (circular Rolodex)
- film techniques for fades, panning, etc.
- film editing & production
Database queries:
- fuzzy queries e.g., "looking for disabled, in last
15-20 years..."
- approximate solutions and estimate of number of results
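The two bullets above describe a query engine that returns approximate matches plus an estimated result count rather than failing on an imprecise request. A toy sketch of that behavior, where the record fields and the year-loosening rule are invented for illustration:

```python
# toy record set standing in for the video's archive
RECORDS = [
    {"topic": "disabled", "year": 2008},
    {"topic": "disability education", "year": 2012},
    {"topic": "cooking", "year": 2015},
    {"topic": "disabled access", "year": 1995},
]

def fuzzy_query(records, topic_words, year_lo, year_hi):
    """Return exact matches and near misses, loosening the year
    range by 5 years instead of failing outright."""
    exact, near = [], []
    for r in records:
        if not any(w in r["topic"] for w in topic_words):
            continue
        if year_lo <= r["year"] <= year_hi:
            exact.append(r)
        elif year_lo - 5 <= r["year"] <= year_hi + 5:
            near.append(r)   # just outside the requested range
    return exact, near

exact, near = fuzzy_query(RECORDS, ["disabled", "disability"], 2005, 2020)
print(f"about {len(exact)} results ({len(near)} near misses)")
```

The estimate-before-results idea in the video goes further, but the essential shape is the same: report a count and approximate hits rather than an empty set.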
Intelligent agents:
- within an application: e.g., cook
- external to applications: e.g., main persona of system
- controls display, contents, highlighting
- context sensitive help/coaching e.g., "Cindy, why
did you stop?"
Last updated April 1997, by Saul Greenberg