The smartphone user interface is simpler than that of the PC. But in one respect it is worse than the PC user interface.
The PC user interface is complex. For example, the mouse has a right-click button that many users (like my wife) never use. The keyboard has many function keys that many users don't care about. The PC screen shows many objects in toolbars and menus whose meanings many users don't know.
On the other hand, the smartphone (or the web) offers a "see and select" user interface. Users only have to see a few objects (icons or text), select one of them, and repeat. Moreover, it is direct manipulation by finger. This simplicity of the user interface, besides mobility, must be the major factor behind the broader adoption.
By the way, the PC user interface relies heavily on hands and fingers to give information to the computer, and on eyes to get information from it.
The smartphone has a smaller screen, so users must focus more on it. The smartphone is mobile, but users must hold it in one hand and operate it with the other. That is, a smartphone occupies the user's eyes and both hands more than a PC does. In this sense it makes the PC user interface worse.
Voice interaction is going to free the eyes and hands, but its use is limited to closed environments.
The IT industry should offer better user interaction.
While you are driving a car, you do the following in parallel:
- watch the road ahead and get feedback on your actions
- hear outside sounds
- manipulate the steering wheel by hand and the brake/accelerator by foot
- talk with the person in the passenger seat
Your visual system, auditory system, and motor systems all work in parallel (even two parts of the motor system work in parallel), and you achieve your goal of getting somewhere.
How wonderful the power of the human being is!
On the contrary, current human-computer interaction is very limited and naive. It does not take advantage of the human power described above: you give information only by hand, and get information only by 2D sight, serially. Voice interaction is going to add another channel between human and machine, but the channel is isolated from the others.
According to news about this year's CES, its major trend is AI plus voice interacting with life-backbone systems such as cars or home electronic devices. (Smartphone, mobility, and wearable are already old-fashioned vocabulary now.) What does this human-machine interaction look like?
Voice can support texting and discrete commands, but it cannot suffice for humans to interact with machines in the coming ages:
- Voice has a privacy problem. Its use is limited to private/closed spaces.
- Voice lacks the capability of pointing and analog commanding (which the mouse had).
- Voice interaction should be supplemented by some other means of pointing and of commanding analog quantities.
- Voice interaction should someday be superseded by some other texting means that supports privacy.
The IT industry should innovate further in those interaction areas, too.
Blincam is a cool device. There have been devices that used blinking as a camera-control interface, but Blincam may be the first to combine it with a wearable device.
Blinking is a good medium for commanding and controlling devices, because:
- It uses voluntary muscles and is perhaps as fatigue-free as a finger.
- People blink naturally, so it does not require them to learn a new operation.
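One catch with a blinking UI is that people also blink involuntarily, so the device has to tell a deliberate command blink apart from a reflexive one. A common and simple way (this is my own sketch, not Blincam's actual method) is to threshold an eye-openness signal and require the eye to stay closed longer than a reflexive blink would:

```python
# Hypothetical sketch: distinguishing a deliberate "command" blink from an
# involuntary one by its duration, assuming the device samples some
# eye-openness signal (e.g. from a camera or proximity sensor) at a fixed rate.
# The function name, thresholds, and sample rate are all illustrative.

def detect_command_blinks(openness, closed_threshold=0.2, min_closed_frames=8):
    """Return the frame indices where a deliberate blink ends.

    openness: sequence of eye-openness values in [0, 1].
    A blink counts as a command only if the eye stays below
    `closed_threshold` for at least `min_closed_frames` consecutive
    frames, filtering out short involuntary blinks.
    """
    commands = []
    closed_run = 0
    for i, value in enumerate(openness):
        if value < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                commands.append(i)  # a long (deliberate) blink just ended here
            closed_run = 0
    return commands

# Simulated signal: a short involuntary blink (3 frames closed)
# followed by a long deliberate blink (10 frames closed).
signal = [1.0] * 10 + [0.1] * 3 + [1.0] * 10 + [0.1] * 10 + [1.0] * 10
print(detect_command_blinks(signal))  # -> [33]: only the long blink counts
```

The duration threshold is the whole trick: it filters out reflexive blinks while keeping the interaction learnable in seconds, which fits the point above that blinking needs no training.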
Blincam focuses on a single scenario of capturing a moment: capture what you see. The value proposition is easy to understand. It does not interfere with the user's sight as a traditional camera does. This much is the same as Google Glass.
On the other hand, it differs from Google Glass as follows:
- It attaches to the glasses you already use; it is not a brand-new pair of glasses. The adoption bar is lower than for Google Glass.
- It uses a blinking UI:
  - It is truly hands-free. Google Glass used touch to control the device, which is not really hands-free. (People like me thought Google would combine gaze tracking with Google Glass, but they didn't…)
  - Blinking preserves the user's privacy better than Google Glass did. Google Glass used touch/speech to control the device, which looks strange in a social context. Blinking is less noticeable to others than speech or touch.
But, as you may recognize, Blincam does not solve the privacy problem of the people around the user. I am not sure of the real reasons Google Glass failed, but the privacy of others may have been one of the key problems. Blincam must solve that problem to get broad adoption.
I migrated my FreeMind files to MindMup.
I had used the open-source FreeMind software to organize my ideas and maintain my to-do list, with Dropbox as storage for the FreeMind files so I could use them on multiple devices. However, I was uncomfortable with FreeMind in some ways:
- It's too complex, which hampers my concentration.
- Its old-fashioned user interface (small menus/toolbars/icons) is hard to operate on touch devices.
- I always had to pay attention to synchronization and to which version of a file was the latest.
- It requires installing a client program on every device, and there is no good FreeMind client for iPhone.
MindMup is browser-based software that uses cloud storage. After using MindMup, I found:
- I can use MindMup on any device immediately.
- I see the latest version on any device immediately after I make a change on one device.
- I can edit my maps on iPhone.
- The icons are large and easy to operate.
- Its user interface is simple, which helps me concentrate on my topics.
These advantages of modern cloud software are so evident that I didn't think them worth mentioning, but I rediscovered them these days. My everyday use of the application is now so simple.
Using a smartphone is eye-busy and hand-busy. You can't use it while doing other things, and using it on the road is dangerous.
That is because the smartphone is a mere miniature of desktop computer interaction: humans use hands/fingers to give input and eyes to get information. It doesn't change human-machine interaction fundamentally.
The Apple Watch is yet another desktop. Using it is eye-busy and hand-busy.
Head-mounted displays and wearable glasses such as Google Glass and Microsoft HoloLens look to me like yet another miniature of desktop interaction. Using them is eye-busy, though the hands may be free. The eye is already busy catching information from the rich world and guiding our reactions to it. Why do we overload the eye even more? Virtual reality belongs to this eye-busy stuff.
I think it is worth pursuing an approach that takes information from the eye (via a gaze tracker) without any interference with sight, and takes advantage of the eye to react to the world, again without interfering with sight.
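One well-known interaction of this kind is "dwell selection": the gaze tracker watches where the eye rests, and an item is selected when the gaze stays on it long enough, so the eye both reads and commands without any display in front of it. A minimal sketch, with made-up target boxes and a made-up frame count:

```python
# Hypothetical sketch of gaze-based dwell selection. `gaze_points` stands in
# for samples from a gaze tracker; `targets` maps a target name to a
# screen-space bounding box (x0, y0, x1, y1). All names and values are
# illustrative, not any real tracker's API.

def dwell_select(gaze_points, targets, dwell_frames=15):
    """Return the first target the gaze rests inside for `dwell_frames`
    consecutive samples, or None if no target is dwelled on."""
    current, run = None, 0
    for x, y in gaze_points:
        hit = None
        for name, (x0, y0, x1, y1) in targets.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        if hit is not None and hit == current:
            run += 1
            if run >= dwell_frames:
                return hit  # dwelled long enough: treat it as a selection
        else:
            current, run = hit, (1 if hit else 0)
    return None

targets = {"photo": (0, 0, 100, 100), "mail": (200, 0, 300, 100)}
gaze = [(50, 50)] * 20           # the eye rests on the "photo" box
print(dwell_select(gaze, targets))  # -> photo
```

The dwell threshold plays the same role as the blink-duration threshold: it separates deliberate commands from the eye's ordinary wandering.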
Touch is natural, direct manipulation. It looked as if it would win over the mouse, but it didn't. Why?
I think the mouse utilizes hand/finger power better than touch does:
1) Efficacy: the mouse requires only short, subtle finger/hand movements that do not lead to fatigue, while touch can be tiring.
2) The mouse supports two modes of movement: (a) rough cursor positioning by hand movement and (b) micro-positioning by finger movement. The hand and fingers can do both, but touch only utilizes rough positioning. Touch is a poorer language than the mouse.
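Pointer acceleration is how operating systems exploit these two modes: at slow device speeds the cursor gain is low (fine finger positioning), and at fast speeds it is high (rough hand sweeps cover the screen). The curve below is a toy illustration of the idea, not any real OS's transfer function:

```python
# A minimal sketch of a pointer "acceleration" transfer function: low gain at
# slow speeds (micro-positioning by finger) ramping to high gain at fast
# speeds (rough positioning by the whole hand). All parameter values are
# illustrative.

def cursor_gain(speed_mm_per_s, low_gain=1.0, high_gain=4.0,
                knee_mm_per_s=50.0):
    """Map device speed to cursor gain (screen pixels per mm of mouse travel)."""
    if speed_mm_per_s <= knee_mm_per_s:
        return low_gain
    # Ramp linearly toward high_gain above the knee, capped at high_gain.
    extra = min((speed_mm_per_s - knee_mm_per_s) / knee_mm_per_s, 1.0)
    return low_gain + extra * (high_gain - low_gain)

# Slow finger movement: 1 mm of travel moves the cursor only 1 px (precise).
print(cursor_gain(10))   # -> 1.0
# Fast hand sweep: the same 1 mm of travel moves the cursor 4 px (coverage).
print(cursor_gain(200))  # -> 4.0
```

A touch screen cannot apply such a curve, because the finger must land directly on the target: position maps one-to-one to the screen, so there is no second, fine-grained mode.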
I think the IT industry should explore hand/finger power more.