AR is not natural

I recently looked at how NUI (natural user interface) is described.

One of the descriptions lists AR among NUIs. I don't think so.

The real world we see is already rich and dangerous enough. AR, for instance with a head-mounted display, adds extra information on top of what we see, which puts further stress on human sight and brain. AR is practical for people who are trained to be exposed to and manage such mental tension, such as soldiers. For most people, however, AR will be an extra stress until human beings get used to it as a part of life, and that may take more than a few generations.

Smartphone UI occupies eyes/hands more than PC

The smartphone user interface is simpler than that of the PC. But in one aspect it is even worse than the PC user interface.

The PC user interface is complex. For example, the mouse has a right-click button that many users (like my wife) never use. The keyboard has many function keys that many users ignore. The PC screen shows many objects in toolbars and menus whose meanings many users don't know.

On the other hand, the smartphone (or the web) offers a "see and select" user interface. Users have only to look at several objects (icons or texts), select one of them, and repeat. Moreover, it is direct manipulation by finger. This simplicity of the user interface, besides mobility, must be a major factor in the broader adoption; the whole loop fits in a few lines, as sketched below.
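To make the loop concrete, here is a minimal sketch in TypeScript. The Item type and the renderScreen/onTap functions are hypothetical stand-ins for a platform's drawing and touch APIs, not any real framework:

```typescript
// "See and select": show a few objects, let the user pick one, repeat.
// Item, renderScreen, and onTap are illustrative stand-ins, not a real API.

type Item = { label: string; children: Item[] };

// Stub: draw a handful of large objects on the screen.
function renderScreen(items: Item[]): void {
  items.forEach((i) => console.log(`[ ${i.label} ]`));
}

// Stub: resolve with the item the user taps (here, pretend it is the first).
function onTap(items: Item[]): Promise<Item> {
  return Promise.resolve(items[0]);
}

// The entire interface is one step repeated: see, select, see, select.
async function seeAndSelect(screen: Item[]): Promise<void> {
  while (screen.length > 0) {
    renderScreen(screen);               // see several objects
    const chosen = await onTap(screen); // select one by finger
    screen = chosen.children;           // repeat, one level deeper
  }
}
```

Contrast this single repeated step with the PC's parallel vocabulary of right-clicks, function keys, and toolbars.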

By the way, the PC user interface relies heavily on hands and fingers to give information to the computer, and on eyes to get information from it.

The smartphone has a smaller screen, so users must focus on it more closely. The smartphone is mobile, but users must hold it in one hand and operate it with the other. That is, the smartphone occupies the user's eyes and both hands more than the PC does. In that sense, it makes the PC user interface worse.

Voice interaction will free the eyes and hands, but its use is limited to closed environments.

The IT industry should offer better user interaction.

 

Car driving as a human-machine interaction model

While you are driving a car, you do the following in parallel:

  • watch the road ahead, getting feedback on your actions
  • hear outside sounds
  • operate the steering wheel by hand and the brake/accelerator by foot
  • talk with the person in the passenger seat

Your visual system, your auditory system, and your body's motor systems are all working in parallel; even two parts of the motor system (hands and feet) work in parallel, and you achieve your goal of getting somewhere.

How wonderful the power of the human being is!

On the contrary, current human-computer interaction is very limited and naive. It does not take advantage of the human powers above: you give information only by hand, and get information only through 2D sight, serially. Voice interaction is going to add another channel between human and machine, but that channel is isolated from the others.

 

User Interactions in the age of AI+Voice?

According to news about this year's CES, the major trend is AI+voice interacting with the backbone systems of life, such as cars or home electronic devices. (Smartphone, mobility, and wearable are already old-fashioned vocabulary.) What does that human-machine interaction look like?

Voice can support text entry and discrete commands. But it cannot suffice for humans to interact with machines in the coming age:

  • Voice has a privacy problem. Its use is limited to private/closed spaces.
  • Voice lacks the capability of pointing and analog commanding (which the mouse had).

I believe:

  • Voice interaction should be supplemented by some other means of pointing and analog-quantity commanding (see the sketch after this list).
  • Voice interaction should someday be superseded by some other text-entry means that preserves privacy.
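As a hedged illustration of the first point, the sketch below pairs a spoken verb with the pointer sample nearest in time, so a command like "put it here" gets the analog coordinates that voice alone cannot express. The types and the time window are my own assumptions, not the design of any shipping system:

```typescript
// Fusing a voice command with a pointing gesture.
// All names and the fusion window are illustrative assumptions.

type Point = { x: number; y: number };
type VoiceCommand = { verb: string; timestampMs: number };
type PointerSample = { position: Point; timestampMs: number };

const FUSION_WINDOW_MS = 1500; // how far apart in time the two inputs may be

// Pair the spoken verb with the pointer sample closest in time,
// giving the command the analog target that voice alone lacks.
function fuse(
  voice: VoiceCommand,
  pointerTrail: PointerSample[]
): { verb: string; target: Point } | null {
  let best: PointerSample | null = null;
  let bestDt = Infinity;
  for (const p of pointerTrail) {
    const dt = Math.abs(p.timestampMs - voice.timestampMs);
    if (dt <= FUSION_WINDOW_MS && dt < bestDt) {
      best = p;
      bestDt = dt;
    }
  }
  return best ? { verb: voice.verb, target: best.position } : null;
}
```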

The IT industry should innovate further in those interaction areas too.

Mouse mismatch with finger dexterity

Fingers move more finely than wrists, and wrists more finely than whole hands. The order of subtlety in human muscle control is fingers > wrists > hands.

All the great human tools in history, scissors, chopsticks, pens, etc., rely on and take advantage of the synthesis of fingers, wrists, and hands.

The traditional computer gadget, the mouse, failed to take advantage of these natural human muscle powers, because:

  1. The mouse uses wrists/hands for pointing and a finger click for commanding, but pointing requires more subtlety than commanding.
  2. Hands for pointing and fingers for commanding are used separately; their powers are not combined in synthesis.

The Microsoft Surface Dial looks like an attempt to take advantage of the fingers for pointing too. If that is true, it would be a great step forward from the mouse.

Blinking UI in Blincam

Blincam is a cool device. There have been devices that use blinking as a camera-control interface, but Blincam may be the first to combine it with a wearable device.

Blinking is a good medium for commanding and controlling devices, because:

  1. It uses voluntary muscles and is perhaps as fatigue-free as a finger.
  2. People blink naturally, so the operation requires no learning. (One practical question, sketched below, is how to distinguish a command blink from a natural one.)
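A practical question is how a device can tell a deliberate command blink from the natural blinking people do anyway. Below is a minimal sketch of one plausible approach, classifying blinks by how long the eye stays closed. The threshold is my own illustrative assumption; Blincam's actual algorithm has not been published:

```typescript
// Separating command blinks from natural blinks by duration.
// The threshold is an illustrative assumption, not Blincam's algorithm.

type Blink = { closedAtMs: number; openedAtMs: number };

// A typical involuntary blink lasts roughly 100-150 ms, so a clearly
// longer, deliberately held closure can be treated as a command.
const COMMAND_BLINK_MIN_MS = 400;

function isCommandBlink(blink: Blink): boolean {
  const durationMs = blink.openedAtMs - blink.closedAtMs;
  return durationMs >= COMMAND_BLINK_MIN_MS;
}

// A quick 120 ms blink is ignored; a held 500 ms blink fires the shutter.
console.log(isCommandBlink({ closedAtMs: 0, openedAtMs: 120 })); // false
console.log(isCommandBlink({ closedAtMs: 0, openedAtMs: 500 })); // true
```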

Blincam focuses on a single scenario of capturing a moment: capture what you see. The value proposition is easy to understand. It does not interfere with the user's sight the way a traditional camera does. This is the same as Google Glass.

On the other hand, it differs from Google Glass as follows:

  • It is attachable to the glasses you already use, not a brand-new pair of glasses. The adoption bar is lower than Google Glass's.
  • Use of the blinking UI:
    • It is truly hands-free. Google Glass used touch to control the device, which is not truly hands-free. (People like me thought that Google would combine gaze tracking with Google Glass, but they didn't…)
    • Blinking preserves the user's privacy better than Google Glass. Google Glass used touch/speech to control the device, which looks strange in a social context. Blinking is less noticeable to others than speech or touch.

But, as you may recognize, Blincam does not solve the privacy problem for the people around the user. I am not sure of the real reasons why Google Glass failed, but the privacy of others may have been one of the key problems. Blincam must solve that problem to gain broad adoption.

The simple experience with MindMup

I migrated my FreeMind files to MindMup.

I had used the open-source FreeMind software to organize my ideas and maintain my to-do list, with Dropbox as storage for the FreeMind files so I could use them on multiple devices. However, I was uncomfortable with FreeMind in some ways:

  • It is too complex, which hampers my concentration.
  • Its old-fashioned user interface (small menus/toolbars/icons) is hard to operate on touch devices.
  • I always had to pay attention to synchronization and to which version of a file was the latest.
  • It requires installing client programs on all devices, but there is no good FreeMind client on iPhone.

MindMup is browser-based software that uses cloud storage. After using MindMup, I found:

  • I can use MindMup on any device immediately.
  • I see the latest version on every device immediately after I make a change on one device.
  • I can edit my maps on iPhone.
  • The icons are large and easy to operate.
  • Its user interface is simple, which helps me concentrate on my topics.

These advantages of modern cloud software are so evident that I didn't think them worth mentioning. But I rediscovered them these days. My experience of this everyday application is now so simple.