Blincam is a cool device. There have been devices that use blinking as a camera control interface, but Blincam may be the first to combine it with a wearable device.
Blinking is a good medium for commanding and controlling devices, because:
- It is driven by voluntary muscles and is perhaps as fatigue-free as a finger.
- People blink naturally, so no learning is required to operate the device.
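To make the blinking-UI idea concrete, here is a minimal sketch of one common way software detects blinks: the eye-aspect-ratio (EAR) heuristic over eye landmarks. The landmark coordinates and the 0.2 threshold below are illustrative assumptions, not values from Blincam (whose internals are not public).

```python
import math

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks ordered p1..p6, with p1/p4 the corners."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Ratio of the two vertical eye openings to the horizontal eye width.
    return (dist(pts[1], pts[5]) + dist(pts[2], pts[4])) / (2.0 * dist(pts[0], pts[3]))

def is_blink(pts, threshold=0.2):
    # A closed eye collapses the vertical distances, so the EAR drops sharply.
    return eye_aspect_ratio(pts) < threshold

# Made-up landmarks: an open eye (tall) and a nearly closed eye (flat).
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
print(is_blink(open_eye), is_blink(closed_eye))  # → False True
```

In practice a device would also require the low-EAR state to persist for a few frames, to distinguish a deliberate blink from an involuntary one.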
Blincam focuses on a single scenario, capturing a moment – capture what you see. The value proposition is easy to understand. Unlike a traditional camera, it does not interfere with the user's sight. In this respect it is the same as Google Glass.
On the other hand, it differs from Google Glass as follows:
- It attaches to the glasses you already use; it is not a brand-new pair of glasses. Its adoption bar is lower than Google Glass's.
- It uses a blinking UI.
- It is truly hands-free. Google Glass used touch to control the device, which is not truly hands-free. (People like me thought that Google would combine gaze tracking with Google Glass, but they didn't…)
- Blinking preserves the user's privacy better than Google Glass. Google Glass used touch/speech to control the device, which looks strange in a social context. Blinking is less noticeable to others than speech or touch.
But, as you may have noticed, Blincam does not solve the privacy problem for the people around the user. I am not sure of the real reasons why Google Glass failed, but the privacy of others may have been one of the key problems. Blincam must solve that problem to gain broad adoption.
I migrated my FreeMind files to MindMup.
I have used the open-source FreeMind software to organize my ideas and maintain my to-do list, with Dropbox as storage for the FreeMind files so I could use them on multiple devices. However, I have been uncomfortable with FreeMind in some ways:
- It is too complex. It hampers my concentration.
- Its old-fashioned user interface (small menus/toolbars/icons) is hard to operate on touch devices.
- I must always pay attention to synchronization and to which version of a file is the latest.
- It requires installing client programs on all devices, and there is no good FreeMind client for the iPhone.
MindMup is browser-based software that uses cloud storage. After trying MindMup, I found:
- I can use MindMup on any device immediately.
- I see the latest version on every device immediately after I make a change on one device.
- I can edit my maps on the iPhone.
- Its icons are large and easy to operate.
- Its user interface is simple, which helps me concentrate on my topics.
These advantages of modern cloud software are so evident that I didn't think them worth mentioning. But I rediscovered them recently: my experience with this everyday application is now very simple.
I cleaned up all the icons on my PC desktop.
The PC desktop metaphor has played an important role in people's productivity for two decades.
My desktop has been a mixture of application shortcuts and working documents.
After I upgraded to Windows 10, its start menu duplicated the role of application shortcuts, so I moved those shortcuts from the desktop to the start menu. The Windows 8 tiled start screen could play the same role.
On the other hand, I use a Dropbox folder as my working-document repository, because I access those documents from multiple devices. Dropbox synchronization is much faster and more stable than OneDrive or Google Drive. I don't need shortcuts for those documents.
So I no longer need the desktop window at all, even on a desktop PC that is not a tablet.
Using a smartphone is eye-busy and hand-busy. You can't use it while doing other things, and using it on the road is dangerous.
That is because the smartphone is a mere miniature of desktop computer interaction: humans use hands and fingers to input, and get information through the eyes. It does not change human-machine interaction fundamentally.
Apple's Watch is another desktop, too. Using it is eye-busy and hand-busy.
Head-mounted displays and wearable glasses such as Google Glass and Microsoft HoloLens look to me like yet another miniature of desktop interaction. Using them is eye-busy, though the hands may be free. The eyes are already busy catching information from the rich real world and guiding our reactions to it. Why do we overload the eyes even more? Virtual reality belongs to this eye-busy stuff.
I think it is worth pursuing an approach that takes information from the eye (a gaze tracker) without any interference with sight, and that takes advantage of the eye to react to the world, again without any interference with sight.
Touch is natural, direct manipulation. It looked as if it would win over the mouse, but it didn't. Why?
I think the mouse utilizes the power of the hand and fingers better than touch does:
1) Efficiency – the short travel distance of the fingers/hand and the subtle movements of the mouse do not lead to fatigue, while touch can.
2) The mouse has two modes of movement: (a) rough cursor positioning by hand movement and (b) micro-movement by the fingers. Hand and fingers can do both. Touch, on the other hand, utilizes only rough positioning. Touch is a poorer language than the mouse.
I think the IT industry should explore the power of the hand and fingers more.
I tried a condenser microphone with an accelerometer. My objective is to build a command-and-control method that captures fingertip gestures.
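As a rough illustration of the kind of processing such an experiment needs, here is a minimal sketch of fingertip-tap detection from accelerometer samples: flag a tap when the acceleration magnitude spikes above a threshold, then suppress detections briefly so the ringing after a tap is not counted twice. The sample data, threshold, and refractory length are made-up values for illustration, not parameters from my experiment.

```python
import math

def detect_taps(samples, threshold=1.5, refractory=3):
    """samples: list of (ax, ay, az) readings in g; returns indices of taps."""
    taps, cooldown = [], 0
    for i, (ax, ay, az) in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1  # still inside the refractory window, skip
            continue
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold:
            taps.append(i)
            cooldown = refractory  # suppress the ringing right after a tap
    return taps

# Mostly quiet signal (gravity ~1 g on z) with spikes at indices 2 and 8.
quiet = (0.0, 0.0, 1.0)
samples = [quiet, quiet, (0.5, 0.2, 2.5),
           quiet, quiet, quiet, quiet, quiet,
           (1.8, 0.1, 1.2), quiet]
print(detect_taps(samples))  # → [2, 8]
```

A real pipeline would of course filter out gravity and slow arm motion first (e.g. with a high-pass filter), and could fuse this with the microphone signal to reject false positives.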
This report is in Japanese.
Here is a short report on the use of a gaze tracker and a speech recognizer (May 2014).