Most who know me know my vision for the cellphone of the future: it will replace everything, acting simply as a conduit device that connects to anything that is, in turn, connected to the Internet.
For example, you will simply place it on a table in your living room, and it will pull video content from the web, projecting it onto the wall (likely holographically, eventually), negating the need for you to own a TV. I know, that vision is constrained by bandwidth, battery life, and other factors; we will not see that version of mobile for 5-10 years. But do you want to see something that's just as fun, and is likely right around the corner?
Sekai Camera ("World Camera" in Japanese) is an iPhone-exclusive social tagging service developed by Tokyo-based mobile application provider Tonchidot, which recently demoed the product at TechCrunch TC50.
This video is a MUST SEE for anyone wanting a glimpse of their near (mobile) future. In it, CEO Takahito Iguchi delivers a presentation for the ages, garnering a standing ovation at the end for this crowd favorite. The video shows Iguchi walking through the streets of Tokyo holding an iPhone in front of him. On the iPhone's display, we see "tags" overlaid on top of the real-time images he points it at. I do not believe it was clear to anyone (because of Iguchi's English language barrier), but it was implied that these tags were:
1) Geotags (location-based)
2) User generated content
When asked how it worked, Iguchi replied "We have a patent!"... The room laughed hysterically at his reply. Based on the two aforementioned assumptions, the so-called expert panel panned the business model behind Sekai Camera. They asked and made comments like "What would happen if something that had been tagged was no longer there?" (in one example in the video, Iguchi points his iPhone at a camera for sale in a store window, and is shown a tag telling him more about the camera). I believe that, for lack of vision, these panelists dismissed Sekai Camera right then and there.
The panelists got it wrong!! We will see something like this soon, if not from Sekai Camera, then from others. In my vision for this model, I have already answered all the objections the panelists raised. The tags will come not just from user-generated geotagging; they will also arrive via Bluetooth, RFID, and barcode scanning, and from corporate databases of information about the items.
"What about too many tags on top of each other...won't there become tag overload?" someone asked. No, this too is easily solved by having filters. Each user will be able to choose exactly what tags they would want to see, when they would want to see them, and how they are alerted that a tag exists. For example, I might be touring New York City, and I could filter my tags to only show me points of interest. So as I approach the empire state building, a tag would alert me. Clicking on the tag would show me a picture and title "Empire State building", and allow me more options;
1) Read about the Empire State Building (could be Wikipedia-based).
2) See pictures of the Empire State Building (perhaps pulled from Flickr).
3) See videos of the Empire State Building (YouTube).
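The filtering idea above can be sketched in a few lines of code. To be clear, everything here (the category names, the 500-meter radius, the sample coordinates) is my own illustrative assumption, not anything Sekai Camera has described:

```python
# Hypothetical sketch of location-based tag filtering; all names and
# values here are illustrative assumptions, not Sekai Camera's design.
import math
from dataclasses import dataclass

@dataclass
class Tag:
    title: str
    category: str  # e.g. "poi", "food", "user"
    lat: float
    lon: float

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6_371_000  # Earth's mean radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_tags(tags, lat, lon, categories, radius_m=500):
    """Keep only tags in the user's chosen categories within radius_m."""
    return [t for t in tags
            if t.category in categories
            and distance_m(lat, lon, t.lat, t.lon) <= radius_m]

tags = [
    Tag("Empire State Building", "poi", 40.7484, -73.9857),
    Tag("Pizza place", "food", 40.7480, -73.9850),
    Tag("Random user note", "user", 40.7485, -73.9860),
]

# Touring midtown with only points of interest enabled:
print([t.title for t in visible_tags(tags, 40.7481, -73.9855, {"poi"})])
```

The same filter handles the overload objection: with dense tagging, the user simply narrows the categories (or the radius) until only what they care about remains on screen.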
After I have had my little virtual tour, perhaps I am hungry for lunch. I could simply go to my filter page and request that it show me "tags" for food around my location. As I point my smartphone at a restaurant, I will be able to see:
1) The restaurant's menu (Menupages.com)
2) User generated reviews (Yelp.com)
Well, you get the idea... I am just surprised that the panelists didn't. What do you think of Sekai Camera's business model (I think it is spot on and is the future), and what would you like to see from mobile?
Please leave your comments, Digg this, or Twitter me.