It was an interesting proof of concept. I'm not sure how much the parallax correction or full-screen toggling would help, but there seem to be many more possibilities. This is a great opportunity for Canonical or other developers to build a gesture-recognition framework that other developers can plug into and build applications on.
Think about it - a gesture can mute the sound, so you don't need to pause the music when a phone call comes in. Get up and leave, and the screen will lock itself. Stick out your tongue and it'll check your email. Heck, I'm sure soon enough Ubuntu will detect your face looking sad during debugging sessions and automatically show kitten pictures (Emacs probably does this already, I'm not sure).
Yes, this way of interacting with your PC could be a nice addition to existing ones like touch or voice recognition, particularly if it's part of a system-wide framework. Locking your screen when you are not in front of it (and eventually logging you back in as soon as your face is detected) could be a not-too-invasive first step in this direction.
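To make the idea concrete, here's roughly what I have in mind - a toy sketch using OpenCV's stock face cascade. The 10-second grace period and the gnome-screensaver-command call are my own choices, not anything Ubuntu ships:

    import subprocess
    import time

    import cv2

    # OpenCV's bundled Haar cascade for frontal faces
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    last_seen = time.time()

    while True:
        ok, frame = cam.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(cascade.detectMultiScale(gray, 1.3, 5)) > 0:
            last_seen = time.time()             # someone is still there
        elif time.time() - last_seen > 10:      # empty chair for 10 seconds
            subprocess.call(["gnome-screensaver-command", "--lock"])
            last_seen = time.time()             # don't re-lock every frame
        time.sleep(0.5)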
Instead of me waving my hands to mute the sound, I'd prefer the microphone to pick up my ringtone and mute itself. From a technical point of view, all of this has been possible for a couple of years now, no? So why wasn't it implemented earlier? Maybe people find running a webcam all the time too invasive, or the concepts too unintuitive?
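The microphone half, at least, would be trivial to prototype today. A crude stand-in - any sustained loud sound (a ringtone, say) mutes ALSA's master channel; the RMS threshold is a number I made up:

    import audioop      # stdlib (removed in Python 3.13)
    import subprocess

    import pyaudio

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=44100,
                     input=True, frames_per_buffer=1024)

    while True:
        chunk = stream.read(1024)
        if audioop.rms(chunk, 2) > 8000:    # "something loud is happening"
            subprocess.call(["amixer", "set", "Master", "mute"])
            break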
Remember when voice recognition was all the rage? When I was younger, a kid on the block got a copy of some voice recognition software - and he was the only one with a computer with enough horsepower to run it. Anyway, we all sat down and tried to train it, and given that we were in India and our accent isn't easily understood by most humans, let alone computers, we did see some results.
That kid would have the time of his life saying "Open solitaire" loudly and clearly into the microphone a gazillion times, and the system would interpret it as "delete my homework" or "email grandma all my private files".
All this seems pretty neat, but when it comes to just getting a little work done, I'm still skeptical about how much touch, motion, gestures, speech, etc. will actually help.
I am very excited about gaze tracking, though. I don't think it'll be too long before I can switch from one window to another just by looking at it and twitching my eye.
> All this seems pretty neat, but when it comes to just getting a little work done, I'm still skeptical about how much touch, motion, gestures, speech, etc. will actually help.
Probably major advantages for people with physical disabilities and repetitive strain injuries.
> I don't think it'll be too long before I can switch from one window to another just by looking at it and twitching my eye.
Most X11 window managers support a Focus Follows Mouse option, where simply hovering over a different window makes it the active window and, after a short delay, brings it to the front of the window stack if necessary. Just combine this with Multi-Pointer X and a gaze-tracking camera that acts as another mouse input, and you could have what you're looking for...
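As a rough sketch of the glue: pretend a gaze_tracker module exists (it doesn't - it's a placeholder for whatever the camera software produces) and just steer the X11 pointer with python-xlib; any focus-follows-mouse window manager does the rest:

    from Xlib import display

    import gaze_tracker  # hypothetical module standing in for the camera

    d = display.Display()
    root = d.screen().root

    # assumed: gaze_tracker.coordinates() yields (x, y) screen positions
    for x, y in gaze_tracker.coordinates():
        root.warp_pointer(x, y)   # park the pointer where you're looking
        d.sync()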
I'm pretty sure I would want things to happen more or less only when I take some action with my hands (gaze AND click/keypress). Things opening, closing, raising, lowering, or otherwise flying around purely on the basis of my gaze would probably be disorienting, or there would be a lot of false positives. ("I don't WANT that window to raise!")
Windows don't raise unless clicked; they only take keyboard focus when hovered. Of course, desktop environments like GNOME and KDE no longer do this by default, to accommodate converts from Windows and Mac OS.
Edit: the focus-stealing prevention already built into window managers could be combined with the gaze tracking to allow you to type into one window while reading from another.
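For anyone curious what focus-without-raise looks like at the X11 level, here's a bare-bones illustration with python-xlib. A real window manager does far more (and you wouldn't run this alongside one), and it ignores windows created after startup:

    from Xlib import X, display

    d = display.Display()
    root = d.screen().root

    # ask X to report whenever the pointer enters a top-level window
    for win in root.query_tree().children:
        win.change_attributes(event_mask=X.EnterWindowMask)

    while True:
        ev = d.next_event()
        if ev.type == X.EnterNotify:
            # give the window keyboard focus, but never raise it, so the
            # stacking order (what you're reading) stays put
            ev.window.set_input_focus(X.RevertToParent, X.CurrentTime)
            d.sync()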
I think Ubuntu first needs to brush up its desktop UI. There are still many little annoyances.
For my use case (dev + web) it's superior to Windows, but it's no OS X yet. And it wouldn't take that much to make it awesome.
Example from 10 seconds ago: resizing the browser window. I had to use Alt + middle mouse button, because otherwise the resize area in the corner is too small.
> You know a company can work on more than one thing at a time.
I too feel that they need to experiment and push the envelope. But Canonical doesn't seem to have _that_ many developers, so they can't focus on too many things at once. Of course it's run by some very experienced people, and I'm not one to complain. This seems more like a cool hack that could spawn an independent community project, which might eventually become part of the desktop.
They have a group of designers/developers specifically dedicated to UI development. I remember Mark saying that the changes will be incremental rather than drastic, rolled out over time as they experiment and get feedback from the community.
If you look back at the last two Ubuntu releases, you will see there have been quite a few changes here and there; they all add up over time.
I find it's actually quicker and easier to move a window this way, compared to the regular "locate the right-hand corner, drag and release". Although the combo I use is a little easier than yours (I have resizing mapped to Super + left mouse button).
I was a solid Windows user for many, many years, and now I find myself getting frustrated when I have to hunt for the edge of a window. Maybe it's just me...
If I remember correctly, corner resizing has been a known bug for years (no time to look it up now). It's especially annoying when using a trackpad, but nobody wants to bother fixing it, I guess. I can't believe this doesn't annoy a single Canonical developer enough to fix it.
> I can't believe this doesn't annoy a single Canonical developer enough to fix it.
The cool thing about open source is that anyone can fix this bug.
That said, it's pretty annoying and I wish Canonical would devote some developer time to fixing it.
Edit: I don't get the downvote. It really is a beautiful aspect of open source: if you need something fixed in a certain way, you can fix it yourself; you are no longer beholden to anyone.
I didn't downvote you, but my perspective on the 'if you want it fixed, do it yourself' thing is this: I've been tinkering with computers for a really long time. I've had some fun times messing around with Linux, tweaking it, getting it set up, etc.
But I'm at the time in my life right now where it's no longer fun jumping through 100 hoops to get something working at an acceptable level. I just don't have time to fix it myself. I have my own codebase and business to take care of, and I want my OS to work with minimum fuss without me having to dig into the Metacity codebase and patch things myself.
It's very true that anyone can fix it themselves, and it is indeed a beautiful aspect of Linux and OSS. But things like resizing windows should just work at this point, because I don't have the time to mess around anymore.
Things like 'focus follows mouse' that I want will never work on Mac OS, because I cannot tinker with the OS.
Clearly, things are better if stuff 'just works', but if you have a critical need that isn't being met, open source at least gives you the possibility to fix it, even if it's "expensive".
You can (or could) also get focus follows mouse on Windows using one of the PowerToys applications.
OS X may be as hackable as Linux, but it doesn't seem as easy to find out how. It took me days to find the OS X equivalent of xorg.conf mode lines, for example.
I agree that there are parts of the UI that could use some fixes (I think papercuts is the term people are throwing around these days).
But are you kidding me? The Alt+mouse-button combos for moving and resizing windows are probably one of my favorite features of the desktop interface. The moment a coworker or friend sees me do it, they immediately start telling me how "awesome" it is and plead with me to "make it work" on their Windows or Mac PC.
Agreed, but they also need to work with independent app developers to encourage/ensure consistency. Consider what happened to me yesterday: I was unplugged and wanted to see how my battery was doing. You don't do that via the Power Management console, although that is where you change all of the battery/AC settings. You have to left-click the battery icon in the top panel and go to Preferences. Arrrgh.
Is it Chrome? I don't seem to have a resize corner at all in Chrome on Windows. Firefox with Vimperator on Ubuntu has one, though it's small and not clearly called out.
For fullscreen video, I set "fs" in my .mplayer/config. For notifications while I'm away from my computer, I use my phone. For seeing behind windows, I simply don't allow windows to overlap. (How do I know if something is on another desktop? I don't -- that's the window manager's job too. I press S-a to spawn or go to emacs. I press S-s to go to a web browser (or create one). I press S-d to go to the next urxvt. Instead of making my brain remember where my apps are, or making my eyes look for them, I just let the computer do that for me.)
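If you want the same jump-or-spawn behavior without my whole setup, the trick fits in a few lines around wmctrl, bound to whatever keys your window manager lets you bind. I'm assuming wmctrl exits nonzero when no window matches, and that "emacs.Emacs" is your Emacs's WM_CLASS:

    import subprocess

    def focus_or_spawn(wm_class, command):
        # `wmctrl -x -a` activates the first window whose WM_CLASS matches
        if subprocess.call(["wmctrl", "-x", "-a", wm_class]) != 0:
            subprocess.Popen(command)   # not running yet, so start it

    focus_or_spawn("emacs.Emacs", ["emacs"])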
Anyway, I guess these things make the computer seem "cooler", but I think they all kill your productivity. Only show me notifications every 45 minutes, unless they're tagged as work and it's between 9 and 5. Manage my windows for me. Make the video full screen if the content is longer than a minute. Then let me use my 8 CPU cores for something other than popup boxes saying my friend is online.
> If the user moves further from the screen while a video is playing on the focused window, the video will go automatically to fullscreen.
No.
It shouldn't do this; I don't want this to happen. The computer should do what I tell it to (making it go fullscreen is just one keypress, FFS), not what it thinks I want, because it'll inevitably get it wrong from time to time.
Do you want your screen saver to turn on only when you click a button? There are a number of behaviors that you want your computer to do automatically without your input and a proximity sensor can help it do these things more intelligently. If auto full-screening a video is not one of them, you can always turn that off.
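The heuristic could be as dumb as "how wide is your face in the webcam frame". A toy version - the 120-pixel threshold is arbitrary and would need calibration, and I'm using xdotool to fake the "f" key that mplayer (among others) treats as a fullscreen toggle:

    import subprocess

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    is_fullscreen = False

    while True:
        ok, frame = cam.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        # a smaller face in the frame ~= farther from the screen
        far_away = max(w for (x, y, w, h) in faces) < 120
        if far_away != is_fullscreen:
            subprocess.call(["xdotool", "key", "f"])
            is_fullscreen = far_away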
We are making the web browser aware of gestures (http://api.alii.tv/, http://pprevolution.com). For now it is still a plugin-based technology (it only works on Windows and Mac), but we do plan to leverage Native Client to make it more accessible.
> Fullscreen notifications
> If the user is not in front of the screen, the notifications could be shown at
> fullscreen so the user can still read them from a different location.
This sounds like an excellent way to play pranks on co-workers. "Bob, your shipment of Viagra has just arrived!". The trolling is on.