Monday, December 15, 2008

Nice Pen-based Input Research

One of the things I enjoy using this blog for is sharing cool projects from Human-Computer Interaction (HCI) research. This post highlights projects by Gonzalo Ramos (or "Gonzo" for short) and his co-authors. He has worked on several projects demonstrating how much better pen input software could be. These are just a few I like.

1. The Zlider - A pressure-sensitive slider widget that adds navigation and control capability beyond standard slider interactions. Academic research video below. There's a quick demo montage at the beginning, but the demo meat is at 3:07.


2. Using a Pen to Effortlessly Bridge Displays. Using a stylus, you can simply drag documents between computer screens or mobile devices. The pen motion also implicitly defines the orientation of the displays relative to one another. Academic video below. Demos at the beginning, and more mobile screen scenarios at around 2:43.


3. Rolling the Pen as Input. Using an external tracker and a Wacom tablet, rotating the pen in your fingers can be used to control an additional parameter without moving the stylus. Academic video below; demo meat at 2:19.


You can check out more of his projects on his website.

Thursday, November 20, 2008

Some great Wiimote IR tracking projects

I've decided to collect some of my favorite projects I've seen people do with my Wiimote projects, derivatives of them, or things distantly inspired by them (by the creators' own admission). It's surprising, and flattering, to see how many people are happy to credit me. Thanks all! The list gets more "unusual" the further down you go.

Two Wiimote Whiteboards used to make a competitive relay race:


Great IR wands for the Wiimote whiteboard. I've been meaning to make these, but I haven't gotten to it yet.


Some nice two-handed, two-finger pinching systems:




Wiimote Wheelchair art. Unfortunately, there's no video, but there's more information at this link.


Head tracking prototypes with Anime assets. The effect of the girl coming out of the screen (about halfway through the video) is very nicely done with the "haze" layer. His other videos are also worth checking out. I don't know what he does for a living, but he's good at it.


Wii Theremin gallantly created/performed by Ken Moore:


Finally, a video on "chicken head tracking". It doesn't use the Wii remote, but was posted as a response to my video and I love it!

Thursday, November 13, 2008

Scratch Input and Low-Cost Multi-spectral material sensor

Chris Harrison, a PhD student in my old program at CMU, presented a couple of his projects at UIST 2008 that I really, really like. The first is his "Scratch Input" device. The basic idea is that if you place a sensitive microphone on the bottom of a mobile device, any large, hard surface you set it down on can be used as a gesture input surface. A variety of gestures can be distinctly and reliably detected with some simple machine learning. The (academic) video below includes a nice demo where he turns his entire wall into an MP3 player controller:



The other project he presented was a simple, cheap multi-spectral sensor for recognizing various materials. It includes an IR LED, a UV LED, an RGB LED, a photoresistor, and a TAOS TSL230 optical sensor. With these, he read the reflectivity under different illuminations to recognize 27 different materials with 86.9% accuracy, whether that's your jeans, your backpack, your desk at home, or your desk at work. This means coarse location awareness for mobile devices on the cheap, some opportunities for more intelligent power management, and implicit security behaviors when the device is placed on familiar or unfamiliar surfaces. Very nice work.
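To make the recognition step concrete, here's a minimal sketch of the idea, assuming a simple nearest-neighbor match on reflectance vectors. The channel names and all the numbers are invented for illustration; the real system uses more measurements and many more materials.

```python
# Hypothetical sketch: classify a surface material by nearest-neighbor
# matching on reflectance readings taken under different illuminants.
# Channels and values below are made up; the real sensor reads more.

def classify(sample, references):
    """Return the material label whose reference vector is closest
    (squared Euclidean distance) to the measured sample."""
    best_label, best_dist = None, float("inf")
    for label, ref in references.items():
        dist = sum((s - r) ** 2 for s, r in zip(sample, ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Reference reflectance vectors: (IR, UV, red, green, blue) -- invented.
REFERENCES = {
    "denim":  (0.30, 0.10, 0.20, 0.25, 0.60),
    "wood":   (0.70, 0.05, 0.55, 0.40, 0.25),
    "fabric": (0.45, 0.20, 0.35, 0.35, 0.35),
}

print(classify((0.68, 0.06, 0.50, 0.42, 0.22), REFERENCES))  # -> wood
```

In practice you'd collect several samples per material and normalize for ambient light, but the matching idea is this simple.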

Friday, November 7, 2008

SurfaceWare - sensing glasses for Surface

My colleague Paul Dietz, in the Applied Sciences group, released a video of one of the first projects he did after joining Microsoft. These glasses use the transparent material of the glass as a prism, sensing the amount of liquid in the glass by watching how much IR light is internally reflected. Check out the video:



If you aren't familiar with how Surface works: it is a rear-projected table with a bright IR emitter inside that illuminates objects placed on the surface, which are then visible to an IR camera. The video does a good job explaining how the glasses work.

This is actually a revisit of an older project of Paul's called iGlassware. That one used passively powered RFID sensor tags in the base of the glass to capacitively measure the liquid level; the table had a big RFID antenna in it. Paul was also a key developer of Mitsubishi Electric Research Labs' DiamondTouch table, skillfully demonstrated by Ed Tse below.



Ed is currently at Smart Technologies, where he helped push out their new touch table:

Thursday, October 9, 2008

Andy Wilson

I was re-watching some videos of work done by one of my colleagues, Andy Wilson, and I don't think his work gets as much attention as it deserves given how amazing it is. If you think my stuff is cool, you should bow down to his greatness... or at least watch these videos.




Thursday, September 4, 2008

Working with the PixArt camera directly

This has been a pretty whirlwind past few months. Lots of things have happened, almost none of them procrastineering-related, which is why I haven't posted anything here. But one of the things I have poked at in the past few weeks was creating a PixArt-to-USB-HID device, which allows the camera from the Wiimote to appear as a relatively easy-to-access USB device. This addresses several problems with using the Wiimote, such as running off batteries for extended periods and flaky platform-specific Bluetooth drivers. It's also possible to read from the PixArt camera at over 100Hz if you read directly via I2C, as well as track visible dots once you remove the IR filter. Of course, none of this was discovered by me. All credit belongs to the numerous individuals who have contributed their knowledge to the various Wiimote hacking websites. Normally, this project wouldn't be worth a post, but all the information on how to do this is pretty scattered and difficult to follow. So, I figured I would contribute by trying to make it all a bit clearer.


This project is fairly advanced. You must be comfortable working with microcontrollers. Several simpler platforms such as the Arduino or the Basic Stamp may work, but I used the PIC18F4550 microcontroller, which provides built-in full-speed USB capabilities. But first, let's talk about the PixArt camera:

Here's the pinout, thanks to kako, and a PCB picture. The Reset pin is active low, so use a pullup resistor to Vcc. The Wiimote runs the camera with a 25MHz clock, but it also works with a 20MHz clock, so you might get away with fudging this a bit. The I2C communication is fast-mode 400kHz and the slave device address is 0xB0. Most microcontroller development platforms should include I2C communication capabilities. If yours doesn't, get a better dev kit =o). Desoldering the camera can be hard with so many pins, but careful use of a hot air gun will do the trick. The first step is to initialize the camera over I2C. Here's the pseudocode for initializing to maximum sensitivity (actual CCS C code in comments):

  1. write(hex): B0 30 01
  2. wait 100ms
  3. write(hex): B0 00 00 00 00 00 00 00 90 //sensitivity part 1
  4. wait 100ms
  5. write (hex): B0 07 00 41 //sensitivity part 2
  6. wait 100ms
  7. write(hex): B0 1A 40 00 //sensitivity part 3
  8. wait 100ms
  9. write(hex): B0 33 03 //sets the mode
  10. wait 100ms
  11. write(hex): B0 30 08
  12. wait 100ms
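The same sequence can also be expressed as plain data and replayed. Here's a sketch of that idea (in Python for readability; on the PIC it would be CCS C), where `i2c_write` and `sleep_ms` are placeholders for whatever primitives your platform provides:

```python
# The init sequence above as data. i2c_write/sleep_ms are placeholders
# for your microcontroller library's I2C write and delay functions.

INIT_SEQUENCE = [
    bytes([0xB0, 0x30, 0x01]),
    bytes([0xB0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x90]),  # sensitivity part 1
    bytes([0xB0, 0x07, 0x00, 0x41]),                                # sensitivity part 2
    bytes([0xB0, 0x1A, 0x40, 0x00]),                                # sensitivity part 3
    bytes([0xB0, 0x33, 0x03]),                                      # sets the mode
    bytes([0xB0, 0x30, 0x08]),
]

def init_camera(i2c_write, sleep_ms):
    """Send each init message, with the conservative 100ms wait between writes."""
    for msg in INIT_SEQUENCE:
        i2c_write(msg)
        sleep_ms(100)
```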

It's still somewhat mysterious to me what all these mean, but buried in this mess are the sensitivity and mode settings described at Wiibrew. The above code uses the sensitivity setting suggested by inio, "00 00 00 00 00 00 90 00 41, 40 00", expressed in the 2nd, 3rd, and 4th write messages. The wait times are conservatively long. After you initialize, you can read samples from it:

  1. write(hex): B0 37 //prepare for reading
  2. wait 25us
  3. write(hex): B1 //read request
  4. read 8 bytes
  5. wait 380us
  6. write(hex): B1 //read request
  7. read 4 bytes

This yields one sample from the camera containing 12 bytes, 3 for each of the 4 potential points. The data is in Extended Mode format (X low byte, Y low byte, then the Y 2 MSBs, X 2 MSBs, and a 4-bit size packed into the third byte). The wait timings approximate what the Wiimote does. I've called this routine 1000 times per second without ill effect, though I doubt this actually rescans the sensor; it's probably just reporting the contents of an internal buffer. People claim 200Hz updates are possible, so you can use that as a guideline.
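Decoding that 12-byte sample can be sketched like this (Python for clarity; the bit layout follows the Extended Mode description above, and I'm assuming an absent point reads as all-0xFF bytes, which is what the Wiimote reports):

```python
# Decode one 12-byte Extended Mode sample into 4 (x, y, size) points.
# Each point is 3 bytes: X low byte, Y low byte, then a byte packing
# Y bits 9-8 (bits 7-6), X bits 9-8 (bits 5-4), and a 4-bit size.

def parse_extended(sample):
    """sample: 12 bytes from the camera -> list of four (x, y, size) tuples.
    x, y are 10-bit coordinates; an absent point reads as all 0xFF."""
    points = []
    for i in range(0, 12, 3):
        b0, b1, b2 = sample[i], sample[i + 1], sample[i + 2]
        x = b0 | ((b2 >> 4) & 0x3) << 8
        y = b1 | ((b2 >> 6) & 0x3) << 8
        size = b2 & 0x0F
        points.append((x, y, size))
    return points

# One visible dot at (512, 384) with size 5, three empty slots:
sample = bytes([0x00, 0x80, 0b01_10_0101]) + bytes([0xFF] * 9)
print(parse_extended(sample)[0])  # -> (512, 384, 5)
```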

Hooking this up to your microcontroller is pretty straightforward: give the camera 3.3v power using a voltage regulator, ground, and a 20-25MHz clock; connect the SDA and SCL lines (don't forget your pullup resistors); and pull up the reset pin.

The CCS C compiler for the PIC18F4550 includes USB-HID sample code. It's simply a matter of stuffing the data you get from the PixArt camera into the USB input report buffers. With this, you could actually create a USB mouse profile and have it control the cursor without any software or drivers at all. If you set it up as a full-speed device, it's possible to get 1ms reports, providing extremely low latency updates. CCS provides relatively affordable PIC programmers as well. Explaining how to set all this up is beyond the scope of this post, but it should be plenty to get you started. If you want to make a PCB, you can try ExpressPCB, which can get boards in hand for as low as $60.

Update 9/6/08: Just a note about the clock. Since my PIC was using a 20MHz resonator, I just piggybacked the PixArt clock pin off the OSC2/CLKO pin of the PIC, which seemed to work fine. Also, kako has more details (in Japanese) on doing this with an Arduino.

Monday, June 23, 2008

More Wiimote Projects - A Brain Dump

It’s been a while since I’ve posted anything. That’s largely because I’ve been traveling a lot, giving talks, and most recently relocating to a new city. It became clear to me a while ago that I wasn’t going to get around to making more videos anytime soon. So, I figured I would make a post about the projects that I would probably make videos of if I had more free time. The content of this post has been in the talks that I’ve been giving, but I’m just sitting down to write it out now for my trusty blog readers.

1. Throwable Displays using the Wii remote

This one I actually built and demoed in my lab at CMU. But it only existed for about two days before I had to break it down to move, and I didn't get a chance to document it. Several months ago, a patent filed by Philips made some of the tech news sites, about throwable displays in games. But it was a concept patent pretty far from a working demo. It turns out it's pretty easy to implement using a projector, a Wiimote, an IR emitter, and some of our trusty retro-reflective tape. It essentially combines the techniques from the finger tracking and Wiimote whiteboard projects. You put a little bit of reflective tape on each corner of a square piece of foam core, turn on the IR emitter so the Wiimote can see the four corners, align the camera tracking data with the projector using the 4-point calibration, and then the projector can display images perfectly aligned to the edges of a moving piece of foam core. The process of using a projector to augment the appearance of objects is called "Spatially Augmented Reality".

Research colleagues of mine made a really fun demo where they tracked an air hockey puck from above and projected down onto the table to display all sorts of visual effects responding to the location and motion of the puck. They were demonstrating a fancy new type of high-speed tracking system, but the Wiimote works quite well at 100Hz. I wish I had documented the throwable display on video, because it worked quite well. You really could pick it up and throw it around, and the video image stayed fairly locked onto the surface; there's a small latency primarily due to the 60Hz refresh of the projector. I even made a rough demo of the air hockey table, but it was VERY rough - it just drew a line tail behind the puck. Again, a little patch of reflective tape on the puck and an IR-ring-illuminated Wiimote above. The throwable display concept is actually a simpler implementation of an earlier project of mine on "Foldable Displays" (tracked using a Wii remote), which I did make a video of, but not in tutorial format like my other Wii videos:
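The 4-point calibration step mentioned above boils down to solving for a homography: the 3x3 projective transform mapping the four marker positions seen by the camera onto the four display corners in projector coordinates. Here's a minimal pure-Python sketch of that math (not the actual project code; the point coordinates are made up):

```python
# 4-point calibration sketch: solve for the homography mapping camera
# coordinates of four markers onto projector coordinates of four corners.

def solve(A, b):
    """Gaussian elimination with partial pivoting for the 8x8 system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """src, dst: four (x, y) pairs. Returns 9 entries of H (h22 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b) + [1.0]

def apply(h, pt):
    """Map a camera point through H into projector coordinates."""
    x, y = pt
    w = h[6] * x + h[7] * y + h[8]
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# Camera sees the four corner markers here...   ...projector should hit here:
cam  = [(100, 120), (900, 140), (880, 700), (120, 680)]
proj = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
H = homography(cam, proj)
print(apply(H, (100, 120)))  # maps the first marker to ~(0.0, 0.0)
```

Once H is known, every tracked camera point can be warped into projector space, which is all the whiteboard-style calibration needs.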

2. 3D tracking using two (or more) Wii remotes

Since the tracking in the Wiimote is done with a camera, if you have two cameras you can do simple stereo-vision triangulation to get full 3D motion capture for about $100. This was actually already done by some people at the University of Cambridge:

This is a textbook computer vision algorithm, but I haven't gotten around to making a C# implementation. Obviously, you can use more than two Wii remotes to increase tracking stability as well as occlusion tolerance. This would be a VERY useful and popular utility if anyone out there wants to make a nice software tool that turns multiple Wiimotes into a cheap mocap system.
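The core triangulation step can be sketched in a few lines: each Wiimote gives a ray from its camera center through the observed dot, and the 3D point is estimated as the midpoint of the shortest segment between the two rays. This assumes the camera poses and the pixel-to-ray conversion are already known from a prior calibration.

```python
# Triangulation sketch: closest-approach midpoint between two camera rays.
# Camera poses and pixel-to-ray conversion are assumed already calibrated.

def triangulate(o1, d1, o2, d2):
    """Rays given as origin o and direction d (3-tuples). Returns the
    midpoint of the closest approach between the two rays."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def along(o, d, t): return tuple(o[i] + t * d[i] for i in range(3))
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1, p2 = along(o1, d1, t1), along(o2, d2, t2)
    return tuple((p1[i] + p2[i]) / 2 for i in range(3))

# Two cameras a meter apart, both looking at a point at (0.5, 0, 2):
print(triangulate((0, 0, 0), (0.5, 0, 2), (1, 0, 0), (-0.5, 0, 2)))
# -> (0.5, 0.0, 2.0)
```

With more than two Wiimotes you'd triangulate each pair and average, or do a proper least-squares fit.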

3. Universal Pointer using the Wii remote

The nice thing about the camera is that it can detect multiple points in different configurations. The four dots could be used to create a set of barcode-like or glyph-like identifiers above each screen in a multi-display environment. This would not only provide pointing functionality on each screen, but also provide screen ID which means you could interact with any cooperating computer simply by pointing at its screen. No fumbling for the mouse and keyboard, just walk around the room, or office building, or campus, and point at a screen. If all the computers were networked, you could carry files with your Wiimote virtually (using the controller ID) letting you copy/paste or otherwise manipulate documents across arbitrary screens regardless of what computer is driving the display or what input device is attached to the computer. You just carry your universal pointer that works on any screen, anywhere automatically. This makes a big infrastructure assumption, but it really alters the way one could interact with computational environments. The computers disappear and it becomes just a bunch of screens and your universal pointer.
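One hypothetical sketch of such a barcode-like identifier, just to make the idea concrete (the post doesn't specify an encoding, so everything here is invented): put three IR dots in a row above each screen and encode the screen ID in the ratio of the two gaps, which survives uniform scaling as you move closer or farther (though not full perspective distortion).

```python
# Hypothetical screen-ID scheme: three collinear IR dots per screen,
# with the ID encoded in the quantized ratio of the two gaps.
# Invented for illustration; not from the original post.

def screen_id(xs, levels=4):
    """xs: x-coordinates of three collinear dots (any order).
    Returns a small integer ID from the quantized gap ratio."""
    a, b, c = sorted(xs)
    ratio = (b - a) / (c - a)  # in (0, 1); invariant to scale/translation
    return min(int(ratio * levels), levels - 1)

# Dots at 0, 100, 400 -> gap ratio 0.25 -> ID 1, at any viewing distance:
print(screen_id([0, 100, 400]))  # -> 1
print(screen_id([0, 50, 200]))   # same pattern, half as far -> 1
```

A real deployment would need more dots (the camera tracks four) and perspective-robust invariants, but this shows how geometry alone can carry an ID.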

Similarly, arbitrary objects could have unique IR identifiers. For example, if each lamp in your house had a uniquely shaped Wii sensor bar on it (and they were computer-controlled lamps, of course), you could turn on a specific lamp simply by pointing at it and pressing a button, or dim it by rotating the Wiimote. If it were an RGB LED lamp, you could specify brightness, hue, and saturation with a quick gesture.

4. Laser Tag using Wii remotes

If you put IR LEDs on each of the Wii remotes, they can see each other. So you could have a laser-tag-like interaction using just Wii remotes - no display, except perhaps a big scoreboard if you wanted one. You'd have to validate which Wii remote you were shooting at, which you could do using some kind of IR LED blink sequence for confirmation. Just wire the IR LEDs to the LED outputs built into the Wii remote, so you can computer-control their illumination.

5. IR tracking with ID using the Wii remote

This is more technical (and related to the above idea), but it addresses an important issue that I have yet to see solved in either commercial or research systems. The problem with IR blob tracking using cameras is that you can't tell which blob is which. You could blink the LEDs to broadcast their IDs, but this 1) would be slow, because the ID data rate is limited by the frame rate of the camera, and 2) really hurts your tracking rate and reliability, because you don't know where the dot is when the LED is off. Now, the Wii remote's camera chip gives 100Hz updates, which might be tolerable for a small number of IDs, but this approach doesn't really work well when you want fast tracking with lots of unique IDs. One solution is to attach a high-speed IR receiver to the side of the Wii remote for data transmission and use the camera only for location tracking. The IR receivers used in your TV probably support data rates of around 4000 bps - much higher than the 50 bps sampling limit you could squeeze out of the Wii remote. So, as the LEDs furiously blink their IDs at 4Kbps, they look constantly on to the camera. This yields good tracking as well as many IDs. Now, when you have multiple LEDs transmitting simultaneously, you'll get packet collisions, so some type of collision avoidance scheme would be needed, of which there are many to choose from. It will also be necessary to re-associate each data packet with a visible dot, so not all the LEDs can be visible all the time. But you only have to sacrifice a small number of camera frames to support a large number of IDs. You can probably also boost performance if you're willing to accept short-term probabilistic ID association.
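A quick back-of-the-envelope on those numbers shows why the receiver path wins. Assuming a 16-bit ID packet (8-bit ID plus framing/checksum, which is my assumption, not the post's), a 4000 bps IR receiver moves far more IDs per second than blinking at the camera's frame rate ever could, and since the LEDs toggle far faster than the camera's exposure, they still appear constantly lit for tracking:

```python
# Back-of-the-envelope: ID packets per second over the high-speed IR
# receiver (4000 bps) vs. blinking at the camera frame rate (100 Hz).
# PACKET_BITS is an assumption: 8-bit ID plus framing/checksum.

def ids_per_second(bps, bits_per_packet):
    """How many complete ID packets fit in one second at a given bit rate."""
    return bps // bits_per_packet

PACKET_BITS = 16

print(ids_per_second(4000, PACKET_BITS))  # receiver path -> 250 packets/s
print(ids_per_second(100, PACKET_BITS))   # camera-blink path -> 6 packets/s
```

Even with collision-avoidance overhead eating into the 250 packets/s, there's plenty of headroom for dozens of simultaneously tracked LEDs.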