Future Lab: Context Aware

April 4th, 2011


One of the next frontiers of computing is to create systems that understand the user. Context-aware devices of the near future might recommend restaurants, monitor a user's health, or screen phone calls—all based on information collected from device sensors and casual input data. Two of the most powerful sensors are already on our devices: the microphone and the camera. In this episode of Future Lab we talk with researchers who are exploring how to make devices context aware.

Andrew Campbell, Professor of Computer Science, Dartmouth College
Rosalind Picard, Professor of Media Arts and Sciences, MIT
Lama Nachman, Senior Researcher, Intel Labs

Mobile Sensing Group at Dartmouth

See photos on Flickr.

See also:
Raising the IQ on Smartphones

Future Lab Radio is sponsored by Intel Labs and is available on Intel Free Press and through iTunes.

Posted in: Audio Podcast, Future Lab Radio, Intel, Intel Free Press, Intel Labs, Research@Intel