By sensing what you're doing (e.g. reading, sleeping, eating, etc.) using depth sensors, it changes the orientations of the lights to ensure you always have the perfect lighting, no fumbling around with switches needed.
I started this project in my spare time in December 2013. I intend to open-source everything when I find the time.
See Myra in action:
The Objective
The motivation behind this project is simple: to free people's lives from the constraints of the "fixed" lighting system.
Conventional room lighting is static (or only mildly flexible).
- For example, many rooms (especially in Europe) are designed with a dark ambience, with a few spots of light.
This is aesthetically fine, but what if you decide that you want to read a book in the middle of the room? You can't.
With conventional lights, people's activities (e.g. reading) are constrained to particular locations (e.g. next to reading lamps).
- Another example is when a single room serves multiple functions, like a combined living and dining room (as is often the case in small countries like Japan).
It would make sense to be able to change the lighting based on how you're currently using the room;
when you're eating, you want to light up the table and hide everything else,
but perhaps if you're watching TV, you just want some light on the walls.
Again, a conventional static lighting system wouldn't allow this (at least not easily), so when a room is multi-functional, people have to compromise and live with suboptimal, unattractive lighting.
Such problems could be solved if we had a lighting system that:
(a) can orient light and shadow in any configuration
(b) does (a) with minimal user input (preferably completely autonomously)
This is the idea behind building Myra, a robotic spotlight array that will configure itself automatically by recognizing the residents' activities.
The Prototype
The Myra system consists of three components:
(1) The Myra Light: this is a robotic arm with an LED. An array of these is placed around the room and controlled by a central PC.
(2) An RGBD sensor (e.g. Microsoft Kinect, or Asus Xtion): this allows the PC to "see" where the residents are, and what they're doing.
(3) A PC: the PC analyses the readings from the sensor to classify user activities, decides how to light the room, and sends commands to individual Myra lights.
Components of the Myra system; an array of Myra lights (left) and an RGBD sensor (right) |
(1) Building the Myra Light
○Hardware
The hardware is very simple (schematics etc. will be uploaded at some point).
It is an Arduino-controlled wireless robot arm with a high-powered LED.
The LED is fitted with a lens, which gives a fairly strong light beam with a 15° arc.
A Myra Light on my curtain rail |
BOM (for a single Myra light):
- GWS Servo S03T $10×2 = $20
- Servo Brackets (AGBL-S03T) $17×2 = $34
- High Power 3W LED (e.g. OSM5XME3C1S) $2
- LED holder+lens $1
- components for a stand-alone Arduino ~$5
- 315MHz receiver $3
The entire setup (for a single light) comes to about $65.
I have attached all of my lights to the curtain rail, with an attachment I made out of plywood.
You also need a 315MHz transmitter connected to your PC to control the lights wirelessly.
○Software:
The software that runs on the Myra light is a very simple Arduino sketch:
receive an orientation (θ, φ) and an intensity (k) from the 315MHz receiver with VirtualWire.h, then set the orientations of the two servos (and the intensity of the LED) accordingly.
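As a rough illustration of the other end of this link, here is how the controller could pack those three values into a frame before handing them to the 315MHz transmitter. The 5-byte layout and the MyraCommandEncoder class below (written in Java, like the controller described later) are just a sketch for illustration, not the actual protocol.

```java
import java.io.IOException;
import java.io.OutputStream;

/**
 * Illustrative encoder for a Myra light command. The actual byte layout used
 * over the 315MHz link isn't shown here; this sketch assumes a 5-byte frame:
 * [lightId, theta, phi, intensity, checksum].
 */
public class MyraCommandEncoder {

    /** Pack a command into a small frame. Angles are servo degrees (0-180), intensity 0-255. */
    public static byte[] encode(int lightId, int thetaDeg, int phiDeg, int intensity) {
        byte[] frame = new byte[5];
        frame[0] = (byte) lightId;
        frame[1] = (byte) clamp(thetaDeg, 0, 180);
        frame[2] = (byte) clamp(phiDeg, 0, 180);
        frame[3] = (byte) clamp(intensity, 0, 255);
        // Simple additive checksum so the receiving sketch can drop corrupted frames.
        frame[4] = (byte) (frame[0] + frame[1] + frame[2] + frame[3]);
        return frame;
    }

    /** Write one command to whatever stream talks to the 315MHz transmitter (e.g. a serial port). */
    public static void send(OutputStream toTransmitter, int lightId, int thetaDeg, int phiDeg, int intensity)
            throws IOException {
        toTransmitter.write(encode(lightId, thetaDeg, phiDeg, intensity));
        toTransmitter.flush();
    }

    private static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```

A matching Arduino sketch would read such a frame with VirtualWire's vw_get_message(), verify the checksum, and write the two angles to the servos and the intensity to the LED pin.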
(2) Setting up the RGBD sensor
Any off-the-shelf RGBD sensor will do.
I used an Xtion Pro Live, because it requires no external power and so is easier to handle.
(3) Developing the controller PC
All of the intelligent decisions take place on a central PC that reads input from the sensor and sends commands to the Myra lights.
The current system is written in Java, using OpenNI and NiTE for processing the RGBD sensor feed.
Visualizations are done using JMonkeyEngine.
The figure below shows the control pipeline for the Myra system:
Control pipeline for Myra |
I won't go into the specifics of each step (I'll do that in a future post), but here is what each step does.
1) You live your life. Myra requires no user input.
2) The RGBD sensor collects a point cloud of your room at 60Hz.
3) Myra uses NiTE to extract people from the point cloud. NiTE automatically tags people and their features (e.g. head, shoulders, hands).
4) This is where much of the hard work happens. Myra first classifies each person's state into one of five activities: reading, watching TV, standing, walking, or sleeping.
Then, "lighting targets" are set according to the person's activity and posture.
For example, if a person is "reading", then a target is set on the person's arm closest to their head.
-> this allows Myra to continually adjust the light so that your book is never in shadow.
If the person is "walking", multiple targets are set: one at the predicted destination, and others at various features in the path up to the destination.
-> this is useful when, say, you are watching a film and need to go to the toilet, or you get up in the middle of the night for a glass of water. There's no need to find the light switch, because Myra automatically lights the way to the door without disturbing anyone else. (A rough sketch of this target-generation and light-assignment logic is given after the list.)
5) After lighting targets have been found, Myra assigns a robot light to point at one (or more) of the lighting targets. The video above shows a scene where Myra picks the best light to shine on a book I am holding.
6) Finally, the computed orientation for each light is sent wirelessly to the corresponding robot light.
Steps 1~6 are repeated about 3 times a second.
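To make steps 4 and 5 a little more concrete, here is a rough Java sketch of that logic. The Vec3, Person and MyraLight classes are simplified stand-ins for illustration (in the real system the joint positions come from NiTE), and the target rules and the greedy nearest-light assignment are simplified versions of what's described above, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal stand-ins for the tracked data; the real system gets joint positions from NiTE. */
class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    double dist(Vec3 o) { return Math.sqrt((x-o.x)*(x-o.x) + (y-o.y)*(y-o.y) + (z-o.z)*(z-o.z)); }
}

enum Activity { READING, WATCHING_TV, STANDING, WALKING, SLEEPING }

class Person {
    Activity activity;
    Vec3 head, leftHand, rightHand, predictedDestination;
}

class MyraLight {
    Vec3 position;       // found by the calibration step
    Vec3 currentTarget;  // where this light is pointed right now (null = off)
}

public class TargetPlanner {

    /** Step 4: turn each person's activity and posture into lighting targets. */
    static List<Vec3> lightingTargets(List<Person> people) {
        List<Vec3> targets = new ArrayList<>();
        for (Person p : people) {
            switch (p.activity) {
                case READING:
                    // Light the hand (and therefore the book) closest to the head.
                    targets.add(p.leftHand.dist(p.head) < p.rightHand.dist(p.head)
                            ? p.leftHand : p.rightHand);
                    break;
                case WALKING:
                    // Light the predicted destination; the real system also adds
                    // intermediate targets along the path.
                    targets.add(p.predictedDestination);
                    break;
                case SLEEPING:
                    // No targets: lights covering this person switch off.
                    break;
                default:
                    targets.add(p.head); // fall back to gentle light near the person
            }
        }
        return targets;
    }

    /** Step 5: greedily assign each target to the closest free light. */
    static void assign(List<Vec3> targets, List<MyraLight> lights) {
        List<MyraLight> free = new ArrayList<>(lights);
        for (Vec3 t : targets) {
            MyraLight best = null;
            for (MyraLight l : free)
                if (best == null || l.position.dist(t) < best.position.dist(t)) best = l;
            if (best != null) { best.currentTarget = t; free.remove(best); }
        }
        for (MyraLight idle : free) idle.currentTarget = null; // unused lights go dark
    }
}
```

A greedy match like this can jitter between frames; in practice you would also want to bias each light towards its previous target so the array doesn't swing around at 3Hz.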
(4) Calibration
When the system is first set up, Myra doesn't know where each of the lights is with respect to the sensor.
This is essential information, because the system needs to know which values of (θ, φ) will shine light on a particular position in the sensor's coordinate system.
I'm not going to go into details here, but I have developed a fully automatic calibration system (as shown in the video); I will explain how this works in a future post.
Myra calibrating itself |
Essentially, the system points the light in random directions and sees where the centers of the light spots lie.
It then uses simulated annealing to find a best-fit estimate of the light's orientation.
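As a rough illustration of what that fitting step could look like, here is a self-contained Java sketch. The pose parameterization (position plus pan/tilt offsets), the cost function, and the annealing schedule are all illustrative assumptions; the real calibration will be covered properly in the future post.

```java
import java.util.List;
import java.util.Random;

/**
 * Illustrative calibration sketch: given pairs of commanded servo angles and
 * the 3D spot centers observed by the RGBD sensor, estimate a light's pose by
 * simulated annealing.
 */
public class LightCalibrator {

    /** One calibration sample: the commanded angles and where the spot landed. */
    public static class Sample {
        public final double thetaDeg, phiDeg; // commanded pan/tilt
        public final double[] spot;           // observed spot center, sensor coordinates (meters)
        public Sample(double thetaDeg, double phiDeg, double[] spot) {
            this.thetaDeg = thetaDeg; this.phiDeg = phiDeg; this.spot = spot;
        }
    }

    /** Pose = [x, y, z, panOffsetDeg, tiltOffsetDeg] of the light in sensor coordinates. */
    public static double[] calibrate(List<Sample> samples, Random rng) {
        double[] pose = new double[] {0, 0, 2, 0, 0}; // rough initial guess
        double cost = cost(pose, samples);
        double temperature = 1.0;
        for (int i = 0; i < 20000; i++, temperature *= 0.9995) {
            double[] candidate = pose.clone();
            int j = rng.nextInt(candidate.length);
            candidate[j] += (rng.nextDouble() - 0.5) * (j < 3 ? 0.1 : 2.0); // meters vs degrees
            double c = cost(candidate, samples);
            // Accept improvements always, and worse moves with a temperature-dependent probability.
            if (c < cost || rng.nextDouble() < Math.exp((cost - c) / temperature)) {
                pose = candidate;
                cost = c;
            }
        }
        return pose;
    }

    /** Mean angular error between the ray the model predicts and the observed spot direction. */
    static double cost(double[] pose, List<Sample> samples) {
        double sum = 0;
        for (Sample s : samples) {
            double pan = Math.toRadians(s.thetaDeg + pose[3]);
            double tilt = Math.toRadians(s.phiDeg + pose[4]);
            // Predicted beam direction for the commanded angles (simple pan/tilt model).
            double[] ray = { Math.cos(tilt) * Math.cos(pan), Math.cos(tilt) * Math.sin(pan), Math.sin(tilt) };
            // Direction from the estimated light position to the observed spot center.
            double[] d = { s.spot[0] - pose[0], s.spot[1] - pose[1], s.spot[2] - pose[2] };
            double n = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            double dot = (ray[0]*d[0] + ray[1]*d[1] + ray[2]*d[2]) / Math.max(n, 1e-9);
            sum += Math.acos(Math.max(-1, Math.min(1, dot)));
        }
        return sum / samples.size();
    }
}
```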
So what can it do?
Here are some of the things that I have got Myra to do so far:
- automatically generate "skins" for different functions of your living room
The same room looks entirely different just from the lighting. |
- point of interest tracking: ensure that whatever posture you're in, your book will always be lit
Myra follows you, however absurd your favorite posture may be. |
- create a "path of light" to wherever you're going, so you can always find your way anywhere safely and without stress
You're never in the dark with Myra. |
- automatically turn off when you fall asleep
- a "targeted alarm clock" that wakes a person up by shining light at that person's face, without disturbing anyone else
Finally...
Myra is a project still very much in development.
I don't think the use cases I've described above fully exploit the potential of an autonomous robotic light array; the concept is still very new.
When I find the time, I will release all schematics and code as open source.
I would appreciate any comments or suggestions.
Thanks for reading.