The project has been actively developed for more than three years and is used in the classroom to teach vision processing. The system has become very stable, and several example projects using Phission are available; there is also an examples/ sub-directory within the Phission code base.
Phission began as a project for Robotics 91.549, a class taught by Holly Yanco at UMass Lowell during the Fall semester of 2003. Phission addresses the need for a fast vision processing architecture for use on real robots. The vision processing software in use before Phission was slow and poorly architected; Phission replaces that system with an architecture suited to a real-time environment.
Phission uses parallel processing and signaling mechanisms to allow pipelining and processing of live sources, and to support the design of a simple data-flow network. Phission provides three main objects common to data-flow networks. It is meant to run as a sub-system so that more important perception and control code can execute without being blocked by vision processing. It was developed specifically for this purpose because the previous vision system was inadequate and slow. Phission began as a semester project to fix the previous system's shortcomings and provide a concurrent, threaded model, but it developed into a thesis research project. In addition, other research being done within the University of Massachusetts Lowell Computer Science Robotics Lab showed a need for a system like Phission.
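The pipelined, threaded data-flow pattern described above can be sketched with Python's standard library. This is a generic illustration only, not Phission's actual API: the stage names (`source`, `filter_stage`, `sink`) and the doubling operation are hypothetical stand-ins for capture, image processing, and storage/display stages, each running in its own thread so no stage blocks the others.

```python
import queue
import threading

def source(out_q, frames):
    # Capture stage: push frames (here just integers) downstream.
    for f in frames:
        out_q.put(f)
    out_q.put(None)  # sentinel signaling end of stream

def filter_stage(in_q, out_q):
    # Processing stage: runs concurrently, so capture is never blocked.
    while True:
        f = in_q.get()
        if f is None:
            out_q.put(None)  # propagate end-of-stream
            break
        out_q.put(f * 2)  # hypothetical stand-in for an image operation

def sink(in_q, results):
    # Storage/display stage: collect processed frames.
    while True:
        f = in_q.get()
        if f is None:
            break
        results.append(f)

# Wire the three stages together with thread-safe queues.
q1, q2 = queue.Queue(), queue.Queue()
results = []
threads = [
    threading.Thread(target=source, args=(q1, range(5))),
    threading.Thread(target=filter_stage, args=(q1, q2)),
    threading.Thread(target=sink, args=(q2, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # prints [0, 2, 4, 6, 8]
```

The blocking `queue.Queue.get()` calls play the role of the signaling mechanism: each stage sleeps until data arrives, so a slow processing stage back-pressures the pipeline instead of consuming CPU in a busy loop.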
The ideas and structures put into Phission were developed from initial research into sound localization, in which a series of threads with specific purposes captured, processed, and stored sound data. During a robotics course with Dr. Holly Yanco, a look at how the existing vision system worked showed that it could be made much better with as little as a month of design and coding. Because the software that captured, processed, and stored the sound data took a fairly generic approach, those ideas were applied to the initial stages of the Phission project.
To ensure I was not duplicating others' work, research went into finding a suitable alternative to Phission that offered the same process-scope threaded functionality for vision processing of a continuous media stream. In addition, Python language support, open source licensing, and code availability were essential requirements. Most toolkits and resources that came up during this research were not meant for robotic applications but rather for other purposes such as video DJing, streaming media, and multimedia. Of these toolkits, a few had similar architectures, and even fewer were limited to process scope (i.e., not using System V or POSIX shared memory for data sharing). However, most of the research behind these systems is more than five years old, and their code has not been actively maintained for just as long.
The Python- and Java-specific APIs are not yet included in this document, so one must read the examples or the language-specific files installed in the aforementioned directories. Subscribing to the phission-help mailing list and emailing it with questions could prove useful for development in these languages. If the SWIG-generated code works correctly, one should be able to overload any of the Phission classes in Python and Java. This has not been tested yet, but examples will be provided when time permits.
Copyright (C) 2002 - 2007
Philip D.S. Thoren ( firstname.lastname@example.org )
University Of Massachusetts at Lowell