Friday, 29 August 2014

Hartlepool Aspire Trust Document Management System (HDMS)

Last year our school converted to an academy. To help with setting up the administrative side of the new organisation, I put together an electronic document management system to hold our management documents such as policies and procedures.
The system I set up was a modified version of OpenDocMan. This has worked pretty well for recording documents and retrieving the issued version, but now that we are updating some of the documents and establishing another part of the organisation, we are finding some limitations. The most significant problem is that a document does not appear publicly while it is waiting for approval - I want the latest issued version to remain available even while we are reviewing and approving the new one.
I decided that rather than modifying my version of OpenDocMan, it is probably better to write an alternative simple system based on an established software framework.
The new Hartlepool Aspire Trust Document Management System (HDMS) is based on the CakePHP framework, which makes interfacing with the database and dealing with HTTP requests very simple, and automatically generates the code for basic database record creation, deletion etc., so I only had to write the 'business' logic.
The concepts for the new system and workflow are shown in these slides, and there is a demo installation here.
Monday, 13 January 2014
Breathing Detection with Kinect - A working Prototype Seizure Detector!

I now have a working prototype that monitors breathing and can alarm if the breathing rate is abnormally low. It sends data to our 'bentv' monitors (image right), and has a web interface so I can see what it is doing (image below). It is on soak test now.....
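The alarm logic itself is simple. A minimal sketch of the idea - the threshold value and function names here are illustrative, not the actual OpenSeizureDetector code:

def breathing_rate_bpm(peak_times):
    """Estimate breaths per minute from detected breath peak times (in seconds)."""
    if len(peak_times) < 2:
        return 0.0
    # Average interval between successive breath peaks.
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

ALARM_THRESHOLD_BPM = 8  # illustrative value only - not the real setting

def breathing_alarm(peak_times):
    """True if the measured breathing rate is abnormally low."""
    return breathing_rate_bpm(peak_times) < ALARM_THRESHOLD_BPM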
Details at http://openseizuredetector.org.uk.
Sunday, 5 January 2014
Breathing Detection using Kinect and OpenCV - Part 2 - Peak detection
A few days ago I published a post about how I am using a Microsoft Kinect depth camera and the OpenCV image processing library to identify a test subject from a background, and analyse the series of images from the camera to detect small movements.
The next stage is to calculate the brightness of the test subject at each frame, and turn that into a time series so we can see how it changes with time, and analyse it to detect specific events.
We can use the OpenCV 'mean' function to work out the average brightness of the test image easily, then just append it to the end of an array and trim the first value off the start to keep the length the same.
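A minimal sketch of that step - the variable names are mine, and it assumes 'frame' is the processed 8-bit image:

import cv2

WINDOW_LEN = 100  # number of samples kept in the rolling time series

def update_series(series, frame):
    """Append the frame's mean brightness, keeping a fixed-length window."""
    series.append(cv2.mean(frame)[0])  # cv2.mean returns a per-channel tuple
    if len(series) > WINDOW_LEN:
        series.pop(0)  # trim the oldest value off the start
    return series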
The resulting image and time series are shown below:

The image here shows that we can extract the subject from the background quite accurately (this is Benjamin's body and legs as he lies on the floor). The shading is the movement relative to the average position.
The resulting time series is shown here - the measured data is the blue spiky line, and the red line is the smoothed version (I know there is a half second offset between the two...).
The red dots are peaks detected using a very simple peak searching algorithm.
The chart clearly shows a 'fidget' being detected as a large peak. There is a breathing event at about 8 seconds that has been detected too.
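For reference, a sketch of the sort of very simple peak search I mean - a sample that is higher than its neighbours and above a threshold (the real code may differ in detail):

def find_peaks(series, threshold):
    """Return the indices of samples that are local maxima above a threshold."""
    peaks = []
    for i in range(1, len(series) - 1):
        if series[i] > threshold and series[i - 1] < series[i] >= series[i + 1]:
            peaks.append(i)
    return peaks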
So, the detection system is looking promising. I had better breathing detection when I was testing it on myself, so I think I will have to change the position of the camera a bit to improve sensitivity.
I have now set up a simple python based web server to allow other applications to connect to this one to request the data.
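A minimal sketch of the idea using the python 3 standard library - the port number and JSON field names are illustrative, and the real server is in the OpenSeizureDetector code:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

latest_data = {"rate_bpm": 0.0, "status": "unknown"}  # updated by the analysis loop

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the latest analysis results as JSON to any client that asks.
        body = json.dumps(latest_data).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), DataHandler).serve_forever()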
We are getting there. The outstanding issues are:
- Memory Leak - after the application has run for 30 min the computer gets very slow and eventually crashes - I suspect a memory leak somewhere - this will have to be fixed!
- Optimum camera position - I think I can get better breathing detection sensitivity by altering the camera position - will have to experiment a bit.
- Add some code to identify whether we are looking at Benjamin or just noise - at the moment I analyse the largest bright subject in the image and assume that is Benjamin - I should probably set a minimum size limit so it gives up if it cannot see Benjamin.
- Summarise what we are seeing automatically - "normal breathing", "can't see Benjamin", "abnormal breathing", "fidgeting" etc.
- Modify our monitors that we use to keep an eye on Benjamin to talk to the new web server and display the status messages and raise an alarm if necessary.
Wednesday, 1 January 2014
Breathing Detection using Kinect and OpenCV - Part 1 - Image Processing
I have had a go at detecting breathing using an Xbox Kinect depth sensor and the OpenCV image processing library.
I have seen a research paper that did breathing detection, but it relied on fitting the output of the Kinect to a skeleton model to identify the chest area to monitor. I would like a less calculation-intensive route, so I am trying to use just image processing.
To detect the small movements of the chest during breathing, I am doing the following:
- Start with a background depth image of the empty room.
- Grab a depth image from the kinect.
- Subtract the background so we have only the test subject.
- Subtract a rolling average background image, and amplify the resulting small differences - this makes the image very sensitive to small movements.

[Video: the resulting image brightness changes due to chest movements from breathing.]
We can calculate the average brightness of the test subject image - the value clearly changes due to breathing movements. The job for tomorrow night is to do some statistics to work out the breathing rate from this data.
The source code of the python script that does this is the 'benfinder' program in the OpenSeizureDetector archive.
Tuesday, 31 December 2013
A Microsoft Kinect Based Seizure Detector?
Background
I have been trying to develop an epileptic seizure detector for our son on and off for the last year. The difficulty is that it has to be non-contact: he is autistic and will not tolerate any contact sensors, and would not lie on a sensor mat etc. I had a go at a video based version previously, but struggled with a lot of noise, so put it on hold.
Connecting Kinect
When I saw a Kinect sensor in a second hand gadgets shop on Sunday, I had to buy it and see what it could do. The first pleasant surprise was that it came with a power supply and had a standard USB plug on it (I thought I would have to solder a USB plug onto it). I plugged it into my laptop (Xubuntu 13.10), and it was immediately detected as a Video4Linux webcam - a very good start.
System Software
I installed the libfreenect library and its python bindings (I built it from source, but I don't think I had to - there is an ubuntu package, python-freenect, which would have done it). I deviated from the advice in the book here: the author suggested using the OpenNI library, but this didn't seem to work - it looks like they no longer support Microsoft Kinect sensors (I suspect it is a licensing issue...). Also, the particularly clever software that does skeleton detection (NiTE) is not open source, so you have to install it as a binary package, which I do not like. It seems that the way to get OpenNI working with the Kinect is to use a wrapper around libfreenect anyway, so I decided to stick with libfreenect.
The only odd thing is whether you need to be root to use the kinect or not - sometimes it seems I need to access it as root, and after that it works as a normal user. I will think about this later - it must be something to do with udev rules, so it is not a big deal at the moment.
BenFinder Software
To see whether the Kinect looks promising as a seizure detector, I wrote a small application based on the framework in Joseph Howse's book. I had to modify it to work with libfreenect - basically it is a custom frame grabber.
The code does the following (a sketch of the core grabbing loop appears after this list):
- Display video streams from the kinect, from either the video camera or the infrared depth camera - works! (switch between the two with the 'd' key).
- Save an image to disk ('s' key).
- Subtract a background image from the current image, and display the resulting image ('b' key).
- Record a video (tab key).
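The heart of the frame grabber is only a few lines. A sketch of how depth frames can be pulled from libfreenect and background-subtracted - the real benfinder code is structured around Howse's framework, so this is just the idea:

import cv2
import numpy as np
import freenect  # python bindings for libfreenect

def get_depth_frame():
    """Grab one depth frame from the kinect as an 8-bit image."""
    depth, _timestamp = freenect.sync_get_depth()
    # Scale the 11-bit depth values down to 8 bits for processing/display.
    return (depth >> 3).astype(np.uint8)

background = get_depth_frame()  # captured while the room is empty
while True:
    frame = get_depth_frame()
    subject = cv2.absdiff(background, frame)  # background subtraction
    cv2.imshow("subject", subject)
    if cv2.waitKey(10) == 27:  # Esc to quit
        break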
The code is in my Open Seizure Detector github repository.
The idea is that it should be able to distinguish Benjamin from the background reliably, so we can then start to analyse his image to see if his movements seem odd (those who know Benjamin will know that 'odd' is a bit difficult to define for him!).
Output
I am very pleased with the output - it looks like it could work - a few images:
[Image: Output from the Kinect video camera (note the clutter to make detection difficult!)]

[Image: Kinect depth camera output - note the black hole created by the open door.]

[Image: Depth camera output with the background image subtracted - note that the subject stands out quite clearly.]

[Image: Example of me trying to do Benjamin-like behaviours to see if I can be detected.]
Conclusion & What Next
Background subtraction from the depth camera makes the test subject stand out nice and clearly - should be quite easy to detect him computationally.
The next stage is to see if the depth camera is sensitive enough to detect breathing (when lying still) - I will try subtracting each image from the average of the last 30 or so, and amplifying the differences to see if the movement can be seen.
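OpenCV's accumulateWeighted function should make the rolling-average part straightforward. A sketch of what I have in mind - the gain and averaging factor are guesses that will need tuning:

import cv2
import numpy as np

GAIN = 20  # amplification of the small frame-to-average differences - needs tuning
avg = None

def movement_image(frame):
    """Subtract a rolling average of recent frames and amplify the residual."""
    global avg
    frame32 = frame.astype(np.float32)
    if avg is None:
        avg = frame32.copy()
    # Fold the new frame into the rolling average (roughly the last ~30 frames).
    cv2.accumulateWeighted(frame32, avg, 1.0 / 30)
    diff = cv2.absdiff(frame32, avg)
    return np.clip(diff * GAIN, 0, 255).astype(np.uint8)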
If that fails, I will look at Skeltrack to fit a body model to the images and analyse movement of limbs (but this will be much more computationally costly).
Then I will have to look at infrastructure to deploy this - I will either need a powerful computer in Benjamin's room to interface with the Kinect and do the analysis, or maybe use a Raspberry Pi to interface with the kinect and serve the depth camera output as a video stream.
Looking promising - will add another post with the breathing analysis in the new year...
Thursday, 5 December 2013
Using a Kobo Ebook Reader as a Gmail Notifier

I was in WH Smith the other day and realised that they were selling Kobo Mini e-book readers for a very good price (under £30). When you think about it, the reader is a small battery powered computer with a wifi interface and a 5" e-ink touch screen. This sounds like just the thing to hang on the wall and use to display the number of unread emails.
Fortunately some clever people have worked out how to modify the software on the device - it runs linux, and the manufacturers have published the open source part of the device firmware (https://github.com/kobolabs/Kobo-Reader). I haven't done it myself, but someone else has compiled python to run on the device, using the pygame library to handle writing to the screen (http://www.mobileread.com/forums/showthread.php?t=219173). Note that I needed this later build of python to run on my new kobo mini, as some of the other builds that are available crashed without any error messages - I think this is to do with the version of some of the C libraries installed on the device.
Finally someone called Kevin Short wrote a programme to use a kobo as a weather monitor, which is very similar to what I am trying to do and was a very useful template to start from - thank you, Kevin! (http://www.mobileread.com/forums/showthread.php?t=194376).
The steps I followed to get this working were:
- Enabled telnet and ftp access to the kobo (http://wiki.mobileread.com/wiki/Kobo_Touch_Hacking)
- Put python in the 'user' folder of the device (/mnt/onboard/.python).
- Extended the LD_LIBRARY_PATH in /etc/profile to point to the new python/lib and pygame library directories.
- Added 'source /etc/profile' into /etc/init.d/rcS so that we have access to the python libraries during boot-up.
- Prevented the normal kobo software from starting by commenting out the lines that start the 'hindenburg' and 'nickel' applications in /etc/init.d/rcS.
- Killed the boot-up animation screen by adding the following into rcS:
  killall on-animator.sh
  sleep 1
- Added my own boot-up splash screen by adding the following to rcS:
  cat /etc/images/SandieMail.raw | /usr/local/Kobo/pickel showpic
- Enabled wifi networking on boot-up by referencing a new script /etc/network/wifiup.sh in rcS, which contains:
  insmod /drivers/ntx508/wifi/sdio_wifi_pwr.ko
  insmod /drivers/ntx508/wifi/dhd.ko
  sleep 2
  ifconfig eth0 up
  wlarm_le -i eth0 up
  wpa_supplicant -s -i eth0 -c /etc/wpa_supplicant/wpa_supplicant.conf -C /var/run/wpa_supplicant -B
  sleep 2
  udhcpc -S -i eth0 -s /etc/udhcpc.d/default.script -t15 -T10 -A3 -f -q
- Started my new gmail notifier program using the following in rcS:
  cd /mnt/onboard/.apps/koboGmail
  /usr/bin/python gmail.py > /mnt/onboard/gmail.log 2>&1 &
The notifier program itself then loops through the following steps (a sketch of the feed-fetching step follows this list):
- Get the battery status, and create an appropriate icon to show battery state.
- Get the wifi link status and create an appropriate icon to show the link state.
- Get the 'atom' feed of the user's gmail account using the url, username and password stored in a configuration file.
- Draw the screen image showing the number of unread emails, and the sender and subject of the first 10 unread mails, and render the battery and wifi icons onto it.
- Update the kobo screen with the new image.
- Wait a while (5 seconds at the moment for testing, but will make it longer in the future - 5 min would probably be plenty).
- Repeat indefinitely.
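The feed-fetching step is the only mildly fiddly part. A minimal sketch of how the unread count and senders can be pulled from the gmail atom feed - written for the python 3 standard library, so the build of python on the kobo may need the older urllib2 equivalents:

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://mail.google.com/mail/feed/atom"
NS = "{http://purl.org/atom/ns#}"  # gmail's feed uses the old atom 0.3 namespace

def fetch_unread(username, password):
    """Return the unread count and (sender, subject) pairs from the gmail feed."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, FEED_URL, username, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    root = ET.fromstring(opener.open(FEED_URL).read())
    count = int(root.find(NS + "fullcount").text)
    mails = [(entry.find(NS + "author/" + NS + "name").text,
              entry.find(NS + "title").text)
             for entry in root.findall(NS + "entry")]
    return count, mails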
The source code is in my github repository.
The resulting display is pretty basic, but functional as shown in the picture.
Things to Do
There are a few improvements I would like to make to this:
- Make it less power intensive by switching off wifi when it is not needed (it can flatten its battery in about 12 hours so will need to be plugged into a mains adapter at the moment).
- Make it respond to the power switch - you can switch it off by holding the power switch across for about 15 seconds, but it does not shut down nicely - there is no 'bye' display on the screen or anything like that - it just freezes.
- Get it working as a usb mass storage device again - it does usb networking at the moment instead, so you have to use ftp to update the software or log in and use vi to edit the configuration files - not user friendly.
- Make it respond to the touch screen - I will need to interpret the data that appears in /dev/input for this. The python library evdev should help with interpreting the data, but it uses native c code so I need a cross compiler environment for the kobo to use that, which I have not set up yet. Might be as easy to code it myself as I will only be doing simple things.
- Get it to flash its LED to show that there are unread emails - might have to modify the hardware to add a bigger LED that faces the front rather than top too.
- Documentation - if anyone wants to get this working themselves, they will need to put some effort in, because the above is a long way off being a tutorial. It should be possible to make a kobo firmware update file that would install it if people are interested in trying though.
Tuesday, 22 October 2013
Raspberry Pi and Arduino
I am putting together a data logger for the biogas generator.
I would like it networked so I don't have to go out in the cold, so will use a raspberry pi. To make interfacing the sensors easy I will connect the Pi to an Arduino microcontroller. This is a bit over the top as I should be able to do everything I need using the Pi's GPIO pins, but Arduino has a lot of libraries to save me programming....
To get it working I installed the following packages using:
apt-get install gcc-avr avr-libc avrdude arduino-core arduino-mk
To test it, copy the Blink.ino sketch from /usr/share/arduino/examples/01.Basics/Blink/ to a user directory.
Then create a Makefile in the same directory that has the following contents:
ARDUINO_DIR = /usr/share/arduino
TARGET = Blink
ARDUINO_LIBS =
BOARD_TAG = uno
ARDUINO_PORT = /dev/ttyACM0
include /usr/share/arduino/Arduino.mk
Then just do 'make' to compile it, then upload to the arduino (in this case a Uno) using:
avrdude -F -V -p ATMEGA328P -c arduino -P/dev/ttyACM0 -U build-cli/Blink.hex
The LED on the Arduino Uno starts to blink - success!
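With the toolchain working, the next step for the data logger is to read whatever the Arduino sends back over USB serial. A sketch of the Pi side using pyserial - it assumes the Arduino sketch prints one sensor reading per line, which is not what Blink does, so this is just to show the interface:

import serial  # pyserial - apt-get install python-serial

# The same device node that avrdude used for programming.
port = serial.Serial("/dev/ttyACM0", 9600, timeout=5)

while True:
    line = port.readline().strip()
    if line:
        print("sensor reading:", line)  # replace with logging to file/network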