Sunday, 28 September 2014

Charity Document Management System

After a bit more development of the Document Management System for our Academy Charitable Trust (HDMS), I now have something working which I think is usable. There may well be some changes once we use it in anger for a while and find some 'features' annoying!


Background

HDMS is a Document Management System that has been developed for Hartlepool Aspire Trust (Catcote Academy).
It has been developed because the Trust is expected to maintain many policies to ensure compliance with statutory regulations; these policies are implemented within the Trust using procedures, which give detailed instructions, and forms, which record information.
It is important that the latest versions of the Policies, Procedures and Forms are available to staff and key stakeholders, and that changes between versions can be tracked and communicated, so that stakeholders know what has changed when a new document is issued.

User Interface

HDMS has been developed to store the Trust's documents in a single repository (a web server) and present the latest version of documents to interested parties. Users are initially presented with a graphical summary of the document structure.
The user clicks on parts of the graphical summary to search for specific types of documents (such as Financial Procedures or Human Resources Policies). This gives a list of documents, showing the latest revision number and date of issue, with clickable icons to download either the PDF version or the native version of the file.
Authorised users have options to create new revisions, or edit existing draft documents.
Draft versions of documents are not publicly visible, but can be viewed by authorised users. Approval and issue of documents is managed by the draft document being sent electronically to reviewers/approvers.
The document is issued and becomes the latest version once all the reviewers/approvers have approved the document.
The workflow for creating, revising and approving a document is shown in a set of slides here.
The system stores both 'native' (e.g. MS Word) documents and PDF documents. By default the PDF version is delivered to the public, as it cannot be modified accidentally. The system can also store 'extra' files, which may be the source files for drawings or tables of data used in the document - this is useful for future updates, because the author can obtain all the data used to produce the original document.
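
The rule for when a draft becomes the issued version is the heart of the workflow. Here is a minimal sketch of it in Python (the names are illustrative only - the real system is built on CakePHP, so the actual implementation differs):

    # Hypothetical sketch of the HDMS approval rule: a draft revision is
    # issued (and becomes the publicly visible version) only once every
    # reviewer/approver on its list has signed off.
    from dataclasses import dataclass, field

    @dataclass
    class Revision:
        number: int
        status: str = "draft"                          # 'draft' or 'issued'
        approvals: dict = field(default_factory=dict)  # approver name -> approved?

    def new_revision(number, approvers):
        """Create a draft revision awaiting sign-off from each named approver."""
        return Revision(number, approvals={name: False for name in approvers})

    def record_approval(rev, approver):
        """Record one sign-off; issue the revision once everyone has approved."""
        rev.approvals[approver] = True
        if all(rev.approvals.values()):
            rev.status = "issued"                      # now the latest public version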

Live Version

The live version of the system is running at http://catcotegb.co.uk/hdms.
The software is quite general, so may be of use to other small and medium-sized organisations who wish to manage their documentation in a systematic way. There is a demonstration version of the system available for testing at http://catcotegb.co.uk/hdms_demo (log in as 'user1' with password 'test'). The source code is available on my GitHub repository.
Please let me know if you are interested in using this for your organisation and I will help explain how to set it up, because my installation instructions may not be complete!

Friday, 29 August 2014

Academy Charitable Trust Document Management System

Last year our school converted to an academy. To help with setting up the administrative side of the new organisation, I created an electronic document management system to hold our management documents, such as policies and procedures.

The system I set up was a modified version of OpenDocMan. This has worked pretty well from the point of view of recording the documents and allowing us to retrieve the issued versions, but now that we are updating some of the documents and establishing another part of the organisation, we are finding some limitations. The most significant problem is that a document does not appear publicly while it is waiting for approval - I want the latest issued document to always be available, even while we are reviewing and approving the new version.

I decided that rather than modifying my version of OpenDocMan, it would probably be better to write an alternative simple system based on an established software framework.

The new Hartlepool Aspire Trust Document Management System (HDMS) is based on the CakePHP framework, which makes interfacing with the database and dealing with HTTP requests very simple, and it automatically generated the code for basic database record creation, deletion etc., so I only had to write the 'business' logic.

The concepts for the new system and workflow are shown in these slides, and there is a demo installation here.

Monday, 13 January 2014

Breathing Detection with Kinect - A working Prototype Seizure Detector!

The seizure detector project has come a long way since I started using the Kinect.
I now have a working prototype that monitors breathing and can raise an alarm if the breathing rate is abnormally low. It sends data to our 'bentv' monitors (image right), and has a web interface so I can see what it is doing (image below). It is on soak test now.....

Details at http://openseizuredetector.org.uk.


Sunday, 5 January 2014

Breathing Detection using Kinect and OpenCV - Part 2 - Peak detection

A few days ago I published a post about how I am using a Microsoft Kinect depth camera and the OpenCV image processing library to identify a test subject from a background, and analyse the series of images from the camera to detect small movements.

The next stage is to calculate the brightness of the test subject at each frame, and turn that into a time series so we can see how it changes with time, and analyse it to detect specific events.

We can use the OpenCV 'mean' function to work out the average brightness of the test image easily, then append it to the end of an array and trim the first value off the start to keep the length the same.
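A minimal sketch of this step (assuming the masked subject image is held in a NumPy array - the variable names are illustrative, not the actual benfinder code):

    import cv2

    SERIES_LEN = 100                 # number of samples kept in the time series
    series = [0.0] * SERIES_LEN

    def update_series(subject_img):
        """Append the mean brightness of this frame to the series and
        drop the oldest value, so the series length stays constant."""
        brightness = cv2.mean(subject_img)[0]   # first channel of the mean
        series.append(brightness)
        del series[0]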
The resulting image and time series are shown below:

The image here shows that we can extract the subject from the background quite accurately (this is Benjamin's body and legs as he lies on the floor). The shading is the movement relative to the average position.

The resulting time series is shown here - the measured data is the blue spiky line, and the red line is the smoothed version (I know I have a half-second offset between the two...).

The red dots are peaks detected using a very simple peak searching algorithm.
The chart clearly shows a 'fidget' being detected as a large peak.  There is a breathing event at about 8 seconds that has been detected too.
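
The peak search really is simple - something along these lines (a sketch of the idea, not the exact code): a sample counts as a peak if it is the largest value in a small window around itself and stands clear of the series average by some threshold.

    def find_peaks(series, half_window=5, threshold=2.0):
        """Very simple peak detection: a sample is a peak if it is the
        maximum of its local window and exceeds the series mean by
        'threshold' brightness units (both parameters are illustrative)."""
        mean_val = sum(series) / len(series)
        peaks = []
        for i in range(half_window, len(series) - half_window):
            window = series[i - half_window:i + half_window + 1]
            if series[i] == max(window) and series[i] > mean_val + threshold:
                peaks.append(i)
        return peaks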

So, the detection system is looking promising - I had better breathing detection when I was testing it on myself, so I think I will have to change the position of the camera a bit to improve sensitivity.

I have now set up a simple python based web server to allow other applications to connect to this one to request the data.
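
The server is just a thin wrapper around the Python standard library. A sketch of the idea (written here for Python 3, with an illustrative data format - the real code is in the repository):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Updated by the analysis loop; served to any client that asks.
    latest_data = {"breathing_rate_bpm": 0, "status": "unknown"}

    class DataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            """Return the latest breathing data as JSON."""
            body = json.dumps(latest_data).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), DataHandler).serve_forever()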

We are getting there.  The outstanding issues are:

  • Memory Leak - after the application has run for 30 min the computer gets very slow and eventually crashes - I suspect a memory leak somewhere - this will have to be fixed!
  • Optimum camera position - I think I can get better breathing detection sensitivity by altering the camera position - will have to experiment a bit.
  • Add some code to identify whether we are looking at Benjamin or just noise - at the moment I analyse the largest bright subject in the image, and assume that is Benjamin - I should probably set a minimum size limit so it gives up if it cannot see Benjamin.
  • Summarise what we are seeing automatically - "normal breathing", "can't see Benjamin", "abnormal breathing", "fidgeting" etc.
  • Modify our monitors that we use to keep an eye on Benjamin to talk to the new web server and display the status messages and raise an alarm if necessary.
The code is available here.
Wednesday, 1 January 2014

Breathing Detection using Kinect and OpenCV - Part 1 - Image Processing

I have had a go at detecting breathing using an Xbox Kinect depth sensor and the OpenCV image processing library.
I have seen a research paper that did breathing detection, but it relied on fitting the output of the Kinect to a skeleton model to identify the chest area to monitor. I would like to do it via a less computationally intensive route, so am trying to use just image processing.

To detect the small movements of the chest during breathing, I am doing the following:

• Start with a background depth image of the empty room.
• Grab a depth image from the Kinect.
• Subtract the background, so we have only the test subject.
• Subtract a rolling average background image, and amplify the resulting small differences - this makes the image very sensitive to small movements.

The resulting video shows the image brightness changing due to chest movements from breathing.
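
In outline, the processing loop looks something like the sketch below (using the python freenect bindings and OpenCV - the gain and threshold values are illustrative; the real code is the 'benfinder' program linked at the end of this post):

    import cv2
    import freenect
    import numpy as np

    GAIN = 10                  # amplification applied to small depth changes

    background = freenect.sync_get_depth()[0].astype(np.float32)   # empty room
    rolling_avg = background.copy()

    while True:
        depth = freenect.sync_get_depth()[0].astype(np.float32)
        # Keep only pixels that differ clearly from the empty-room background,
        # i.e. the test subject (threshold in raw depth units).
        mask = (cv2.absdiff(depth, background) > 100).astype(np.float32)
        # Update the rolling average, then amplify movement about it.
        cv2.accumulateWeighted(depth, rolling_avg, 0.05)
        movement = ((depth - rolling_avg) * GAIN + 128.0) * mask
        cv2.imshow("movement", np.clip(movement, 0, 255).astype(np.uint8))
        if cv2.waitKey(10) == 27:       # Esc to quit
            break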

We can calculate the average brightness of the test subject image - the value clearly changes due to breathing movements - job for tomorrow night is to do some statistics to work out the breathing rate from this data.

The source code of the python script that does this is the 'benfinder' program in the OpenSeizureDetector archive.

Tuesday, 31 December 2013

A Microsoft Kinect Based Seizure Detector?

Background

I have been trying to develop an epileptic seizure detector for our son on-and-off for the last year. The difficulty is that it has to be non-contact, as he is autistic and will not tolerate any contact sensors, and would not lie on a sensor mat etc.
I had a go at a video based version previously, but struggled with a lot of noise, so put it on hold.

At the weekend I read the book "OpenCV Computer Vision with Python" by Joseph Howse - this was a really good summary of how to combine OpenCV video processing into an application - dealing with separating the user interface from the video processing etc. Most significantly, he pointed out that it is now quite easy to use a Microsoft Kinect sensor with OpenCV (it looked rather complicated earlier in the year when I looked), so I thought I should give it a go.

Connecting Kinect

When I saw a Kinect sensor in a second-hand gadget shop on Sunday, I had to buy it and see what it can do.

The first pleasant surprise was that it came with a power supply and had a standard USB plug on it (I thought I would have to solder a USB plug onto it) - I plugged it into my laptop (Xubuntu 13.10), and it was immediately detected as a Video4Linux webcam - a very good start.

System Software

I installed the libfreenect library and its python bindings (I built it from source, but I don't think I had to - there is an Ubuntu package, python-freenect, which would have done it).

I deviated from the advice in the book here, because the author suggested using the OpenNI library, but this didn't seem to work - it looks like they no longer support Microsoft Kinect sensors (I suspect it is a licensing issue...). Also, the particularly clever software to do skeleton detection (NiTE) is not open source, so you have to install it as a binary package, which I do not like. It seems that the way to get OpenNI working with the Kinect is to use a wrapper around libfreenect, so I decided to stick with libfreenect.

The only odd thing is whether you need to be root to use the Kinect or not - sometimes it seems I need to access it as root, then after that it works as a normal user - I will think about this later - it must be something to do with udev rules, so not a big deal at the moment....

BenFinder Software

To see whether the Kinect looks promising to use as a seizure detector, I wrote a small application based on the framework in Joseph Howse's book. I had to modify it to work with libfreenect - basically it is a custom frame grabber.
The code does the following:
• Display video streams from the Kinect, from either the video camera or the infrared depth camera - works! (switch between the two with the 'd' key).
• Save an image to disk ('s' key).
• Subtract a background image from the current image, and display the resulting image ('b' key).
• Record a video (tab key).

The idea is that it should be able to distinguish Benjamin from the background reliably, so we can then start to analyse his image to see if his movements seem odd (those who know Benjamin will know that 'odd' is a bit difficult to define for him!).
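
For reference, grabbing frames from the Kinect through the python freenect bindings only takes a few lines - a minimal sketch of the frame-grabber idea (not the actual BenFinder code, which wraps this in Howse's framework):

    import cv2
    import freenect
    import numpy as np

    use_depth = True                    # toggled with the 'd' key

    while True:
        if use_depth:
            frame, _ = freenect.sync_get_depth()     # 11-bit depth image
            frame = (frame >> 3).astype(np.uint8)    # scale to 8 bits for display
        else:
            frame, _ = freenect.sync_get_video()     # RGB camera image
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        cv2.imshow("kinect", frame)
        key = cv2.waitKey(10)
        if key == ord('d'):                          # switch cameras
            use_depth = not use_depth
        elif key == 27:                              # Esc to quit
            break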

Output

I am very pleased with the output - it looks like it could work - a few images:

Output from Kinect Video Camera (note the clutter to make detection difficult!)
Kinect Depth Camera Output - note the black hole created by the open door.

Depth Camera Output with background image subtracted - note that the subject stands out quite clearly.
Example of me trying to do Benjamin-like behaviours to see if I can be detected.

Conclusion & What Next

Background subtraction from the depth camera makes the test subject stand out nice and clearly - it should be quite easy to detect him computationally.
The next stage is to see if the depth camera is sensitive enough to detect breathing (when lying still) - I will try subtracting each image from the average of the last 30 or so, and amplifying the differences to see if the breathing movement can be seen.
If that fails, I will look at Skeltrack to fit a body model to the images and analyse the movement of limbs (but this will be much more computationally costly).
Then I will have to look at the infrastructure to deploy this - I will either need a powerful computer in Benjamin's room to interface with the Kinect and do the analysis, or maybe use a Raspberry Pi to interface with the Kinect and serve the depth camera output as a video stream.

Looking promising - I will add another post with the breathing analysis in the new year...

Thursday, 5 December 2013

Using a Kobo Ebook Reader as a Gmail Notifier

A certain person that I know well does not read her emails very often and sees it as a chore to switch on the computer to see if she has any. And no, I can't interest her in a smartphone that will do email for her.... This post is about making a simple device to hang on the wall like a small picture next to the calendar, so she can always see if she has emails and know whether it is worth putting the computer on.

I was in WH Smith the other day and realised that they were selling Kobo Mini e-book readers for a very good price (<£30). When you think about it, the reader is a small battery-powered computer with a wifi interface, a 5" e-ink screen and a touch screen interface. This sounds like just the thing to hang on the wall and use to display the number of unread emails.

Fortunately some clever people have worked out how to modify the software on the device - it runs Linux, and the manufacturers have published the open source part of the device firmware (https://github.com/kobolabs/Kobo-Reader). I haven't done it myself, but someone else has compiled python to run on the device, using the pygame library to handle writing to the screen (http://www.mobileread.com/forums/showthread.php?t=219173). Note that I needed this later build of python to run on my new Kobo Mini, as some of the other builds that are available crashed without any error messages - I think this is to do with the version of some of the C libraries installed on the device.
Finally, someone called Kevin Short wrote a program to use a Kobo as a weather monitor, which is very similar to what I am trying to do and was a very useful template to start from - thank you, Kevin! (http://www.mobileread.com/forums/showthread.php?t=194376).

The steps I followed to get this working were:

• Enabled telnet and ftp access to the Kobo (http://wiki.mobileread.com/wiki/Kobo_Touch_Hacking).
• Put python in the 'user' folder of the device (/mnt/onboard/.python).
• Extended the LD_LIBRARY_PATH in /etc/profile to point to the new python/lib and pygame library directories.
• Added 'source /etc/profile' into /etc/init.d/rcS so that we have access to the python libraries during boot-up.
• Prevented the normal Kobo software from starting by commenting out the lines that start the 'hindenburg' and 'nickel' applications in /etc/init.d/rcS.
• Killed the boot-up animation screen by adding the following into rcS:
        killall on-animator.sh
        sleep 1
• Added my own boot-up splash screen by adding the following to rcS:
        cat /etc/images/SandieMail.raw | /usr/local/Kobo/pickel showpic
• Enabled wifi networking on boot-up by referencing a new script /etc/network/wifiup.sh in rcS, which contains:
        insmod /drivers/ntx508/wifi/sdio_wifi_pwr.ko
        insmod /drivers/ntx508/wifi/dhd.ko
        sleep 2
        ifconfig eth0 up
        wlarm_le -i eth0 up
        wpa_supplicant -s -i eth0 -c /etc/wpa_supplicant/wpa_supplicant.conf -C /var/run/wpa_supplicant -B
        sleep 2
        udhcpc -S -i eth0 -s /etc/udhcpc.d/default.script -t15 -T10 -A3 -f -q
• Started my new gmail notifier program using the following in rcS:
        cd /mnt/onboard/.apps/koboGmail
        /usr/bin/python gmail.py > /mnt/onboard/gmail.log 2>&1 &
The actual python program that does the work is quite simple - it uses the pygame library to write to a framebuffer image, but uses a utility called 'full_update', which is part of the kobo weather project, to update the e-ink screen. The program does the following:
• Get the battery status, and create an appropriate icon to show the battery state.
• Get the wifi link status, and create an appropriate icon to show the link state.
• Get the 'atom' feed of the user's gmail account using the url, username and password stored in a configuration file (see the sketch after this list).
• Draw the screen image showing the number of unread emails, plus the sender and subject of the first 10 unread mails, and render the battery and wifi icons onto it.
• Update the Kobo screen with the new image.
• Wait a while (5 seconds at the moment for testing, but I will make it longer in the future - 5 minutes would probably be plenty).
• Repeat indefinitely.
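
The feed-fetching step is the only slightly fiddly part. A sketch of the idea (written here for Python 3 - the build on the Kobo was Python 2, so the urllib calls differ there - and the function name is illustrative):

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://mail.google.com/mail/feed/atom"

    def unread_count(username, password):
        """Fetch the gmail atom feed and return the number of unread mails."""
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, FEED_URL, username, password)
        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
        xml_data = opener.open(FEED_URL).read()
        # The feed reports the unread total in a <fullcount> element (Atom 0.3).
        root = ET.fromstring(xml_data)
        return int(root.find("{http://purl.org/atom/ns#}fullcount").text)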
The source code is in my github repository.

The resulting display is pretty basic, but functional, as shown in the picture.

Things to Do

There are a few improvements I would like to make to this:
1. Make it less power intensive by switching off wifi when it is not needed (it can flatten its battery in about 12 hours, so it will need to be plugged into a mains adapter at the moment).
2. Make it respond to the power switch - you can switch it off by holding the power switch across for about 15 seconds, but it does not shut down nicely - no 'bye' display on the screen or anything like that - it just freezes.
3. Get it working as a USB mass storage device again - it does USB networking at the moment instead, so you have to use ftp to update the software, or log in and use vi to edit the configuration files - not user friendly.
4. Make it respond to the touch screen - I will need to interpret the data that appears in /dev/input for this. The python library evdev should help with interpreting the data, but it uses native C code, so I need a cross-compiler environment for the Kobo to use it, which I have not set up yet. It might be as easy to code it myself, as I will only be doing simple things.
5. Get it to flash its LED to show that there are unread emails - I might have to modify the hardware to add a bigger LED that faces the front rather than the top, too.
6. Documentation - if anyone wants to get this working themselves, they will need to put some effort in, because the above is a long way off being a tutorial. It should be possible to make a Kobo firmware update file that would install it, if people are interested in trying though.