Accessible software with Brain Computer Interface and Open Data – Mozilla Festival 2018
The slides for the session are available here – https://speakerdeck.com/bhoomika10/accessible-software-with-brain-computer-interface-and-open-data
On 27th October, 2018, I had the chance to present a session on ‘Accessible Software with Brain Computer Interface and Open Data’ in the Diversity and Inclusion space.
We began the session by introducing the speakers, Bhoomika Agarwal and Abhiram Ravikumar, and the agenda. Following this, the attendees introduced themselves so that we could learn what they expected from the session. While some of the participants were neurologists and programmers, others had no background in the field. Hence, we began right from the basics of Brain Computer Interface (BCI).
The first topic that we looked into was the definition of disability. To help the participants put themselves in the shoes of persons with disability, we conducted a simple activity: we asked them to stay still with their eyes closed for two minutes. During this two-minute window, we saw people open their eyes, grow restless, and more. Once the activity was done, we asked them to imagine a lifetime spent this way.
We then looked at the dictionary definition of disability and tried to redefine it. As humans, we cannot fly or see at night; to overcome this, we use tools such as airplanes and night-vision goggles. In the same spirit, we redefined disability as nothing but a shortcoming of the body that can be overcome with the right tools. Today, persons with disability are restrained more by the way society looks at them than by their physical impairments. We spoke about the challenges they face: limited access to technology, to the web, and to physical spaces, as well as pity from society. To overcome this, we must make our spaces, our software, and anything else we design accessible. Inclusion also matters because it brings unique and diverse perspectives into the design process. As developers, it is our social responsibility to make everything we design accessible.
Once we had redefined disability, we introduced BCI as a tool to overcome severe physical and motor disability. Our use case is people who are severely physically disabled or locked in and have no motor functions, yet whose brains are just as active as anyone else's. One such example is Stephen Hawking, the renowned theoretical physicist. BCI directly reads the electrical impulses generated by the brain and translates them into artificial motor functions or actions; it is a medium through which a person's brain directly controls a machine.
We then delved into the technicalities of how BCI works. The electrical impulses produced by the brain can be classified by frequency and mapped to different states of mind such as sleep, concentration, and cognitive and motor activity. Once signals in the required frequency band have been acquired, they are pre-processed to remove noise, features are extracted, and those features are classified. The classified features can then be used as device commands for machines ranging from wheelchairs and robotic arms to virtual reality and gaming. The BCI media player is one such application that opens a door into the world of software.
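As a rough illustration of the first stage of that pipeline, the sketch below band-filters a signal into the conventional EEG frequency bands and reports the dominant one. This is a minimal sketch on synthetic data, not the session's actual code; the sampling rate and band boundaries are common textbook values I have assumed, not values from the talk.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # sampling rate in Hz (a typical consumer-headset value, assumed)

# Conventional EEG bands (Hz); exact boundaries vary across the literature.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def band_powers(signal):
    """Mean power of the signal within each EEG band."""
    return {name: float(np.mean(bandpass(signal, lo, hi) ** 2))
            for name, (lo, hi) in BANDS.items()}

# Synthetic 2-second "recording": a 10 Hz (alpha/mu range) oscillation
# buried in noise.
np.random.seed(0)
t = np.linspace(0, 2, 2 * FS, endpoint=False)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

powers = band_powers(eeg)
dominant = max(powers, key=powers.get)
print(dominant)  # the 10 Hz component should make alpha dominate
```

A real acquisition step would of course read from an EEG headset rather than generate a sine wave, but the filter-then-measure-power structure is the same.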
Currently, BCI hardware is available in various forms and can be scaled easily; scaling the software to multiple applications, however, incurs large additional costs, and the training time of most existing systems is quite high. The BCI media player aims to solve these issues by using minimal hardware to scale up the software easily, thereby reducing costs. It relies on motor imagery: the electrical impulses generated by the brain during imagined movement are the same as those generated during actual movement. Mu waves in the 8–12 Hz range, produced in the sensorimotor cortex, are used for motor imagery. To scale easily, we use combinations of binary digits, where imagined left-hand movement maps to a '0' and imagined right-hand movement maps to a '1'. Pre-processing is done with a bandpass filter, followed by dimensionality reduction using Common Spatial Patterns (CSP) and feature classification using Linear Discriminant Analysis (LDA). We achieved a classification accuracy of 74% and demonstrated a proof of concept of how the binary combination scales easily with a demo of the BCI media player.
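The CSP-plus-LDA stage described above can be sketched end to end. This is not the project's actual implementation: the trials below are synthetic, and the channel count, sampling rate, and trial layout are invented for illustration; the left/right classes stand in for the '0'/'1' imagined movements.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def csp_filters(X0, X1, n_pairs=2):
    """Common Spatial Pattern filters from two classes of trials,
    each array of shape (trials, channels, samples)."""
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C0, C1 = mean_cov(X0), mean_cov(X1)
    # Generalized eigenproblem: the extreme eigenvectors maximise the
    # variance of one class relative to the other.
    _, vecs = eigh(C0, C0 + C1)
    pick = np.r_[:n_pairs, -n_pairs:0]   # first and last eigenvectors
    return vecs[:, pick].T               # shape (filters, channels)

def log_var_features(W, X):
    """Log-variance of each CSP-filtered trial: the standard
    motor-imagery feature."""
    Z = np.einsum("fc,tcs->tfs", W, X)
    return np.log(np.var(Z, axis=2))

def fit_lda(X, y):
    """Two-class Linear Discriminant Analysis; returns a predictor."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # pooled scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return lambda X: (X @ w + b > 0).astype(int)

# Synthetic trials: 4 channels, 1 s at 128 Hz; the two classes carry
# extra 10 Hz (mu-band) power on different channels.
def make_trials(n, boosted_channel):
    t = np.linspace(0, 1, 128, endpoint=False)
    X = rng.standard_normal((n, 4, t.size))
    X[:, boosted_channel] += 3 * np.sin(2 * np.pi * 10 * t)
    return X

X_left, X_right = make_trials(40, 0), make_trials(40, 3)  # '0' vs '1'
W = csp_filters(X_left[::2], X_right[::2])    # fit CSP on training trials
X = log_var_features(W, np.concatenate([X_left, X_right]))
y = np.array([0] * 40 + [1] * 40)

predict = fit_lda(X[::2], y[::2])             # train on even-indexed trials
acc = float(np.mean(predict(X[1::2]) == y[1::2]))
print(f"held-out accuracy: {acc:.2f}")
```

In a real system, the predicted 0/1 labels from successive imagery trials would be concatenated into binary codes and mapped to media-player commands such as play, pause, or next track, which is how a two-class classifier scales to many commands.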
We concluded with the thought that technology is powerful and that we should use it to build tools that help persons with disability overcome their shortcomings. Diversity and inclusion are essential considerations that all of us must be aware of and keep in mind when designing any product.