Stony Brook Team Led by Professor I.V. Ramakrishnan Receives $1M+ Award to Develop Next-Generation Screen Magnifier
Smartphones have revolutionized the way people live, work, and shop. For the roughly 253 million people worldwide with low vision (according to the World Health Organization), a screen-magnification option is vital, but the current options ("Zoom" on iPhone and "Magnifier" on Android) often fall short in functionality and ease of use. A cross-disciplinary team of researchers, led by Professor I.V. Ramakrishnan, Associate Dean for Strategic Initiatives in the College of Engineering and Applied Sciences and a Professor in the Department of Computer Science, has received a $1.1 million award to address these shortcomings by creating a next-generation screen magnifier.
Funded by a three-year grant from the National Institutes of Health/National Eye Institute, Ramakrishnan, together with Xiaojun Bi (Computer Science), Christian Luhman (SBU Psychology), Vikas Ashok (Old Dominion University; SBU PhD '19), and Syed Billah (Penn State University; SBU PhD '19), will work to improve the user experience by developing "CxZoom," an assistive-technology software program for the Android platform. CxZoom will address the current magnification issues by:
- Performing object-aware magnification by identifying the objects in the graphical interface and compacting the space between them so that contextually related objects remain close together in the magnified view (see the illustrative sketch after this list)
- Leveraging untapped built-in sensors (accelerometer, geomagnetic field, and barometric pressure) to expand the default surface gestures with surfaceless natural gestures, making magnification operations easy to learn and perform with only one hand
- Incorporating a novel keyboardless gesture-based text entry and editing technique to eliminate the text-entry difficulties that arise with virtual keyboards in magnification mode
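The object-aware magnification idea can be illustrated at a low level. The Kotlin sketch below shows one plausible way, on Android, to enumerate the visible objects in a graphical interface through the platform's standard accessibility APIs; the service name, structure, and reflow step are illustrative assumptions and are not drawn from the CxZoom implementation itself.

```kotlin
// Illustrative sketch only: enumerating on-screen GUI objects via Android's
// accessibility APIs, the kind of per-object information an object-aware
// magnifier could use to keep contextually related elements close together.
// The class and method names here are hypothetical, not part of CxZoom.
import android.accessibilityservice.AccessibilityService
import android.graphics.Rect
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class ObjectAwareMagnifierService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // On each accessibility event, collect the visible objects and their bounds.
        val root = rootInActiveWindow ?: return
        val objects = mutableListOf<Pair<CharSequence, Rect>>()
        collectVisibleObjects(root, objects)
        // A magnifier could now reflow `objects` so related items stay adjacent
        // in the zoomed view instead of being pushed off-screen.
    }

    // Walk the accessibility tree, recording a label and screen bounds
    // for every node that is currently visible to the user.
    private fun collectVisibleObjects(
        node: AccessibilityNodeInfo,
        out: MutableList<Pair<CharSequence, Rect>>
    ) {
        if (node.isVisibleToUser) {
            val bounds = Rect()
            node.getBoundsInScreen(bounds)
            val label = node.text ?: node.contentDescription ?: node.className
            if (label != null) out.add(label to bounds)
        }
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { collectVisibleObjects(it, out) }
        }
    }

    override fun onInterrupt() {
        // No-op: nothing to clean up for this sketch.
    }
}
```

In a real app such a service would also have to be declared in the manifest and granted accessibility permission; the point of the sketch is only that the platform already exposes the per-object labels and screen bounds that an object-aware magnifier would compact.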
Together, these three capabilities will make smartphones far more usable for people with low vision, reducing barriers to their productivity and empowering them to use these devices to the fullest possible extent to participate in the digitized economy.
In addition, on the scientific front, the algorithms and gestures developed will open up new research directions, including intelligent magnification methods, augmented surfaceless gestures for screen magnification, and keyboardless text entry. On the end-user front, CxZoom will help low-vision users handle digital content much faster, improving their productivity and their access to education and employment, thereby furthering the purposes of the Rehabilitation Act and the 21st Century Communications and Video Accessibility Act.
About the Researchers
I.V. Ramakrishnan is the Associate Dean for Strategic Initiatives in the College of Engineering and Applied Sciences and a Professor in the Department of Computer Science. His research interests include artificial intelligence, computational logic, the combination of machine learning and computational logic, information retrieval, and computer accessibility.
Xiaojun Bi is an Assistant Professor in the Department of Computer Science. His research interests include human-computer interaction, mobile computing, interactive systems, interaction techniques, and theoretical issues in UI design.
Christian Luhman is an Associate Professor of Cognitive Science in the Department of Psychology in the College of Arts and Sciences. His research interests include decision making, learning, and computational modeling.