Audio Interfaces

Audio Fingerprint Applications

The advertising and media industry has grown rapidly in the past few decades, aligning with the increasing popularity of mobile phones. As a result, advertising firms try new technologies to capture the target audience attractively. Since its method of identification is independent of the application, audio fingerprinting has been used for various purposes, including content-based audio retrieval. This project demonstrates one potential application of an audio fingerprinting algorithm in the media industry in the form of a mobile application that detects and identifies advertisement tracks and notifies the user of related details and offers. The audio fingerprinting algorithm extracts different attributes from an audio file, processes them into audio fingerprints, and compares them with a database of audio fingerprints to find the closest-matching audio file. The proposed application has been tested on a small-scale database and shown respectable performance.
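
As a rough illustration of the matching step (not necessarily the exact algorithm used in the project), the sketch below hashes pairs of coarse spectrogram peaks into fingerprints and matches a query against a hash index by vote counting; the peak picking and parameters are simplified placeholders.

    import numpy as np
    from scipy.signal import spectrogram

    def peak_fingerprints(samples, rate, fan_out=5):
        """Hash pairs of coarse spectrogram peaks into (hash, time) fingerprints."""
        _freqs, times, spec = spectrogram(samples, fs=rate, nperseg=1024)
        peak_bins = spec.argmax(axis=0)          # strongest frequency bin per frame
        prints = []
        for i in range(len(peak_bins)):
            for j in range(1, fan_out + 1):      # pair each peak with a few successors
                if i + j < len(peak_bins):
                    dt = round(float(times[i + j] - times[i]), 2)
                    prints.append((hash((int(peak_bins[i]), int(peak_bins[i + j]), dt)),
                                   float(times[i])))
        return prints

    def best_match(query_prints, index):
        """index: dict mapping fingerprint hash -> list of track ids in the database."""
        votes = {}
        for h, _ in query_prints:
            for track in index.get(h, []):
                votes[track] = votes.get(track, 0) + 1
        return max(votes, key=votes.get) if votes else None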

In collaboration with Andrew Putra Kusuma, Vajisha U. Wanniarachchi, and Ng Wee Keong, NTU, Singapore.

Video UbiComp 2018
Twittener

Despite its popularity, Twitter is less accessible when the user is driving, exercising, or multitasking in ways that demand attention. To address this limitation, Twittener was initially introduced to let Twitter users listen to interesting tweets instead of reading them in the conventional way. After an initial preliminary study, Twittener was enhanced by aggregating several news channels and Google Trends to provide the latest information to the user. It uses natural language processing on the crawled tweets to generate a sentiment value for each tweet. Text-to-speech technology, which converts the categorized tweets into audio form, is used to enhance the usability of the system. Given the massive amount of information provided by Twitter and news sources, Twittener was built with the goal of helping the user filter and break down information automatically, making it more efficient to stay up to date with only the relevant information.
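
A minimal sketch of the listen-to-interesting-tweets idea, assuming TextBlob for sentiment and pyttsx3 for text-to-speech as stand-ins (the project's actual NLP and TTS components are not specified here); the threshold is illustrative.

    from textblob import TextBlob   # sentiment stand-in; the actual NLP pipeline may differ
    import pyttsx3                  # offline text-to-speech stand-in

    def read_interesting_tweets(tweets, threshold=0.3):
        """Speak only tweets whose sentiment polarity is strong enough."""
        engine = pyttsx3.init()
        for text in tweets:
            polarity = TextBlob(text).sentiment.polarity   # ranges from -1.0 to 1.0
            if abs(polarity) >= threshold:                 # keep strongly positive/negative tweets
                engine.say(text)
        engine.runAndWait()

    read_interesting_tweets(["Great news for commuters on the new line today!",
                             "Nothing much happening."])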

Twittener aims to improve the user experience of Twitter by proposing an alternative way to interact with it, allowing users to listen to interesting tweets instead of reading them in the conventional way. This research is being carried out under AcRF Tier 1 Grant, M4011802.020, S$50,000.00 (2017). An IRB application (IRB-2018-12-025) has been approved for this research. The initial version of this research was published at the ACM Creativity & Cognition conference (C&C 2017).

In collaboration with Erik Cambria and Ng Wee Keong, NTU, Singapore.

Video Video Video WWW ACM C&C 2017 Cyberworlds(CW) 2019
Audio Narrowcasting and Privacy for Multipresent Environment

Our group is exploring interactive multi- and hypermedia, especially applied to virtual and mixed reality multimodal groupware systems. We are researching user interfaces to control source-to-sink transmissions in synchronous groupware (such as teleconferences, chatspaces, and virtual concerts). We have developed two interfaces for privacy visualization of narrowcasting (selection) functions in collaborative virtual environments (CVEs): one for a workstation WIMP (windows/icons/menus/pointer) GUI (graphical user interface), and one for networked mobile devices, namely 2.5- and 3rd-generation mobile phones. The interfaces are integrated with other CVE clients, interoperating with a heterogeneous multimodal groupware suite, including stereographic panoramic browsers and spatial audio backends and speaker arrays. The narrowcasting operations comprise an idiom for selective attention, presence, and privacy: an infrastructure for rich conferencing capability.
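
A minimal sketch of how such narrowcasting selection might be evaluated, based on our reading of the mute/solo (source) and deafen/attend (sink) idiom; the predicate below is illustrative rather than the group's actual implementation.

    def audible(source, sink, muted, soloed, deafened, attended):
        """Minimal narrowcasting predicate: a sink hears a source unless the source
        is muted (or excluded because another source is soloed), or the sink is
        deafened (or excluded because another sink is attended)."""
        source_ok = source not in muted and (not soloed or source in soloed)
        sink_ok = sink not in deafened and (not attended or sink in attended)
        return source_ok and sink_ok

    # With source "s2" soloed, "s1" is suppressed for every (undeafened) sink.
    print(audible("s1", "k1", muted=set(), soloed={"s2"},
                  deafened=set(), attended=set()))   # False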

In collaboration with Michael Cohen and the Spatial Media Group, University of Aizu, Japan.

Wireless and Mobile Computing 2009 Awareness Systems 2009 IEICE 2006 CIT 2006 ICIA 2005 ICDCS 2004

Ambient Media

AmbiKraf: Non-emissive Fabric Display

This project presents AmbiKraf, a non-emissive fabric display that subtly animates patterns on common fabrics. We use thermochromic inks and Peltier semiconductor elements to achieve this technology. With this technology we have produced numerous prototypes, from animated wall paintings to pixelated fabric displays. The ability of this technology to subtly and ubiquitously change the color of the fabric itself has enabled us to merge different fields and technologies with AmbiKraf. In addition, with an animated room divider screen, AmbiKraf merged its technology with Japanese Byobu art to narrow the gap between traditional arts and contemporary technologies. Through the AmbiKraf Byobu art installation and other installations, we discuss the impact of this technology as a ubiquitous fabric display. Focusing on improvements to some limitations of the existing system, we present our future vision of merging this technology into more application fields, thus making it a platform for ubiquitous interactions on our daily peripherals.
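
A hypothetical sketch of the kind of control loop that could drive one thermochromic pixel with a Peltier element; the actuation temperature, hysteresis, and command interface are illustrative, not AmbiKraf's actual firmware.

    def peltier_command(temp_c, pixel_on, actuation_c=31.0, hysteresis_c=1.0):
        """Return a (mode, duty) command for the Peltier driver: heat the fabric past
        the ink's actuation temperature to change the pixel's colour, or cool it back
        below that point to restore it."""
        if pixel_on and temp_c < actuation_c + hysteresis_c:
            return ("heat", 1.0)
        if not pixel_on and temp_c > actuation_c - hysteresis_c:
            return ("cool", 1.0)
        return ("hold", 0.0)

    print(peltier_command(24.5, pixel_on=True))    # ('heat', 1.0)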

In collaboration with Roshan Lalintha Peiris, Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Video Multi. Tools and Applications 2013 Inter. with Computers 2013 MindTrek 2011 AMI 2011 SIGGRAPH 2009
dMarkers: Ubiquitous Dynamic Markers for Augmented Reality

This paper presents a proof-of-concept technology for a novel concept of dynamic markers for Augmented Reality. By dynamic we mean markers that can change in response to external stimuli. The paper describes the use of ambient dynamic Augmented Reality markers as temperature sensors. To achieve this technology, we print patterns on an AR marker using thermochromic inks with various actuation temperatures. As the temperature gradually changes, the marker morphs into a new marker for each temperature range. We present our preliminary results for three temperature ranges and discuss how this work can be extended and applied in the future.
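
A hypothetical sketch of how a detected dynamic marker could be read back as a temperature band; marker IDs and temperature ranges below are placeholders, not the values used in the paper.

    # Hypothetical mapping from the marker pattern currently recognized by the AR
    # tracker to the temperature band it implies (one entry per ink layer).
    MARKER_TO_RANGE = {
        "marker_cold": (None, 25.0),   # base pattern visible below ~25 °C
        "marker_warm": (25.0, 31.0),   # first ink layer cleared
        "marker_hot":  (31.0, None),   # second ink layer cleared above ~31 °C
    }

    def infer_temperature(detected_marker_id):
        """Translate the detected marker into a coarse temperature reading."""
        low, high = MARKER_TO_RANGE[detected_marker_id]
        if low is None:
            return f"below {high} °C"
        if high is None:
            return f"above {low} °C"
        return f"between {low} and {high} °C"

    print(infer_temperature("marker_warm"))   # between 25.0 and 31.0 °C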

In collaboration with Roshan Lalintha Peiris, Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

AMI 2011 VRCAI 2011
Empathetic Living Media

We describe a new form of interactive living media used to communicate social or ecological information as an empathetic ambient medium. In the fast-paced modern world, people are generally too busy to monitor significant social or human aspects of their lives, such as time spent with their family, their overall health, or the state of the ecology. By quantifying such information digitally, it is semantically coupled into living microorganisms, E. coli. Through the use of transformed DNA, the E. coli then glow or dim according to the data. The core technical innovation of this system is an information system based on a closed-loop control system through which digital input controls the input fluids to the E. coli, and thereby controls the output glow of the E. coli in real time. Thus, social or ecological information is coupled into a living, organic medium through this control-system capsule, providing a living medium that promotes empathy. We provide user design and feedback results to verify the validity of our hypothesis, and provide not only system results but also generalized design frameworks for empathetic living media in general.
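
A minimal sketch of the closed-loop idea, assuming a simple proportional controller mapping the gap between the target "glow" (derived from the digital input) and the measured glow to a pump rate; gains and units are illustrative.

    def pump_rate(target_glow, measured_glow, gain=0.8, max_rate=1.0):
        """Proportional-control sketch: adjust the inducer-fluid flow so the measured
        glow of the E. coli tracks the target derived from the digital input."""
        error = target_glow - measured_glow
        return max(0.0, min(max_rate, gain * error))

    # Social data mapped to a 0..1 target glow; the optical sensor reads 0.2.
    print(pump_rate(target_glow=0.6, measured_glow=0.2))   # about 0.32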

In collaboration with Tim Merritt, Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Video ACM DIS 2008

Augmented & Mixed Reality

Augmented Reality Hologram

Augmented reality (AR) is a technology that combines the real and virtual worlds by displaying computer-generated content over a view of the real world. It overlays virtual objects such as graphics, GPS data, sound, or video on top of the real-world environment. Nowadays, AR has become one of the most popular research fields in computer science; for example, augmented reality technologies allow users to view augmented objects on their mobile phones anywhere. People have always wanted to create their own holographic content and show their creations to their peers, but are restricted by the high cost of professional lighting and cameras. Inspired by this, the project aims to build a well-developed mobile application for the Android system. The application targets multiple industries that need to create their own holographic objects at minimal cost. The project shows how the functionalities are implemented, and testing is conducted to ensure the accuracy of the system.

In collaboration with Gan Jia Jun, NTU, Singapore.

Video Cyberworlds(CW) 2019
Socially Mediated Augmented Reality Applications for User Enhancement and Customer Engagement

Enriched customer engagement is inevitable given the enormous growth of every industry, and insurance is certainly no exception. With the rapid development of digital technologies and competition between companies in the same or alternative industries, businesses try to excite their customers to gain a competitive advantage. Conversely, understanding this competitiveness, customers change their expectations and demand better service and engagement. As a result, digital technologies such as Bluetooth Low Energy (BLE) beacon technology have superseded existing methods of customer engagement. Although many businesses have adopted beacon technology to enhance customer engagement, there is a lack of evidence of its adoption in the insurance sector. Therefore, this paper aims to use beacon technology to enhance customer engagement in insurance companies.
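
A minimal sketch of beacon-driven engagement, using the standard log-distance path-loss estimate for BLE RSSI; the calibrated TX power, distance bands, and messages are illustrative, not the deployed system's values.

    def beacon_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
        """Log-distance estimate commonly used with BLE beacons; tx_power_dbm is the
        calibrated RSSI at 1 m and is beacon-specific, so the default is illustrative."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    def engagement_message(rssi_dbm):
        """Map estimated proximity to a hypothetical in-branch engagement message."""
        distance = beacon_distance_m(rssi_dbm)
        if distance < 2.0:
            return "Welcome back! Ask our advisor about your policy renewal offer."
        if distance < 10.0:
            return "You are near our branch - drop in for a quick coverage check."
        return None   # too far away: stay silent

    print(engagement_message(-63))   # roughly 1.6 m away, so the near-range offer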

Engaged with the AIA Edge Lab (Edge Lab initiative (AIA) Grant: M4061770.020, S$148,000.00 [2016]) on producing novel interfaces and innovative user-engagement methods to develop inventive and innovative research results. Two IRB applications (IRB-2016-08-019) were approved for this research and the necessary user studies were successfully conducted. A preliminary version of this research was submitted to the 10th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA 2017).

In collaboration with Vajisha U. Wanniarachchi and Yohan Fernandopulle, NTU, Singapore.

SIGGRAPH ASIA 2017
Origami Recognition System using Natural Feature Tracking

This paper introduces a system that can recognize different types of paper folding by users. The system allows users to register and use their desired paper in the interaction, and detects the folding using the Speeded-Up Robust Features (SURF) algorithm. The paper also describes a paper-based tower defense game developed as a proof of concept of our method. This method can be considered an initial step toward seamlessly migrating the meaningful traditional art of origami into the digital world as part of interactive media.
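
A minimal sketch of the register-and-match flow; the paper uses SURF, whereas the stand-in below uses OpenCV's freely available ORB detector, and the match threshold is illustrative.

    import cv2

    def fold_match_count(registered_img, live_img, max_hamming=40):
        """Count good feature matches between the user-registered paper image and the
        live camera frame; a sharp drop suggests part of the pattern is now folded away.
        ORB stands in here for the SURF detector used in the paper."""
        orb = cv2.ORB_create()
        _kp1, des1 = orb.detectAndCompute(registered_img, None)
        _kp2, des2 = orb.detectAndCompute(live_img, None)
        if des1 is None or des2 is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        return sum(1 for m in matches if m.distance < max_hamming)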

In collaboration with Kening Zhu, Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

ISMAR 2010 ICAT 2010 CHI EA 2012
Science Museum Mixed Reality Digital Media Exhibitions for Children

This paper introduces digital media exhibitions developed for the Singapore Science Center. Digital media has become a new form of literacy for children, and allowing them to experience advanced media technology hands-on is the best way for them to understand these new technologies. To achieve this goal, we have developed exhibitions at the Singapore Science Center such as Blog Wall, Media Mirror, and Evolution Table. These exhibits use advanced mixed reality and human-interaction technologies to give children new experiences of learning and playing.

In collaboration with Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

DMAMH 2007
Metazoa Ludens: Mixed-Reality Interaction and Play for Small Pets and Humans

Although animals and pets are so important to families and society, in modern urban lifestyles we can spend only a little time with our animal friends. Interactive media should aim to enhance not only human-to-human communication but also human-to-animal communication. Thus, we promote a new type of interspecies media interaction which allows human users to interact and play with their small pet friends (in this case, hamsters) remotely via the Internet through a mixed-reality-based game system, “Metazoa Ludens.” We used a two-pronged approach to scientifically examine the system. First, and most importantly, a body condition score study was conducted to evaluate the positive effects on the hamsters. Second, the method of Duncan was used to assess the strength of the hamsters' preference toward Metazoa Ludens. Lastly, the effectiveness of this remote interaction for the human users, as an interactive gaming system with their pet friends (hamsters), was examined based on Csikszentmihalyi's Flow theory. The results of both studies indicate positive remote interaction between human users and their pet friends using our research system. This research is aimed not only at providing specific experimental results on the implemented research system, but also at offering a wider lesson for human-to-animal interactive media. Therefore, as an addition, we present a detailed framework, inferred from the lessons learned, suited to human-to-animal interaction systems in general.

In collaboration with Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

IEEE Transactions on Systems, Man, and Cybernetics: Systems 2011

Crowdsourcing

Snake Pattern Detection

Snake bites have been a serious health problem for a long time in rural countries, especially in Africa. These countries lack proper healthcare systems, resources, and medical officers to treat snake bites. To reduce snake bites and raise public awareness, SnakeAlert provides the public with information on snakes and their locations through crowdsourcing. The public can report a snake using the SnakeAlert mobile application, and the location is shown to other users. Given the reported snake locations, users can take preventive measures when travelling to these areas to prevent snake-bite incidents. This project is a continuation of the SnakeAlert system. The objective of this project is to develop an Apple iOS version of the SnakeAlert system and further improve the current system. In addition, it extends image recognition into the mobile application by implementing TensorFlow Lite in the Android and Apple iOS applications. The mobile application includes a downloadable map which works offline and alerts users when they approach reported snake locations.
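
A minimal sketch of the proximity-alert step, computing haversine distances from the user's position to crowdsourced report locations; the alert radius is illustrative.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two GPS points."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def nearby_reports(user_lat, user_lon, reports, radius_km=1.0):
        """Return the crowdsourced snake reports that fall inside the alert radius."""
        return [r for r in reports
                if haversine_km(user_lat, user_lon, r["lat"], r["lon"]) <= radius_km]

    print(nearby_reports(6.9271, 79.8612,
                         [{"lat": 6.9300, "lon": 79.8600, "species": "cobra"}]))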

In collaboration with Ponnampalam Gopalakrishnakone, Department of Anatomy, Yong Loo Lin School of Medicine, NUS, Singapore.

Video    WWW Digital Health 2017
Web and mobile based GPS trail planner

There has been high usage of GPS features in both web and mobile applications. The four main features are location, mapping, tracking, and navigation; of these, navigation is the most frequently used. Numerous navigation applications on the market allow users to enter their starting and destination coordinates. Because these applications show users how to navigate through a display of the route on a two-dimensional map, they still fall short in interactivity. GPS Trail Planner is a web-based application which provides the interactivity that applications in the market lack. It uses a JavaScript library called Hyperlapse.js to enable users to view the selected route as a 360° hyperlapse video, formed by stitching together Google Street View images. In addition to the web-based application, the project also develops a mobile-based counterpart, which improves the flexibility of the application's usage. The main features implemented in GPS Trail Planner are the API server, the visualization of the route, and the flow of the hyperlapse video. The API server handles the connection between GPS Trail Planner and the database. The route visualization displays the hyperlapse video of the selected route; motion blur and additional images are added to improve the hyperlapse video.
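
A minimal sketch of one piece of the pipeline: sampling frame locations along a selected route before panoramas are fetched and stitched; the linear lat/lon interpolation and spacing are simplifications.

    def sample_waypoints(route, steps_per_segment=10):
        """Linearly interpolate extra points along each route segment; these are the
        locations a hyperlapse would fetch Street View panoramas for. Real spacing
        would be computed in metres rather than raw lat/lon fractions."""
        if not route:
            return []
        points = []
        for (lat1, lon1), (lat2, lon2) in zip(route, route[1:]):
            for i in range(steps_per_segment):
                t = i / steps_per_segment
                points.append((lat1 + t * (lat2 - lat1), lon1 + t * (lon2 - lon1)))
        points.append(route[-1])
        return points

    print(len(sample_waypoints([(1.3521, 103.8198), (1.3550, 103.8250)])))   # 11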

In collaboration with Andreas Chrisna Mayong, Vajisha U. Wanniarachchi, and May Oo Lwin, NTU, Singapore.

Video    Cyberworlds(CW) 2018
Mo-Buzz: Socially Mediated System for Vector-Borne Diseases Surveillance, Engagement, and Communication

Mo-Buzz is a social media system which can be used to prevent dengue in Sri Lanka and potentially in the rest of South and Southeast Asia. This work was published in prestigious journals including Acta Tropica (Impact Factor: 2.218), Health Education & Behavior (Impact Factor: 2.312), and the Journal of Medical Internet Research (Impact Factor: 4.532).

Research work related to the Socially Mediated System for Vector-Borne Diseases Surveillance, Engagement, and Communication (Technical Disclosure: TD/248/14) received widespread publicity and news coverage in Sri Lanka (Sri Lankan newspapers and TV channels over the period of March-April 2018). It has made a significant impact in helping to combat dengue among school children through the ‘DengueFreeChild’ app. The app enables users to proactively report dengue or suspected dengue fever so that action can be taken around the affected premises to address mosquito-breeding areas, and to alert other parents to keep a keen eye on dengue-like symptoms in their own children. This app was built using our Mo-Buzz framework (Technical Disclosure: TD/248/14), which was licensed to the Skoll Global Threats Fund.

In collaboration with May Oo Lwin and COSMIC (Centre of Social Media Innovations for Communities) Center, NTU, Singapore.

Video Video JMIR 2016 Health Education Research 2015 Health Educ Behav 2015 Acta Tropica 2014 HIMI 2013

Cultural Computing

The Hidden Shrines of Singapore: Mapping and Narrating Multi-Religious Heritages

The Hidden Shrines of Singapore is a digital humanities research project that aims to document and explore different shrines in Singapore with the use of augmented reality. Additionally, owing to the success of documenting shrines in Singapore, we have included shrines from Sri Lanka as well. The project comprises two parts, a website and an Android application. The website is used as a repository to crowdsource information and images about shrines: it allows users to upload images of shrines and their details, and staff (users with administrative rights) can approve or reject shrine submissions. The Android application allows users to download shrines for offline use; the shrines are grouped for download with the use of clustering algorithms. Once the shrines are downloaded, users can use augmented reality to explore and discover shrines in Singapore: the app detects images of shrines and retrieves their relevant information from the database, in the form of text or video. In addition, users can mark shrines they are interested in as favourites and save them to a watchlist for later viewing. Moreover, the app provides a convenient way to search for the shrine nearest to the user's location. The planning, development, and technology used in this system are detailed within this report. This research is being carried out under NHB HRG Grants, S$70,000 (2018).
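
A minimal sketch of grouping shrine coordinates into offline download bundles; k-means is used here as a stand-in, since the app's actual clustering algorithm is not detailed above.

    import numpy as np
    from sklearn.cluster import KMeans

    def shrine_download_groups(coords, n_groups=5):
        """Group shrine (lat, lon) coordinates into offline download bundles; k-means
        is a stand-in for whichever clustering method the app actually uses."""
        coords = np.asarray(coords)                       # shape: (n_shrines, 2)
        labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(coords)
        return {g: coords[labels == g].tolist() for g in range(n_groups)}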

In collaboration with Sujatha Arundathi Meegama, School of Art, Design and Media, NTU, Singapore.

Video WWW Cyberworlds(CW) 2019
Kawaii/Cute interactive media

Cuteness in interactive systems is a relatively new development, yet it has its roots in the aesthetics of many historical and cultural elements. Symbols of cuteness abound in nature, as in creatures of neotenous proportions that draw in the care and concern of a parent or protector. We provide an in-depth look at the role of cuteness in interactive systems, beginning with a history. We particularly focus on the Japanese culture of Kawaii, which has made a large impact around the world, especially in entertainment, fashion, and animation. We then take the approach of defining cuteness in contemporary popular perception. User studies are presented, offering an in-depth understanding of the key perceptual elements identified as cute. This knowledge opens the possibility of creating a cute filter that can transform inputs and automatically produce cuter outputs. This paper also provides an insight into the next generation of interactive systems that bring happiness and comfort to users of all ages and cultures through the soft power of cute.

In collaboration with Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Universal Access in the Information Society 2012 Art and Technology of Entertainment Computing and Communication 2010
CULTURAL ROBOTICS: The Culture of Robotics and Robotics in Culture

In this paper, we investigate the concept of Cultural Robotics with regard to the evolution of social robots into cultural robots in the 21st century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of robotics developers and the learning ability of current robots, cultural attributes are in the process of being formed, which would define the new concept of cultural robotics. Given the importance of the embodiment of robots to the sense of presence, the influence of robots on communication culture is anticipated. The sustainability of a robotics culture based on diversity, across cultural communities and various acceptance modalities, is explored in order to anticipate the creation of different cultural attributes between robots and humans in the future.

In collaboration with Hooman Samani, Elham Saadatian, Doros Polydorou, Natalie Pang, and Ryohei Nakatsu, IDMI, NUS, Singapore.

International Journal of Advanced Robotic Systems 2013 Culture and Computing 2013
Personalized Cultural Information for Mobile Devices

Many systems are available for providing sightseeing information on mobile devices. The main drawback of such systems is that they are incapable of providing information according to users' feelings. In this approach, we have developed a mobile interface which provides sightseeing information according to the user's feelings. The system uses "thought forms" (concatenation, balance, division, unification, and crisscross) to add a swaying element within the word relationships. Users can interact with the system on their mobile devices and obtain interesting information, such as sightseeing information related to cultural heritage sites in Kyoto.

In collaboration with Naoko Tosa and Tosa Lab, Kyoto University, Japan.

Culture and Computing 2011
Kabuki Mono: The art of Kabuki transformed

The limitations created by geographical and technological boundaries have profoundly influenced the development and sustainability of traditional artistic cultures, which have long been popular among the locals to whom the art form is native. Today, however, with the connectedness computing brings, it is possible to transform such art forms into globally accepted traditions. Japan is a nation that flourishes with tradition. From traditional Kabuki to pop-culture-oriented Manga and Cosplay, the fundamental motivation behind these traditions and modern cultures remains the ardent human desire to transform oneself into another who is more appealing in character and aesthetics. Kabuki Mono is a system that binds tradition with modernization, leveraging the power of computing.

In collaboration with Naoko Tosa and Tosa Lab, Kyoto University, Japan.

ACM CiE 2012 Culture and Computing 2012
Modeling Literary Culture Through Interactive Digital Media

In the rapidly transforming landscape of the modern world, people unconsciously refrain from interacting in public spaces, confining communications that were once extensive and universal to the home and to relatively individual settings. Mass connectivity and technological advancement have created new cultural values, thus altering human perception of the world. This state of affairs is jeopardizing some of the cultural identities that have endured for centuries, shaping the values and associated customs of numerous generations. Furthermore, computer technology has become deeply integrated with modern culture, which prompted us to introduce and explore avenues of cultural computing on the familiar ground of modern society. With the intention of promoting the values of distinct cultures, which will greatly assist in enhancing social relationships, we have developed a framework to communicate literature through digital media, which provides the platform used to create Poetry Mix-up.

In collaboration with Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Video Video Virtual Reality 2011 ACM CiE 2011 ACE 2009 ACE 2009
BlogWall: social and cultural interaction for children

The short message service (SMS) is extremely popular today. Currently, it is mainly used for peer-to-peer communication; however, SMS could be used as a public media platform to enhance social and public interactions in an intuitive way. We have developed BlogWall to extend SMS to a new level of self-expression and public communication by combining art and poetry. Furthermore, it provides a means of expression in a language that children can understand, as well as new forms of social communication. BlogWall can also be used to educate children while they interact and play with the system. The most notable feature of the system is its ability to mix up and generate poetry in multiple languages, such as English, Korean, and Chinese poems, or Japanese Haiku, all based on the SMS. The system gives children a cultural experience without them noticing, and is thus a step toward new forms of cultural computing.
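
A toy sketch of the mix-up idea: matching SMS keywords against a line bank and shuffling the hits into a short poem; the line bank and selection rule are illustrative stand-ins for the actual multilingual generation method.

    import random

    # Tiny illustrative line bank; the real system draws on multilingual corpora.
    LINE_BANK = {
        "rain":  ["soft rain writes on the wall", "umbrellas bloom in grey streets"],
        "night": ["the night keeps my words", "lanterns answer the dark"],
    }

    def mix_up(sms_text, lines=3):
        """Pick corpus lines whose keyword appears in the SMS and shuffle them into
        a short poem; a crude stand-in for the actual generation method."""
        chosen = [line for word, pool in LINE_BANK.items()
                  if word in sms_text.lower() for line in pool]
        random.shuffle(chosen)
        return "\n".join(chosen[:lines]) or sms_text

    print(mix_up("Caught in the rain again tonight"))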

In collaboration with Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Mobile Information Communication Technologies 2011 Advances in Human-Computer Interaction 2008 MobileHCI 2007

Educational Research

Assessing Source Code of Undergraduate/Postgraduate with Code Quality Assessment Tool

This project focuses on developing algorithms and introduces a set of metrics specific to assessing the quality of undergraduate projects. This research is being carried out under NTU-Edex Grants, M4081772.020.500000, S$18,000.00 (2016). The initial version of this research was published at the 8th Annual International Conference on Computer Science Education: Innovation and Technology (CSEIT 2017), Global Science and Technology Forum.
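
As an illustration of the kind of metric such a tool might compute (not CQAT's actual metric set), the sketch below measures docstring coverage of a Python submission.

    import ast

    def docstring_coverage(source):
        """One illustrative metric: the share of functions in a submission that
        carry a docstring (CQAT's real metric set is broader than this)."""
        tree = ast.parse(source)
        funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
        if not funcs:
            return 1.0
        return sum(1 for f in funcs if ast.get_docstring(f)) / len(funcs)

    print(docstring_coverage("def f():\n    '''adds one'''\n    return 1\n"
                             "def g():\n    return 2\n"))   # 0.5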

In collaboration with Vajisha U. Wanniarachchi, Chaman Wijesiriwardana, and Prasad Wimalaratne, University of Colombo, Sri Lanka.

CSEIT 2017
Productive Failure via Educational Games for Tertiary Education

The goal of this project is to develop a framework for designing Tertiary Educational GAmes (TEGA). This framework will enable lecturers and tutors to create single- and multi-user games capturing the essence of a lesson. This research is being carried out under MOE2016-2-TR04, S$285,866.40 (2017).

In collaboration with Jeffrey Hong Yan Jack, Anupam Chattopadhyay, and Hock Soon Seah, NTU, Singapore.

ACE 2018 Cyberworlds(CW) 2019 VRCAI 2019
Personalized interventions during e-lecture

This project develops machine learning techniques that provide personalized, adaptive, and constructive feedback to the learner during an e-lecture. The research is being carried out under NTU-EdeX Grants, M4082010.020, S$40,000 (2017). The initial version of this research was published at the 15th International Conference on Control, Automation, Robotics and Vision (ICARCV).

In collaboration with Jagath C. Rajapakse, NTU, Singapore.

ICARCV 2018
A multimodal virtual anatomy learning tool for medical education

Computer-aided learning (CAL) has great potential to facilitate learning, and several CAL approaches have been adopted in medical education. In this paper, we present a novel software platform which we developed to provide a virtual learning environment supporting anatomy teaching and learning. This learning platform provides accurate, interactive models derived from actual CT and fMRI scans. The virtual 3D environment is particularly useful in helping students identify key anatomical structures and their complex spatial relationships. The intuitive computer-graphics interface and virtual reality 3D environment make learning interesting and engaging. The platform also allows instructors to easily customize the anatomy model by adding supplementary digital learning material, including hyperlinks, images, animation, audio, video, and PowerPoint presentations, all of which are supported within the platform.

The paper titled A Multimodal Virtual Anatomy Learning Tool for Medical Education was selected as the best paper of the Learning Technologies / Strategies for Assessing Student Learning and Teaching session at the 2nd International Conference on Education, Training and Informatics (ICETI 2011), held in Orlando, Florida, USA, on March 27-30, 2011.

In collaboration with Ponnampalam Gopalakrishnakone, Department of Anatomy, Yong Loo Lin School of Medicine, NUS, Singapore.

ICETI 2011

Multimodal Interfaces & Interactive Systems

Doctor Robot with Physical Examination for Skin Disease Diagnosis and Telemedicine Application

This paper illustrates various aspects of a doctor robot for physical examination, such as skin disease diagnosis. In modern society, most people are too busy to have a physical examination, and some who live in remote areas find it difficult to go to the hospital. If everyone had a Doctor Robot at home, then whenever they feel unwell they could simply stand in front of it, and it would check their body condition automatically and give the diagnosis result in text and voice. It is therefore suitable for a variety of users, including visually impaired people. At the same time, the results are uploaded to the cloud, so human doctors can see them from the hospital if necessary and can help via telemedicine. Users can retrieve their recorded results by face: when the Doctor Robot sees a user, it links them to the data recorded in the system.
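
A heavily hedged sketch of the diagnosis step, assuming a pre-trained image classifier; the model file, input size, and label set are placeholders rather than the robot's actual pipeline.

    import numpy as np
    import tensorflow as tf

    LABELS = ["eczema", "psoriasis", "healthy skin"]   # hypothetical label set

    def diagnose(image_path, model_path="skin_model.h5"):
        """Classify a photographed skin patch with a pre-trained CNN; the model file,
        input size, and labels are placeholders, not the robot's actual pipeline."""
        model = tf.keras.models.load_model(model_path)
        img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
        x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0
        probs = model.predict(x)[0]
        best = int(np.argmax(probs))
        return LABELS[best], float(probs[best])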

In collaboration with Hooman Samani and AIART Lab, Department of Electrical Engineering, National Taipei University, Taiwan.

ICSSE 2018
Subtle, Natural and Socially Acceptable Interaction Techniques for Ringterfaces: Finger-Ring Shaped User Interfaces

This study analyzes interaction techniques in 16 previously proposed user interface concepts that utilize the form factor of a finger ring, i.e. ringterfaces. We categorized the ringterfaces according to their interaction capabilities and critically examined how socially acceptable, subtle, and natural they are. Through this analysis we show which kinds of ringterfaces are likely to become general-purpose user interfaces and what factors drive their development toward commercial applications. We highlight the need for studying context awareness in ambient intelligence environments and end-user programming in future research on ringterfaces.

In collaboration with Mikko J. Rissanen, Samantha Vu, Natalie Pang, and Schubert Foo, NTU, Singapore.

Video DAPI 2013 CHI EA 2013 CHI EA 2013
An integrated physical and virtual pillbox for patient care

Based on the three constructs of alerts, care, and education, we describe the design and development of CuePBox, an instrumented pillbox with both a physical and a virtual entity, which aims to address the need for enhanced patient care by monitoring medication adherence on a continuous basis. CuePBox is facilitated by social media technologies that support patient-to-patient and patient-to-healthcare communities in exchanging state-of-affairs information and testimonials, with advice and encouragement towards speedy recovery. The care component, along with the alerts (audio, visual, and vibration) integrated within CuePBox, aims to empower patients to manage their health conditions.

In collaboration with Yin-Leng Theng and COSMIC (Centre of Social Media Innovations for Communities) Center, NTU, Singapore.

Video Video Video CHI EA 2013 CHI EA 2014
Paper Computing

Paper, as a traditional material for art and communication, shows great potential as a medium for organic user interfaces, given its ubiquity and flexibility. However, controlling and powering the sensors and actuators that enable interactive paper-crafts has not been fully explored. We present a method of selective inductive power transmission (SIPT) to support interactive paper-crafts. The novelty of this method is that the power transmitter can be controlled to selectively activate one specific receiver at a time through inductive power transfer with multiple receivers. This is achieved by changing the output frequency of the power transmitter to match the impedance of the receivers. The receivers can be embedded or printed to drive paper-crafts. Based on an inductor-capacitor oscillating circuit and a function generator with a power amplifier, we developed two different prototypes of SIPT. By comparing the performance of the two prototypes, we discuss the advantages and disadvantages of the two systems and their applications in different contexts of paper-crafts. In addition, we propose instructions for using SIPT in developing interactive paper-crafts. With this technology and these instructions, we hope to enable users to easily design new types of paper-craft systems without being concerned about arranging wire connections to a power supply on a massive scale.
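
A minimal sketch of the selection principle: each receiver is an LC tank with its own resonant frequency, and the transmitter is tuned to the target's resonance; component values are illustrative.

    from math import pi, sqrt

    def resonant_frequency_hz(inductance_h, capacitance_f):
        """Resonant frequency of an LC receiver tank: f = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2 * pi * sqrt(inductance_h * capacitance_f))

    # Hypothetical receivers embedded in two paper-craft elements.
    RECEIVERS = {"crane_wing": (10e-6, 100e-9), "flower_petal": (10e-6, 47e-9)}

    def transmitter_frequency_for(target):
        """Tune the transmitter to the target receiver's resonance so that only that
        element draws enough power to actuate; component values are illustrative."""
        inductance_h, capacitance_f = RECEIVERS[target]
        return resonant_frequency_hz(inductance_h, capacitance_f)

    print(round(transmitter_frequency_for("crane_wing")))   # about 159155 Hz (~159 kHz)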

In collaboration with Kening Zhu, Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Interacting with Computers 2013 AMI 2011 SIGGRAPH 2011
Low cost infant monitoring and communication system

This paper proposes a low-cost, mobile-based monitoring and advisory system that continuously monitors a baby and remotely updates the mother on the child's status. The technology involves continuously measuring temperature, heart rate, and motion and sending the readings to a server where the data are processed. The server analyzes the received data, sends the processed biological information about the baby to the mother, and generates an alert if the baby's condition is found to be abnormal. These alert messages are transmitted to support systems and nearby health clinics in emergency situations. Advisory first-aid information is also sent to the mother so that she can take immediate action. Thus, this ubiquitous system would enhance mothers' awareness of their baby's health status.
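
A minimal sketch of the server-side abnormality check; the vital-sign limits are illustrative only and clinical thresholds would have to come from medical guidance.

    # Illustrative limits only; clinical alert thresholds must come from physicians.
    LIMITS = {"temperature_c": (36.0, 38.0), "heart_rate_bpm": (100, 160)}

    def check_vitals(sample):
        """Server-side check: return the vitals in a received sample that fall outside
        their normal range, so an alert can be sent to the mother and support systems."""
        alerts = {}
        for key, (low, high) in LIMITS.items():
            value = sample.get(key)
            if value is not None and not (low <= value <= high):
                alerts[key] = value
        return alerts

    print(check_vitals({"temperature_c": 38.6, "heart_rate_bpm": 142}))   # flags temperature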

In collaboration with Ponnampalam Gopalakrishnakone (Department of Anatomy, Yong Loo Lin School of Medicine, NUS, Singapore) and Adrian David Cheok (Mixed Reality Lab, NUS, Singapore).

IEEE Colloquium on Humanities, Science and Engineering 2011
Robot device and platform for social networking

A small microcontroller-based robotic toy includes various rich features within its small footprint. User input sensing includes a smooth-scrolling resistive or capacitive touch-sensing pad, primarily for child-friendly menu navigation. Pressure-activated squeeze areas on the robot's surface facilitate the exchange of special gifts and emoticons through a network. Additionally, the robotic toy allows a user to experience multimodal engagement: visually via a miniature OLED graphics display, audibly through an embedded audio system for producing sounds, and through haptics using a vibrotactile effects generator.

Petimo, a social networking robot for children, won the first prize in an international innovation competition in Milan, Italy (Apr 22nd 2010). The prize is InventiON: concorso di idee per inventori (InventiON: competition of ideas for inventors); Petimo won the first prize in the ICT (information and communication technologies) track. The competition is sponsored by the Municipality of Milan and the Chamber of Commerce, and is co-organized by a service company (Alintec) together with Nova-Sole 24 Ore (the Italian financial times). The main sponsor of the competition is 3M. (http://www.innovationcircus.it/2009/)

Petimo: Winner of the C4C (Como for Children) Competition at IDC 2009 (The 8th International Conference on Interaction Design and Children): Learning and Playing in the pre-school of the future, Como, Italy, June 2009 (http://www.idc09.polimi.it/c4c.html/)

In collaboration with Adrian David Cheok and Mixed Reality Lab, NUS, Singapore.

Video Patent ACM CiE 2011 IDC 2009 SIGGRAPH ASIA 2009

Research Grants

    CO-PI for the project titled, The Hidden Shrines of Singapore: Mapping and Narrating Multi-Religious Heritages, NHB HRG Grants, S$ 70,000 (2018).

    CO-PI for the project titled, Personalized interventions during e-lecture, NTU-EdeX Grants, M4082010.020, S$ 40,000 (2017).

    CO-PI for the project titled, Productive Failure via Educational Games for Tertiary Education, MOE-TRF Grant: MOE2016-2-TR04, S$285,866.40 (2017).

    Principal Investigator for the project titled, Socially mediated augmented reality for enhanced user experience and customer engagement, Edge Lab initiative (AIA), M4061770.020, S$148,000.00 (2016).

    Principal Investigator for the project titled, Assessing Source Code of Undergraduate/Postgraduate with Code Quality Assessment Tool (CQAT), NTU-Edex Grants, M4081772.020.500000, S$18,000.00 (2016).

    Collaborator for the project titled, Potential Acceptance of a Mobile Phone Based Influenza Communication System among Adolescents, Parents, and Teachers: Role of Peers and Social Influence, MOE AcRF Tier 1, 2013-T1-002-062, S$84,880.00 (2013).