Connecting Users in the Virtual and Physical World
What if… I could team up with other online users browsing the same websites as me? Or ask mobile users on-site whether there is currently a construction site close to the hotel?
What if… I could get the latest tweets about the hotel? Or meet other people who are on-site with me? Or find online users who can provide me with more detailed information about the hotel?
This method performs people counting in real time. The system counts how many people enter (“IN”) or exit (“OUT”) the building using a basic frame-to-frame distance-matching algorithm. This system is useful for evacuating the building when there is an unexpected circumstance such as a fire. Figure 2 shows a general scenario for people counting on a day-to-day basis.
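A minimal, self-contained sketch of the counting step described above, assuming a detector already supplies per-frame person centroids; the virtual line position, matching threshold, and all names are illustrative, not the actual implementation:

```python
# Sketch of frame-to-frame people counting (all names/values hypothetical).
# Each frame is a list of (x, y) person centroids from a detector; a person
# is counted "IN" when their centroid crosses the line y = LINE_Y downwards
# between consecutive frames, and "OUT" when crossing it upwards.

LINE_Y = 100          # assumed pixel row of the virtual counting line
MAX_MATCH_DIST = 50   # max centroid movement between frames for a match

def match(prev, curr):
    """Greedy nearest-neighbour matching of centroids between two frames."""
    pairs, used = [], set()
    for p in prev:
        best, best_d = None, MAX_MATCH_DIST
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((p, curr[best]))
    return pairs

def count_events(frames):
    ins = outs = 0
    for prev, curr in zip(frames, frames[1:]):
        for (px, py), (cx, cy) in match(prev, curr):
            if py < LINE_Y <= cy:
                ins += 1      # crossed the line downwards -> entered
            elif cy < LINE_Y <= py:
                outs += 1     # crossed the line upwards -> exited
    return ins, outs

# Two people: one walks down across the line, one walks up across it.
frames = [
    [(10, 80), (200, 130)],
    [(12, 95), (201, 115)],
    [(14, 110), (202, 90)],
]
print(count_events(frames))  # (1, 1)
```

A production system would use a real detector and a more robust tracker, but the crossing logic stays the same.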
Social ENhanced SurvEillance System
The SENSE system is a next-generation surveillance search engine that integrates sensors and big data extracted from multiple modalities, such as CCTV cameras, social media networks, and human sensors. The SENSE system includes components for:
- Collection and analysis of each form of data (i.e., images, videos, and text) individually.
- Making intelligent sense of the combined data in multiple modalities, including making appropriate reactions to the sensed inputs.
- An event alert engine for multiple channels such as web, email, and mobile applications.
- Interactive user interfaces for easy viewing and searching of event alerts, and for users to report and share information with other people.
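To illustrate the event alert engine's fan-out to multiple channels, here is a hypothetical sketch; the `Alert` fields, channel names, and message formats are assumptions, not the actual SENSE API:

```python
# Hypothetical sketch of a multi-channel alert fan-out (not the SENSE code).
from dataclasses import dataclass

@dataclass
class Alert:
    event: str
    severity: str
    location: str

def notify_web(a):    return f"[web] {a.severity}: {a.event} @ {a.location}"
def notify_email(a):  return f"[email] {a.severity}: {a.event} @ {a.location}"
def notify_mobile(a): return f"[mobile] {a.severity}: {a.event} @ {a.location}"

CHANNELS = {"web": notify_web, "email": notify_email, "mobile": notify_mobile}

def dispatch(alert, subscriptions):
    """Send one alert to every channel the user has subscribed to."""
    return [CHANNELS[ch](alert) for ch in subscriptions if ch in CHANNELS]

msgs = dispatch(Alert("crowd surge", "high", "Gate 3"), ["web", "mobile"])
print(msgs)
```

Keeping each channel behind a uniform function makes adding a new delivery channel a one-line change to the registry.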
For Multi-Camera Collaborative Sensing
SeSaMe Research Centre has built an integrated toolbox which includes various visual analytics engines for multi-camera collaborative sensing. This work aims to provide seamless integration of the various technologies developed in the Centre, as well as fast prototyping of real-world applications. The visual analytics engines enable various conventional video analytic applications such as background modeling, people counting, person/object identification, video summary, visual saliency, and active sensing. These applications aim to provide robust and effective analytics for business intelligence, security applications, and individual analysis.
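One of the conventional analytics listed above, background modeling, can be sketched as a simple per-pixel running average; the update rate, threshold, and tiny grayscale frames below are illustrative assumptions, not the toolbox's actual engine:

```python
# Background modeling via per-pixel running average (parameters assumed).
ALPHA = 0.1    # background update rate
THRESH = 30    # foreground threshold on absolute pixel difference

def update_background(bg, frame):
    """Blend the new frame into the background model."""
    return [[(1 - ALPHA) * b + ALPHA * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame):
    """1 where a pixel differs strongly from the background, else 0."""
    return [[1 if abs(f - b) > THRESH else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

# A static 2x3 scene, then a bright object appears at pixel (0, 1).
bg = [[50, 50, 50], [50, 50, 50]]
frame = [[50, 200, 50], [50, 50, 50]]
mask = foreground_mask(bg, frame)
print(mask)  # [[0, 1, 0], [0, 0, 0]]
bg = update_background(bg, frame)
```

The slow blend lets gradual lighting changes be absorbed into the background while fast-moving people remain foreground.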
The NUS Navigator is an application under development that will navigate a user door-to-door across the entire NUS campus. It incorporates seamless indoor and outdoor navigation and offers various path options, such as a combined pedestrian and commuter-bus route, a pedestrian and driving route, or a pedestrian-only route. Through indoor localization, it can identify the user's current starting location even when indoors, and then work out the shortest route to the destination.
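The shortest-route computation could, for instance, run Dijkstra's algorithm over a multi-modal campus graph; the locations, travel times, and modes in this sketch are made up, not the Navigator's actual data:

```python
# Dijkstra over a multi-modal campus graph (all names/weights are made up).
import heapq

def shortest_route(graph, start, goal):
    """Edges are (neighbour, minutes, mode); returns (total_time, path)."""
    pq = [(0, start, [start])]
    seen = set()
    while pq:
        t, node, path = heapq.heappop(pq)
        if node == goal:
            return t, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, mode in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (t + minutes, nxt, path + [nxt]))
    return None

graph = {
    "COM1":    [("BusStop", 3, "walk")],
    "BusStop": [("UTown", 8, "bus"), ("Library", 14, "walk")],
    "UTown":   [("Library", 4, "walk")],
}
print(shortest_route(graph, "COM1", "Library"))
# (15, ['COM1', 'BusStop', 'UTown', 'Library'])
```

Here the combined walk-and-bus route (15 min) beats walking all the way (17 min); restricting which edge modes are expanded would yield the pedestrian-only option.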
Looking Glass aims to deliver a working system that enables users to use smartphones and wearables to interact with the real world. Augmented reality (AR) systems currently overlay information based on location and general compass orientation. However, interacting virtually with such information in the environment through AR is still in its infancy. With the proliferation of mobile phones and wearables, we foresee applications that will adopt AR as one of their key modes of user interaction.
Readpeer is an annotation management and retrieval system. The website features social reading, document enrichment, and annotation transfer. Readpeer is a cross-platform system with iOS and Android apps, a Chrome plugin, and Readbot. The iOS and Android apps allow users to make video and audio annotations. The Chrome plugin enables users to search for relevant annotations while reading webpages. Readbot connects Readpeer with the physical book: users can take a snapshot of a book's pages and relevant annotations will be returned from Readpeer. Readpeer IVLE is an application of Readpeer in education: students can use it to ask questions and make annotations, and teachers can answer questions using the system.
Automatic content based linking of sensor-annotated smartphone images.
Unstructured photo collections from well-attended ad-hoc events are usually captured from diverse viewpoints and are subject to visual noise such as occlusion and uneven lighting. Autolink is a tool which automatically organizes such photo collections into a hierarchy that lets users navigate from high-context images down to high-detail ones. Autolink requires little training data or prior information, as it leverages content redundancy in the photo collection and the smartphone sensor data associated with the photos to create the hierarchy. This hierarchically structured image collection facilitates the design of applications for navigation and discovery, as well as analytics about user photography patterns, user taste, and content/event popularity.
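A toy sketch of how such a hierarchy might be seeded from sensor data alone: group photos by coarse compass bearing (a shared viewpoint gives high context) and order each group from wide context shots to zoomed-in detail. All field names and thresholds here are assumptions, not Autolink's actual method:

```python
# Toy viewpoint-then-detail grouping (field names/values are assumptions).
def build_hierarchy(photos, sector=90):
    """Bucket photos by coarse compass bearing, then sort each bucket
    by focal length so navigation goes context -> detail."""
    groups = {}
    for p in photos:
        key = int(p["bearing"] // sector)   # coarse viewpoint bucket
        groups.setdefault(key, []).append(p)
    for g in groups.values():
        g.sort(key=lambda p: p["focal_mm"])  # wide shots before close-ups
    return groups

photos = [
    {"id": "a", "bearing": 10,  "focal_mm": 70},   # north, zoomed in
    {"id": "b", "bearing": 20,  "focal_mm": 24},   # north, wide shot
    {"id": "c", "bearing": 200, "focal_mm": 35},   # south
]
h = build_hierarchy(photos)
print([p["id"] for p in h[0]])  # ['b', 'a']  (wide context first)
```

The full system would refine these sensor-based buckets with visual content matching to exploit the redundancy the text mentions.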
Health and Gaming
Playing video games is totally different now. Thanks to the Nintendo Wii, we can play video games using motion control. Such healthy, fun, and intuitive gameplay, enhanced by a person's own body movement, gets the entire family up and off the couch for unforgettable gaming sessions. Sony PlayStation and Xbox 360 have also introduced motion-control gameplay into their gaming systems over the years.
Researchers from computing to public health see unprecedented potential for both personal and societal health in analysing activity and other life-logging data at large scale. However, extracting useful information from this huge amount of data is a big challenge. As a result, existing tools only present intra-day activity data, generally shown as a line plot or a minor variation of it.