Resynthesizing reality

Driving vivid virtual environments from sensor networks

Don Derek Haddad, Gershon Dublon, Brian Mayton, Spencer Russell, Xiao Xiao, Kenneth Perlin, Joseph A. Paradiso

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The rise of ubiquitous sensing enables the harvesting of massive amounts of data from the physical world. This data is often used to drive the behavior of devices, and when presented to users, it is most commonly visualized quantitatively, as graphs and charts. Another approach for the representation of sensor network data presents the data within a rich, virtual environment. These scenes can be generated based on the physical environment, and their appearance can change based on the state of sensor nodes. By freely exploring these environments, users gain a vivid, multi-modal, and experiential perspective into large, multi-dimensional datasets. This paper presents the concept of "Resynthesizing Reality" through a case study we have created based on a network of environmental sensors deployed at a large-scale wetland restoration site. We describe the technical implementation of our system, present techniques to visualize sensor data within the virtual environment, and discuss potential applications for such Resynthesized Realities.
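The abstract describes scene appearance changing with the state of sensor nodes. As an illustrative sketch only (not the authors' implementation; the node fields, value ranges, and parameter names below are all assumptions), environmental readings might be normalized and mapped onto visual parameters of a virtual scene like this:

```python
# Illustrative sketch: map a hypothetical sensor node's environmental
# readings onto visual parameters of a virtual scene.

def normalize(value, lo, hi):
    """Clamp and scale a raw reading into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def scene_params(node):
    """Derive per-node visual parameters from environmental readings.

    Hypothetical readings: temperature (deg C), humidity (%), light (lux).
    """
    t = normalize(node["temperature"], -10.0, 40.0)
    h = normalize(node["humidity"], 0.0, 100.0)
    l = normalize(node["light"], 0.0, 100000.0)
    return {
        "sun_intensity": l,                  # brighter sky with more ambient light
        "fog_density": 0.8 * h * (1.0 - l),  # humid, dim conditions render as fog
        "tint": (t, 0.5, 1.0 - t),           # warm readings shift the palette red
    }

params = scene_params({"temperature": 25.0, "humidity": 60.0, "light": 50000.0})
print(params)
```

A renderer polling live sensor data could re-evaluate such a mapping each frame, so the virtual wetland's lighting and atmosphere track conditions at the physical site.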

Original language: English (US)
Title of host publication: ACM SIGGRAPH 2017 Talks, SIGGRAPH 2017
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450350082
DOIs: 10.1145/3084363.3085027
State: Published - Jul 30 2017
Event: ACM SIGGRAPH 2017 Talks - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2017 - Los Angeles, United States
Duration: Jul 30 2017 - Aug 3 2017

Other

Other: ACM SIGGRAPH 2017 Talks - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2017
Country: United States
City: Los Angeles
Period: 7/30/17 - 8/3/17

Fingerprint

  • Virtual reality
  • Sensor networks
  • Sensors
  • Wetlands
  • Sensor nodes
  • Restoration

Keywords

  • Ubiquitous Sensing
  • Virtual Environments

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Software
  • Computer Vision and Pattern Recognition

Cite this

Haddad, D. D., Dublon, G., Mayton, B., Russell, S., Xiao, X., Perlin, K., & Paradiso, J. A. (2017). Resynthesizing reality: Driving vivid virtual environments from sensor networks. In ACM SIGGRAPH 2017 Talks, SIGGRAPH 2017 (Article 3085027). Association for Computing Machinery, Inc. https://doi.org/10.1145/3084363.3085027

