Welcome!
Welcome to the Echolab website at Virginia Tech. Our mission is to explore and develop methods that foster empathic interactions among individuals through computational systems. Our research is grounded in the empathy framework depicted below and focuses on three key themes: computational tools for perspective sharing by empathizers, computers as an expressive medium for targets, and the facilitation of empathic communication for both groups. We are also committed to studying how to create environments where targets can be safely vulnerable.
To Prospective Students
Echolab is not recruiting graduate students until otherwise noted.
Echolab is always looking for Ph.D. students who are interested in creating novel interactive systems. We are also looking for exceptional undergraduate and master's students to work with. Please apply to CS@VT.
If you are already at Virginia Tech and would like to do research in Echolab, please fill out the following survey and I will respond. https://forms.gle/gfsvheq8U6YNrGby7
We envision Echolab as…
- A platform where members develop into independent researchers and achieve their career goals.
- An opportunity for individuals to discover that giving—through teaching, helping, sharing, and collaborating—enhances their own growth and success.
- A source of social capital that enables learning from each other.
- A community where inclusion is prioritized, and members feel connected.
- A group that contributes to society and the world by producing and disseminating new knowledge.
- An environment where individuals can be more productive than in any other setting.
- A socio-technical system designed to scale the above visions.
Recent News
- (2024/11) Sang Won Lee was invited to give a talk at CITIC at the University of Costa Rica. (link)
- (2024/11) One SIGCSE poster was accepted.
- Understanding the Effects of Integrating Music Programming and Web Development in a Summer Camp for High School Students
- (2024/9) Congratulations to Donghan Hu for successfully defending his dissertation.
- (2024/9) Congratulations to Donghan Hu for getting a post-doc position at New York University. Looking forward to his new adventure!
- (2024/8) Sang Won Lee was interviewed by WDBJ7 TV about the recent cybergrooming project. (link)
- (2024/7) One ISMAR Journal paper (IEEE TVCG) was accepted.
- Investigating Object Translation in Room-scale, Handheld Virtual Reality
- (2024/7) One VL/HCC paper was accepted.
- Beyond TAP: Piggybacking on IFTTT to Connect Triggers and Actions with JavaScript
- (2024/6) SHARP received an Honorable Mention Award at ACM C&C 2024.
- (2024/5) One more CSCW 2024 paper and CSCW 2024 poster were accepted.
- Investigating Characteristics of Media Recommendation Solicitation in r/ifyoulikeblank (paper)
- Evaluation of Interactive Demonstration in Voice-assisted Counting for Young Children (poster)
- (2024/5) Congratulations to Daniel Vargas, Yi Lu, and Emily Altland for passing their master's thesis exams!
- (2024/5) Many Echolab students will attend CHI workshops with four workshop papers!
- (2024/5) One C&C 2024 paper was accepted!
- SHARP: Exploring Version Control Systems in Live Coding Music
- (2024/4) Sang Won Lee was awarded a CHCI planning grant with collaborators to study how generative AI can affect the values of expertise. (link)
- (2024/3) Sang Won Lee was awarded an NSF SaTC program grant as a Co-PI. A collaborative team of researchers will focus on empowering adolescents to be resilient to cybergrooming. (link)
- (2024/3) One CHIWORK paper was accepted.
- Unpacking Task Management Tools, Values, and Worker Dynamics
- (2024/2) One CHI Paper was accepted.
- Exploring the Effectiveness of Time-lapse Screen Recording for Self-Reflection in Work Context
- (2024/2) Sang gave a seminar at the GVU Lunch Lectures at Georgia Tech. (Talk Video)
- (2024/2) Sang gave a seminar at the CS Colloquium Talks, University of Pittsburgh. (Talk Video)
- (2024/2) Sang gave a seminar at the Human-Computer Interaction Institute at Carnegie Mellon University.
- (2024/1) Two CSCW 2024 papers were accepted.
- Understanding Multi-user, Handheld Mixed Reality for Group-based MR game
- Understanding the Relationship Between Social Identity and Self-Expression Through Animated Gifs on Social Media
- (2023/11) Sang was invited to give a talk at the HCI@KAIST Fall Colloquium 2023.
- (2023/11) Sang was invited to give a guest lecture in Prof. Kenneth Huang's class, Crowdsourcing & Crowd-AI Systems, at Penn State University.
News older than (roughly) a year is available here.
Research Highlights
(ongoing) Facilitating socially constructed learning through a shared, mobile-based virtual reality platform in informal learning settings
TBD
TBD