Early development of reading habits significantly influences a child's cognitive and linguistic growth. Shared reading, in which a child and a caregiver read together, is essential for building vocabulary, comprehension, and critical thinking. In traditional joint reading sessions with printed books, the adult often reads all dialogue in a single voice and at a single pace, which can be monotonous and cognitively demanding for parents. With the advent of digital technologies, the landscape of traditional reading has evolved, bringing new modalities such as audiobooks and e-books with voice narration. These digital resources offer numerous benefits, including auditory cues and multimedia effects that can emulate the positive impact of …
TaleMate: a reading platform for parents and children

Trauma is the physical, emotional, or psychological harm caused by deeply distressing experiences. With increased online interaction, social media content can trigger existing trauma and even retraumatize a person. Some platforms add content warnings for common triggers such as self-harm, violence, and suicidal thoughts. Similarly, some users add trigger warnings to posts dealing with sensitive content, but there is no universal way that social media platforms handle these warnings, and most platforms rely on users to add warnings to their own posts (whether text, photo, video, or some combination of these). The misunderstanding of how trigger and content warnings should be …

Investigating Social Media Users' Perceptions on Trigger and Content Warnings

IFTTT enabled a novel programming experience for connecting different intelligent services. For example, one can create an algorithm that sends a text message (e.g., "Don't forget your umbrella!") if it is raining on a particular day. However, it only allows users to create simplistic algorithms (if-this-then-that). For example, it does not allow you to write an algorithm that requires two different conditions (e.g., "Send me a text message if it is raining and if it is a weekday."). This is because the simple if-this-then-that structure cannot express complex algorithms that require multiple conditions, iteration (e.g., incrementing counters), or randomness. …

iThem – programming internet-of-thing

In the digital age, individuals are given various ways to portray themselves and manage self-presentation online. A growing body of research has demonstrated that people express and perceive online identity through textual and visual cues such as emoji and avatars.
Animated GIFs, distinct from other media, are highly engaging, versatile, and malleable, and can communicate layers of hidden meaning. Because they carry copious non-verbal cues, animated GIFs can serve as personal devices of self-embodiment and effective manifestations of emotion and affect like no other medium, yet little work has investigated how users portray themselves via animated GIFs. In this project, we …

Understanding Animated GIF

The advancement of machine learning and the availability of big data have opened up possibilities for data-driven analysis and decision-making across nearly every industry. One example is the construction industry, which increasingly deploys various sensing technologies in the field, e.g., GPS, RFID, etc. Data from these sensors can provide insight into task planning, safety risks, and worker productivity. Construction specialists need to make sense of this data, but it is often difficult to do so without knowing a general-purpose programming language. In this project, we are developing an environment called Octave (Observable Connections between Tables, Algorithms, and Visualization in …

Octave: Making Sense of Sensor Data

If you frequently watch videos on platforms such as YouTube or TikTok, odds are you have come across videos with titles such as "REACTING TO…" or "Funny reactions of…". This genre is called "reaction videos": videos displaying people's reactions to another video clip. The creators, or "reactors", provide their comments both verbally and non-verbally, sometimes with exaggerated facial expressions. There are reaction videos for almost anything: video games, movie trailers, full-length anime episodes, and music videos.
Genre-dedicated channels can attract hundreds of thousands of subscribers and millions of views per video. These channels …

Watch me watch: Understanding watching reaction videos

Project SHARP aims to assist live coders as they code by providing version-control-like functionality for each part of the piece they create. Live coding is the practice of using a programming language to create something on the fly. This project focuses on Tidal Cycles, software written in Haskell that works with SuperCollider to synthesize sounds. The Project SHARP add-on to Tidal gives history information to the live coder and allows them to revert to any point in the file's history, as well as in an individual line's history. The software accomplishes this by displaying a …

Project SHARP

Voice-based conversational assistants are growing in popularity on ubiquitous mobile and stationary devices. To reap the full benefits of these assistants, it is important to understand how users interact with such systems. My research project, Speech to Task, aims to inform the design of conversational assistants that help users record voice memos on the go and maintain an automatically generated hierarchical list of action items in the form of a to-do list. By analyzing how users interact with these systems through the Wizard of Oz research method, we plan to categorize the different types of interactions, as well …

Speech to Task – generating to-do lists on the go

In online spaces like YouTube and Reddit, algorithms play an increasingly significant role in shaping users' newsfeeds by personalizing content. This personalization often filters out content opposed to users' interests, creating filter bubbles.
Such filters can limit users' exposure to diverse content, depriving them of opportunities to reflect on their interests in comparison to others'. In this work, we investigate how exchanging recommendations with strangers can help users discover new content and reflect. We test this idea by developing OtherTube, a browser extension for YouTube that displays strangers' personalized YouTube recommendations. OtherTube lets users (i) create an anonymized profile …

OtherTube: Exchanging YouTube Recommendations with Strangers

This project aims to understand the state of the art of current technologies for humanitarian demining in Colombia, as a medium to generate relevant technological applications that contribute to the humanitarian demining process for both civilian and military populations. The question that best defines the research purpose is: how can the ability to divert or avoid landmines become more accessible to the public in rural areas of the world? Through a set of interviews with different stakeholders involved in humanitarian demining, we aim to understand four key aspects of the process: Current Practice (positionality of stakeholders), Information Flows / …

MineSafe: Understanding needs for socio-technical interventions in rural areas of the world affected by Landmines

People with limited digital literacy struggle to keep up with our increasing dependence on websites for everyday tasks like paying bills or booking flights online. Remote assistance from peers may help, but it at times leads to communication issues due to a lack of shared visual context, as people with low digital literacy are not acquainted with the terminology of web navigation.
To bridge the gap between in-person support and remote help, we are developing Remo, a web browser extension that allows helpers to easily create interactive tutorials by demonstration. These tutorials will be embedded …

Remo: Generating Interactive Tutorials by Demonstration for Online Tasks

This is an abstract from our recent grant awarded by NSF (NSF #2119011). The summary, proposal, and references are available here (Download). Sang Won Lee (Principal Investigator), Myounghoon Jeon (Co-Principal Investigator), Jeffrey Ogle (Co-Principal Investigator), Phyllis Newbill (Co-Principal Investigator), Chelsea Lyles (Co-Principal Investigator). Virtual reality (VR) technologies have great potential in STEM education because they provide immersive learning experiences that one cannot have in the real world. However, interaction through VR head-mounted displays is often a solitary experience, isolating learners from their social and learning context. This makes it challenging to learn through collaboration with peers and instructors. Furthermore, many learners are …

Facilitating socially constructed learning through a shared, mobile-based virtual reality platform in informal learning settings

As my master's thesis project, I created a mobile music instrument for audience participation. In this performance, audience members use their mobile phones as musical instruments. While the audience's phones are dialed up by live performers in Dialtones Telesymphony, I try to give ownership of the musical instruments back to the audience and let them literally play music for themselves. Each audience member plays a simple musical interface and generates sound from his/her own handheld device, while a master musician on stage controls the set of notes audience members can play.
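A minimal sketch of this note-gating idea, assuming the master publishes an allowed note set that each audience phone maps its buttons onto; the class names and the pentatonic default are illustrative assumptions, not echobo's actual implementation:

```python
# Hypothetical sketch of echobo-style note gating: a master musician publishes
# an allowed set of MIDI pitches, and each audience phone can only trigger
# pitches from that set.

class Master:
    """Holds the set of MIDI pitches the audience is currently allowed to play."""
    def __init__(self, allowed=None):
        self.allowed = set(allowed or [])

    def set_allowed(self, pitches):
        self.allowed = set(pitches)

class AudiencePhone:
    """Maps a simple interface (a button index) onto the currently allowed notes."""
    def __init__(self, master):
        self.master = master

    def play(self, button_index):
        notes = sorted(self.master.allowed)
        if not notes:
            return None  # the master has not opened any notes yet
        return notes[button_index % len(notes)]  # wrap around the allowed set

master = Master()
phone = AudiencePhone(master)
master.set_allowed([60, 62, 64, 67, 69])  # e.g., a C major pentatonic subset
print(phone.play(0))  # lowest allowed note: 60
print(phone.play(6))  # wraps around: 62
```

Because every phone reads from the same shared set, the master can reshape the ensemble's harmony at any moment without touching individual devices.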
The layering of sounds will create a unique harmonic …

echobo, Networked musical instruments for audience participation

SGLC: collaborative textual performance environment for laptop orchestra

SGLC is an extension of LOLC. In this extended environment, laptop musicians generate real-time music notation on the fly by typing commands, and instrumental musicians sight-read the generated score for collaborative improvisation. Here the outcome of text-based interaction is real-time notation, not music. The generated score is rendered in various forms that give instrumental performers room for interpretation (e.g., an open-form score, such as a graphical or textual score). A user study showed that the system effectively integrates acoustic instrument players into a mixed ensemble. We believe …

SGLC, Live Coding Real-Time Notation

One limitation of voice-based interfaces is that they are not skimmable and the information does not persist over time. A voice utterance is ephemeral, and listening to a recording again is time-consuming. How do we make auditory displays more skimmable and persistent? Some motivating examples: imagine reading a paragraph in a book versus hearing it in an audiobook. How long would it take to get the gist of it by reviewing it again? Or consider skimming the structure of program code on a screen versus through screen-reader software, or seeing a picture from a distance. Can we have …

Persistent and skimmable voice-based information

with Akito Van Troyer, Jason Freeman, Andrew Colella, Shannon Yao, Sidharth Subramanian, Scott McCoid, and Jung-Bin Yim

In LOLC, musicians play the laptop as an instrument to create rhythmic motives based on a collection of recorded sounds.
The environment encourages musicians to share their code with each other, developing an improvisational conversation over time as the material is looped, borrowed, and transformed. LOLC is supported by a grant from the National Science Foundation as part of a larger research project on musical improvisation in performance and education (NSF CreativeIT #0855758). Download: You can download LOLC here. Papers: Evaluating Collaborative Laptop Improvisation with LOLC (paper). Sang Won Lee, Jason Freeman, Andrew …

LOLC: collaborative textual performance environment for laptop orchestra

Crossole is a musical meta-instrument that lets you switch between levels of control over the music. The word "Crossole" is a portmanteau of "crossword" and "so-lee (소리)", which in Korean means sound. Literally, Crossole is a crossword of sound. The chord progression of a piece is visually presented as a set of virtual blocks that eventually start to resemble a crossword puzzle. With the aid of Kinect sensing technology, you can either build music at a high level through gestures that assemble the score (blocks), or play note by note by stepping into the low level (grid). In the way that …

Crossole

Listen to music that sounds like your life. Stickies Music is an interactive sound installation that reflects a personal schedule represented by a set of stickies. The goal was to design a musical interface for novice users. To make the interface accessible to novices, I wanted to get rid of the notion of performing an instrument. Most of the time, an instrument generates sound when a musician applies physical gestures (bowing, plucking, blowing, or pressing). The sound varies with the gesture itself and the position in the interface where the gesture is applied.
Instead, in Stickies Music, what makes music is not …

Stickies Music

Sharing earphones is romantic. Especially for a couple, sharing earphones is a wonderful thing because two people in the same place are listening to the same music at the same time. While listening, they can read each other's facial expressions, watch each other nodding, sing along, and talk about how they like the music instantly. Conversation brings new inspiration, and they know exactly which song to listen to next. This whole process gives them memorable moments because it is about two people getting to know each other's musical tastes and understanding each other's emotions. Features: Playlist Synchronization – listen to the same music at …

SharePhone – synchronized virtual music listening
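Playlist synchronization of this kind could be sketched as follows; this is a minimal illustration assuming both phones share a playlist and a common start timestamp, not SharePhone's actual protocol:

```python
# Hypothetical sketch of SharePhone-style playlist synchronization: both phones
# agree on a shared start time for the playlist, and each derives the current
# track and offset locally, so the two listeners hear the same moment of the
# same song without streaming audio to each other.

def sync_position(track_lengths, start_time, now):
    """Given track lengths in seconds, the shared playlist start timestamp,
    and the current timestamp, return (track_index, seconds_into_track)."""
    elapsed = (now - start_time) % sum(track_lengths)  # loop the playlist
    for i, length in enumerate(track_lengths):
        if elapsed < length:
            return i, elapsed
        elapsed -= length

playlist = [200, 180, 240]  # three track lengths, in seconds
print(sync_position(playlist, start_time=0, now=250))   # (1, 50): 50 s into track 2
print(sync_position(playlist, start_time=0, now=1000))  # (2, 0): wrapped around
```

With this scheme, only the playlist and one timestamp need to be exchanged; each device then seeks to the computed offset whenever it rejoins or drifts.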