voice, stepper motors, laser cutter, uCs, LEDs, water jet cutter
The boundary between subject and object is being blurred ever further by the creation of new types of computational objects. Especially when these objects take the form of robotic creatures, we are forced to question the powerful impact of the object on the person. Couple this with the expression of internal, unspoken experience through the making of non-speech sounds, and we have a situation that demands new thoughts and new methodologies. This thesis works through these questions via the design and study of syngvab, a robotic marionette that moves in response to human non-speech vocal sounds. I draw from the world of puppetry and performing objects in the creation of syngvab, the object and its stage, showing how this old tradition is directly relevant to the development of non-anthropomorphic, non-zoomorphic robotic creatures. I show how this mongrel of an object requires different methodologies of study, pulling from actor-network theory to examine syngvab symmetrically with the human participants. The results of a case study interaction with syngvab support the contention that non-speech sounds, as drawn out by a robotic creature, are a potent means of exploring and investigating the unspeakable.
syngvab and syngvaa were my Master’s thesis projects at the MIT Media Lab.
plywood, motors, radio transmission modules, uCs, h-bridges
The division between subject and object, agent and non-agent, has long been philosophically dubious, with physical manifestations of automatons the exception rather than the rule. Now we are increasingly faced with computational objects and relational artifacts that put into question cherished notions of human agency and intentionality. syngva is a creature that, through evolutionary processes, develops idiosyncratic movements in response to singing. syngva serves two parallel roles. For the user, syngva enables a form of non-linguistic reflection, serving as a catalyst for novel vocal behaviors provoked by the motions of the object. For myself, syngva acts as a sociological probe, allowing me to study in-situ relationship formation, agency, and control in response to an “intelligent” creature. The evaluation approach draws heavily from actor-network theory (ANT), a methodology that in part places objects on the same ontological level as human agents. This re-centering of agency intimates a different way of looking at the person-object dyad, one that focuses on the interactions themselves without reference to pre-existing theories.
A note on naming: The word “syngva” comes from Old Norse, meaning simply “to sing”. I see this project taking a number of forms as it develops. Rather than enumerating each new revision with the suffixes “Version 1.0”, “Version 2.0”, and so on, I have decided to add letters to the end of the word instead, moving to the next letter of the alphabet with each new revision. Thus the first version is “syngvaa”, the second is “syngvab”, and so on.
A fashion event featuring innovative and experimental works in computational apparel design, interactive clothing, and technology-based fashion.
This production provides an exhibitory, creative outlet for students to present their works. Each project [re]interprets the conceptual goal of a seamless relationship between technology and fashion.
The last fashion event at the Media Lab occurred in 1997 with the enormous wearables show. The zeitgeist [and hence the tone of the event] was that of the beginning of the internet-optimistic era, with rosy visions of the future. Now, with the elegant ubiquity of cell phones and iPods [and nary a head-mounted display in sight], we’re ready to redefine and reinvent the form and function of clothing within a technological scope.
Clothing communicates. It identifies, it connects, it remembers.
seamless will feature clothing that speaks to us on a personal level, re-evaluating our relationship with clothes, ourselves, others, and our environment. Street-savvy and culture-conscious, these are real clothes that inspire and provoke.
Press included a cover story in ID Magazine, a feature article in the Boston Globe, and a mention on CNN.com.
seamless is a fashion event featuring innovative and experimental works in computational apparel design, interactive clothing, and technology-based fashion. each project [re]interprets the conceptual goal of a seamless relationship between technology and fashion. these are real clothes that inspire and provoke.
seamless version 2.0 featured original fashions created by students of MIT, RISD, Parsons, and NYU, plus young designers from Boston, New York, Seattle, and Cleveland. The fashion show displayed innovative and experimental works that reinvent how we think about clothing and the body. The designs approach this reinvention from an array of perspectives that include the physical, psychological, social, technological, political, educational, and aesthetic.
blender, laser cutter, Tangible Media with Hiroshi Ishii
amia is the first device within a framework that we call “amiable media”: technologies that aim to integrate the physical and digital worlds to give rise to new forms of interpersonal communication.
We ran a simple user survey beforehand to get an idea of how people communicate with loved ones, what that form of communication misses compared to face-to-face interaction, and whether people would be interested in a device like amia.
amia is a device for helping two people keep in touch: think academic couples separated by long distances, a spouse who travels frequently, and so on. The device has two main modes: passive and active. In passive mode, microphones on one device pick up the ambient noise level and transmit it to the companion device. A band that encircles amia glows in response to the ambient noise level at the other device. In this way, we have a means of indicating the presence of the other person without being too intrusive.
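To make the passive mode concrete, here is a minimal sketch of the noise-to-glow loop, assuming hypothetical read_ambient_level() and set_band_glow() wrappers around the hardware (the real device runs on microcontrollers; this is illustrative desktop code, not the shipped firmware):

    import random
    import time

    def read_ambient_level():
        # stand-in for the microphone driver; returns 0.0-1.0
        return random.random()

    def set_band_glow(brightness):
        # stand-in for the LED band driver; accepts 0.0-1.0
        print("band glow: %.2f" % brightness)

    def passive_loop(send_level, recv_level, period_s=0.5):
        # sample local noise, share it, and glow at the companion's level
        smoothed = 0.0
        while True:
            send_level(read_ambient_level())
            # smooth the remote level so the glow changes gently
            smoothed = 0.9 * smoothed + 0.1 * recv_level()
            set_band_glow(smoothed)
            time.sleep(period_s)

The smoothing keeps the glow from flickering with every burst of noise, in keeping with the goal of staying unobtrusive.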
The second mode, the active mode, is made up of a number of components. First, heat sensors on the device activate when a person handles it; this information is transferred to the companion device and translated into an increasingly “reddish” color as the temperature on the handled device rises. Second, capacitive sensors pick up hand motions across the outside of the device, which are converted into pulses or sequences of light; the user can then “compose” sequences of light patterns to send to the companion device, allowing for abstract interpersonal communication. Finally, actuators on the surface of the devices can be pushed in or popped out; pushing one in on one device pops out the corresponding actuator on the companion device. Users can thus, perhaps, send messages or play games using these push-in, pop-out components.
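As an illustration of the heat-sensing component, one plausible mapping from the companion’s temperature reading to a reddish glow might look like the following; the cool/warm endpoints are assumptions, not calibrated values:

    def temperature_to_color(temp_c, cool=20.0, warm=37.0):
        # blend from neutral white toward red as temp_c rises;
        # the cool/warm endpoints are guesses, not measured values
        t = max(0.0, min(1.0, (temp_c - cool) / (warm - cool)))
        fade = int(255 * (1.0 - t))
        return (255, fade, fade)  # (r, g, b)

    print(temperature_to_color(22.0))  # nearly white: device barely touched
    print(temperature_to_color(36.0))  # deep red: device held in the hand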
The goal of this project was to find a way for two people separated by distance to keep in contact non-verbally: cell phones and IM are prevalent, but they miss out on many of the nuances of in-person communication.
uCs, copper, tilt sensors, Tangible Media with Hiroshi Ishii
We introduce a new framework called DRIP (Drinking Real-time Information Protocol) for the display, consumption, and sharing of digital information by infusing liquid with digital bits. Presently, the process of information acquisition is integrated into our everyday lives, while the process of information sharing has become more distant and impersonal. By merging the affordances of beverage containers and digital information, we aim to bring back the social component of face-to-face information sharing while also creating an environment for serendipitous interactions. The DRIP platform has three main components. First, DRIP enables people to attach digital bits to specially designed computer-mediated beverage containers. Second, embedded within the beverage containers are displays that allow people to view and browse the attached information. Third, through intuitive drinking gestures, such as stirring and toasting, one can alter or exchange information with others. The milieus we seek for implementation are those that inherently foster interpersonal exchange: coffee shops, teahouses, and cocktail lounges.
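As a sketch of how the gesture layer might behave, the following toy code dispatches the two gestures named above; the container format and handler names are invented for illustration only:

    def on_stir(cup):
        # stirring browses the attached bits on the cup's display
        cup["view"] = (cup["view"] + 1) % len(cup["bits"])

    def on_toast(cup, other):
        # toasting exchanges the currently displayed bits between cups
        a, b = cup["view"], other["view"]
        cup["bits"][a], other["bits"][b] = other["bits"][b], cup["bits"][a]

    cup1 = {"bits": ["article", "photo"], "view": 0}
    cup2 = {"bits": ["song"], "view": 0}
    on_stir(cup1)          # cup1 now displays "photo"
    on_toast(cup1, cup2)   # swap "photo" and "song"
    print(cup1["bits"], cup2["bits"])  # ['article', 'song'] ['photo']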
That people have emotional responses to music is a truism. However, we have little understanding of the ways in which music brings about these emotions; indeed, we lack good ways of measuring these responses quantitatively. As an early step in this area, we devised a listening experiment with a novel response paradigm. Listeners chose from a set of around twenty emotional descriptors, selecting a strength value for each chosen word. Importantly, we did not prevent the listener from selecting conflicting words, or limit her to only one choice. We then used unsupervised machine learning techniques to explore the space of responses. Early results show good agreement with prior studies, but with the potential for more nuanced understanding. We plan to extend this work by considering a broader space of factors influencing emotional response.
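To illustrate the kind of analysis involved, the snippet below clusters descriptor-strength vectors with k-means. The study does not name a specific algorithm, so k-means is a stand-in here, and the descriptors and data are invented:

    import numpy as np
    from sklearn.cluster import KMeans

    # each response: a strength per descriptor, 0.0 if the word was not chosen
    descriptors = ["joyful", "tense", "sad", "calm"]  # abbreviated, invented list
    responses = np.array([
        [0.9, 0.0, 0.0, 0.6],  # listener 1: joyful and calm
        [0.0, 0.8, 0.7, 0.0],  # listener 2: tense and sad (conflicts allowed)
        [0.8, 0.1, 0.0, 0.7],  # listener 3: close to listener 1
    ])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(responses)
    print(km.labels_)           # cluster assignment per listener
    print(km.cluster_centers_)  # prototypical response per cluster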
Beginning in the early twentieth century, composition branched out into a variety of new representations, the most common being the graphical score. Cage’s Variations II is a prime example, utilizing only dots and lines as its basis. We have created an interactive version of Cage’s piece, called here Variations 10b, where a performer can change the score and get immediate feedback as to the result. We hope that both listeners and performers will develop a more nuanced understanding of the score through the use of the interface.
1. The graphical score. Within the “variations10b” directory is a set of folders containing the graphical score application for Windows, OS X, and Linux. The Processing source code is also included; note that to run it you will need the OSC and Fullscreen libraries.
2. The sound server. The sounds for Variations 10b are generated using scanned synthesis in csound. I originally chose scanned synthesis both for its characteristic timbre and for more pedestrian pedagogical reasons. Unfortunately, the version of csound that comes with most recent Ubuntu distributions does not include the scanned synthesis opcodes. In those cases you will have to compile csound from scratch, something that is more difficult than it should be, and that is beyond my ability to explain here.
In any case, to configure the sound server you will need to change at least two variables in “variations10bConfig.ini”: the location of csound, as well as its command-line options. Within this configuration file you can also change the scanned synthesis table; included with Variations 10b are two options, the cylinder and the torus. Beware of writing your own table! It is very easy to create one that quickly “explodes” to “infinity”!
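A hypothetical sample of what such a configuration might look like (the actual key names in the distributed file may differ):

    ; variations10bConfig.ini -- a hypothetical sample; the actual key
    ; names in the distributed file may differ
    [csound]
    csound_location = /usr/local/bin/csound
    command_line_options = -odac -d

    [scanned_synthesis]
    table = cylinder   ; the other included option is "torus"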
To run Variations 10b, do the following:
1. Start the sound server: “python variations10b.py”
2. Start the graphical score by either running one of the pre-compiled programs within the “variations10b” directory, or running the source code from within Processing.
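For the curious, here is a rough sketch of the general shape of such a sound server: read the configuration, launch csound, and listen for OSC messages from the graphical score. This is not the shipped variations10b.py; the config keys, the orchestra file name, the OSC port, and the message handling are all assumptions, and it uses the python-osc package:

    import configparser
    import subprocess
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    # read the hypothetical config sketched above
    cfg = configparser.ConfigParser()
    cfg.read("variations10bConfig.ini")
    csound_cmd = [cfg["csound"]["csound_location"]]
    csound_cmd += cfg["csound"]["command_line_options"].split()
    csound_cmd.append("variations10b.csd")  # hypothetical orchestra/score file

    # launch csound as a child process
    csound = subprocess.Popen(csound_cmd)

    def on_score_event(address, *args):
        # in a real server these would be forwarded on to csound
        print("score event:", address, args)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(on_score_event)
    # port 9000 is an assumption; the Processing score would send here
    BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()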
The code in Variations 10b is available under the GNU GPL v3 (http://www.gnu.org/copyleft/gpl.html) with the following modifications:
The words “you”, “licensee”, and “recipient” are redefined as follows: “you”, “licensee”, and “recipient” mean anyone who is not an EXCLUDEDPERSON. An EXCLUDEDPERSON is any individual, group, unit, component, synergistic amalgamation, cash-cow, chunk, CEO, CFO, worker, or organization of a corporation that is a member, as of the date of acquisition of this software, of the Fortune 1000 list of the world’s largest businesses. (See http://money.cnn.com/magazines/fortune/global500/2008/full_list/ for an example of the top 500.) An EXCLUDEDPERSON also includes anyone working in a contractor, subcontractor, slave, or freelance capacity for any member of the Fortune 1000 list of the world’s largest businesses.