FLEFF Labs: Open Space Lab was a half-semester, one-credit course co-taught with Claudia Pederson. The purpose of the course was to introduce students to alternative ways of conceptualizing social media. From the syllabus:
This one credit pass/fail course explores the concept of open space through a range of theories and practices of social media, social networking, emerging technologies, user generated content, and other structures. Students will engage in group projects that combine conceptual investigations of open space modes with digital interfaces and social media.
Students will explore the concept of space through five “variables”, namely: responsive environments, public, utopian, commercial, and ecologic spaces. Students will work in groups toward final projects, each addressing one of these concepts. The role of the instructors is to provide students with conceptual and practical guidance toward the completion of class projects. The final works will be permanently displayed on the FLEFF website.
In the seven sessions of the course students read key theoretical and artistic texts, learned about current artistic practices, participated in remote presentations with contemporary artists, and created prototypes of their own projects.
Eric Cantor (R), the incoming House majority leader, is asking people to look for ‘wasteful’ National Science Foundation (NSF) funding. In his view, this would include projects that can be found using the keywords “success, culture, media, games, social norm, lawyers, museum, leisure, stimulus”. Cantor asks people to search for these keywords on the NSF website, note the offending award numbers, and submit them to a web-based form. This is an instance of so-called “crowd-sourcing” being used against the very researchers who are key to developing and studying the phenomenon.
I have written a simple script to upload your own “suggestions” to this form. These suggestions consist of texts such as Alice’s Adventures in Wonderland, Capital, the Communist Manifesto, and works by De Sade. Additionally, the uploads arrive with Referer headers such as “http://let.the.air.force.have.a.bake.sale.to.raise.money.gov” and “http://learn.about.research.before.you.cut.what.you.dont.know.gov”. The project follows in a long line of similar interventions, such as FloodNet by EDT and b.a.n.g. lab.
Note: the script that processes the results of the form on Cantor’s site is actually hosted on the personal site of Matt Lira, a well-known technical operative of the GOP. Thus this script never connects to any .gov website.
The script and accompanying text files can be downloaded here. All you need to run it is python 2.5 or higher. Comments at the top of the file explain any changes you might want to make.
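For the curious, the general mechanics look something like the following sketch, written for current Python (the actual script targets python 2.5). The form URL and field name here are placeholders of my own, not those used by the real script; only the Referer values come from the project itself.

```python
# Sketch only: FORM_URL and the "suggestion" field name are placeholders,
# not the actual endpoint or field used by the script.
import urllib.parse
import urllib.request

FORM_URL = "http://example.com/form-handler"  # placeholder endpoint
REFERER = "http://let.the.air.force.have.a.bake.sale.to.raise.money.gov"

def build_request(text):
    """Build a POST request carrying one text "suggestion" along with
    a custom Referer header."""
    data = urllib.parse.urlencode({"suggestion": text}).encode("utf-8")
    return urllib.request.Request(FORM_URL, data=data,
                                  headers={"Referer": REFERER})

def submit(text):
    """Send the suggestion and return the HTTP status code."""
    with urllib.request.urlopen(build_request(text)) as resp:
        return resp.status
```

The only moving parts are the form's field names and the spoofed Referer; everything else is a plain HTTP POST.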
Fourth, the production of democratic subjects is a challenge; if it has been done before, it has only been achieved temporarily. The production is neither certain, secure, nor robust. There are lots of glitches and faux democrats out there on the market.
I agree with them for the most part, even if I don’t see the requirement for the “Party” she suggests in the comments. I’d rather take things from David Graeber’s perspective, that functioning democracies can exist without recourse to a Party structure; see his Fragments of an Anarchist Anthropology for some examples. While the obvious rejoinder to such a suggestion is, “How do things work at scale?”, that question presupposes large scale as a necessary condition. I don’t think it is a requirement, but that is fodder for a much longer post.
voice, stepper motors, laser cutter, uCs, LEDs, water jet cutter
The boundary between subject and object is becoming ever more blurred by the creation of new types of computational objects. It is especially when these objects take the form of robotic creatures that we come to question the powerful impact of the object on the person. Couple this with the expression of internal, unspoken experience through the making of non-speech sounds and we have a situation that demands new thoughts and new methodologies. This thesis works through these questions via the design and study of syngvab, a robotic marionette that moves in response to human non-speech vocal sounds. I draw from the world of puppetry and performing objects in the creation of syngvab the object and its stage, showing how this old tradition is directly relevant for the development of non-anthropomorphic, non-zoomorphic robotic creatures. I show how this mongrel of an object requires different methodologies of study, pulling from actor-network theory to examine syngvab in a symmetric manner with the human participants. The results of a case study interaction with syngvab support the contention that non-speech sounds as drawn out by a robotic creature are a potent means of exploring and investigating the unspeakable.
syngvab and syngvaa were my Master’s thesis projects at the MIT Media Lab.
plywood, motors, radio transmission modules, uCs, h-bridges
The division between subject and object, agent and non-agent, has consistently been dubious philosophically, with physical manifestations of automatons the exception rather than the rule. Now we are increasingly faced with computational objects and relational artifacts that put into question cherished notions of human agency and intentionality. syngva is a creature that develops, through evolutionary processes, idiosyncratic movements in response to singing. syngva serves two parallel roles. For the user, syngva enables a form of non-linguistic reflection, serving as a catalyst for novel vocal behaviors provoked by the motions of the object. For myself, syngva acts as a sociological probe, allowing me to study in-situ relationship formation, agency, and control in response to an “intelligent” creature. The evaluation approach draws heavily from actor-network theory (ANT), a methodology that in part places objects on the same ontological level as human agents. This re-centering of agency intimates a different way of looking at the person-object dyad, one that focuses on the interactions themselves without reference to pre-existing theories.
A note on naming: The word “syngva” comes from Old Norse, meaning simply “to sing”. I see this project taking a number of forms as it develops. Rather than enumerating each new revision with the suffixes “Version 1.0”, “Version 2.0”, and so on, I have decided to add letters to the end of the word instead, moving to the next letter of the alphabet with each new revision. Thus the first version is “syngvaa”, the second is “syngvab”, and so on.
froi is a suite of programs for the analysis of functional magnetic resonance imaging (fMRI) data using a region-of-interest (ROI) approach. Users can perform all necessary functions with froi, from creating and modifying ROIs, to using ROIs as a way to constrain analyses, to combining ROIs in a number of different ways.
froi is used in nearly all research of the Kanwisher Lab, with the results of analyses performed with froi published in journals such as The Journal of Neuroscience. froi is also in use in other research laboratories.
froi is currently written in a combination of perl, shell scripts, and matlab; a rewrite in python and C is planned.
blender, laser cutter, Tangible Media with Hiroshi Ishii
amia is the first device within a framework that we call “amiable media”, or technologies that aim at integrating the physical and digital worlds to give rise to new forms of interpersonal communication.
We did a simple user survey beforehand to get an idea of how people communicate with loved ones, what that form of communication misses from face-to-face communication, and whether or not people would be interested in a device like amia.
amia is a device for helping two people keep in touch: think academic couples separated by long distances, a spouse who travels frequently, and so on. The device has two main modes: passive and active. In passive mode, microphones on one device pick up the ambient noise level and transmit this level to the companion device. A band that encircles amia glows in response to the ambient noise level of the other device. In this way, we have a means of indicating the presence of the other person without being too intrusive.
The second mode is the active mode, made up of a number of components. First, heat sensors on each device activate when a person handles it; this information is transferred to the companion device and translated into a more “reddish” color as the temperature on the other device increases. Second, capacitive sensors pick up hand motions across the outside of the device, which are converted into pulses or sequences of light; the user can then “compose” sequences of light patterns to send to the companion device, thus allowing for abstract interpersonal communication. Finally, actuators on the surface of the devices can be pushed in or popped out; pushing in on one device leads to popping out of the corresponding actuator on the companion device. Thus the users can, perhaps, send messages or play games using these push-in, pop-out components.
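To give a sense of the passive mode's noise-to-glow mapping, here is a small illustrative sketch; the smoothing constant and brightness range are my own, not taken from the actual device.

```python
# Illustrative mapping from the companion device's ambient noise level
# (normalized to 0.0-1.0) to an LED brightness value (0-255).
# alpha controls exponential smoothing so the band fades rather than flickers.

def glow_brightness(noise_level, previous, alpha=0.2):
    """Return the next brightness step toward the level implied by the
    companion device's ambient noise."""
    level = max(0.0, min(1.0, noise_level))  # clamp sensor reading
    target = int(level * 255)
    # move a fraction of the way toward the target each update
    return int(previous + alpha * (target - previous))
```

Calling this once per sensor update gives the gentle, non-intrusive glow described above rather than a band that jumps with every sound.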
The goal of this project was to find a way for two people separated by distance to keep in contact non-verbally: cell phones and IM are prevalent, yet they miss many of the nuances of real-life interpersonal communication.
uCs, copper, tilt sensors, Tangible Media with Hiroshi Ishii
We introduce a new framework called DRIP (Drinking Real-time Information Protocol) for the display, consumption, and sharing of digital information by infusing liquid with digital bits. Presently, the process of information acquisition is integrated into our everyday lives, while the process of information sharing has become more distant and impersonal. By merging the affordances of beverage containers and digital information, we aim to bring back the social component of face-to-face information sharing while also creating an environment for serendipitous interactions. The DRIP platform has three main components. First, DRIP enables people to attach digital bits to specially designed computer-mediated beverage containers. Second, embedded within the beverage containers are displays that allow people to view and browse the attached information. Third, through intuitive drinking gestures, such as stirring and toasting, one can alter or exchange information with others. The milieus we seek for implementation are those that inherently foster interpersonal exchanges, such as coffee shops, teahouses, and cocktail lounges.
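The three components can be pictured as plain data operations; the class and method names in this sketch are my own shorthand, not part of the DRIP design.

```python
# Toy model of DRIP's interactions: attach bits to a container,
# stir to alter their order, toast to exchange with another container.
import random

class Container:
    def __init__(self):
        self.bits = []              # attached digital information

    def attach(self, item):
        """Component 1: attach a digital bit to the container."""
        self.bits.append(item)

    def stir(self, rng=random):
        """Component 3a: stirring alters the displayed information."""
        rng.shuffle(self.bits)

    def toast(self, other):
        """Component 3b: toasting exchanges information between two
        containers."""
        self.bits, other.bits = other.bits, self.bits
```

Component 2, the embedded display, would simply render `bits` on the container's surface; the social exchange lives entirely in `toast`.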
That people have emotional responses to music is a truism. However, we have little understanding of the ways in which music brings about these emotions; indeed, we lack decent ways to measure these responses quantitatively. As an early step in this area, we devised a listening experiment with a novel response paradigm. Listeners chose from a set of around twenty emotional descriptors, selecting a strength value for each chosen word. Importantly, we did not prevent the listener from selecting conflicting words, or limit her to only one choice. We then used unsupervised machine learning techniques to explore the space of responses. Early results show good agreement with prior studies, but with the potential for more nuanced understanding. We plan to extend this work to consider a broader space of factors influencing emotional response.
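As a flavor of the analysis, each response can be treated as a vector of strength values over the descriptor set and then grouped without supervision. The descriptors, data, and plain k-means below are illustrative stand-ins, not the study's actual descriptor set or techniques.

```python
# Toy unsupervised grouping of listener responses. Each response is a
# vector of strength values (0-1) over a small, invented descriptor set.
import math

DESCRIPTORS = ["joyful", "tense", "sad", "calm"]  # illustrative only

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign(points, centroids):
    """Label each response with the index of its nearest centroid."""
    return [min(range(len(centroids)),
                key=lambda c: distance(p, centroids[c])) for p in points]

def kmeans(points, k, iters=10):
    """Minimal k-means: naive initialization from the first k points."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        labels = assign(points, centroids)
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return assign(points, centroids)
```

Crucially, because listeners may select several (even conflicting) descriptors, the vectors are not one-hot, and clustering can surface mixed-emotion responses that a forced single choice would hide.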
Firefox extensions, USASpending.gov, Google News Search, PR News Search, Google Image Search, IRS 990 forms, McCoy
MAICgregator is a Firefox extension that aggregates information about colleges and universities embedded in the military-academic-industrial complex (MAIC). It searches government funding databases, private news sources, private press releases, and public information about trustees to produce a radical cartography of the modern university via the replacement or overlay of this information on academic websites. This is a necessary activity in light of the contemporary financial “crisis”.