Eric Cantor (R), the incoming House majority leader, is asking people to look for ‘wasteful’ National Science Foundation (NSF) funding. In his view, this would include projects that can be found using the keywords “success, culture, media, games, social norm, lawyers, museum, leisure, stimulus”. Cantor asks people to search for these keywords on the NSF website, make note of the offending award numbers, and submit them to a web-based form. This is an instance of so-called “crowd-sourcing” being used against the very researchers who are key in developing and studying this phenomenon.
I have written a simple script to upload your own “suggestions” to this form. These suggestions consist of texts such as Alice’s Adventures in Wonderland, Capital, the Communist Manifesto, and works by de Sade. Additionally, the uploads carry Referer headers such as “http://let.the.air.force.have.a.bake.sale.to.raise.money.gov” and “http://learn.about.research.before.you.cut.what.you.dont.know.gov”. The project follows in a long line of similar interventions, such as FloodNet by EDT and the b.a.n.g. lab.
Note: the script that processes the results of the form on Cantor’s site is actually hosted on the personal site of Matt Lira, a well-known technical operative of the GOP. Thus the script never connects to any .gov website.
The script and accompanying text files can be downloaded here. All you need to run it is Python 2.5 or higher. Comments at the top of the file explain any changes you might want to make.
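The core mechanics are simple: build a POST request carrying the text as the form body, and attach a custom Referer header. A minimal Python 3 sketch of this idea follows — the endpoint URL and form field name here are placeholders, not the ones the actual script targets (those would have to be read from the form’s HTML source):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint -- the real form's URL and field names
# would have to be taken from the page's HTML source.
FORM_URL = "http://example.com/submit"

def build_submission(suggestion_text, referer):
    """Build a POST request whose body carries the suggestion text
    and whose Referer header is set to an arbitrary URL."""
    data = urlencode({"suggestion": suggestion_text}).encode("utf-8")
    return Request(FORM_URL, data=data, headers={
        "Referer": referer,
        "User-Agent": "Mozilla/5.0",
    })

req = build_submission(
    "Alice was beginning to get very tired of sitting by her sister...",
    "http://let.the.air.force.have.a.bake.sale.to.raise.money.gov",
)
# The request could then be sent with urllib.request.urlopen(req).
```

Because `urllib` sets Referer from whatever string you hand it, the spoofed editorializing URLs ride along with every upload.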
That people have emotional responses to music is a truism. However, we have little understanding of the ways in which music brings about these emotions. Indeed, we lack decent ways to measure these responses in a quantitative way. As an early step in this area, we devised a listening experiment with a novel response paradigm. Listeners chose from a set of around twenty emotional descriptors, selecting a strength value for each chosen word. Importantly, we did not prevent the listener from selecting conflicting words, or limit her to only one choice. We then used unsupervised machine learning techniques to explore the space of responses. Early results show good agreement with prior studies, but with the potential for more nuanced understanding. We plan to extend this work into considering a broader space of influencing factors on emotional response.
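To make the response paradigm concrete, here is a toy sketch of the unsupervised step: each listener response is a vector of strength values over the descriptor words, and a simple k-means clustering groups similar responses. The three descriptor words, the data, and k=2 are illustrative assumptions only — the actual study used around twenty descriptors:

```python
# Toy response vectors: strength values (0-1) for three hypothetical
# descriptors ("happy", "sad", "tense"); the real study used ~20.
responses = [
    [0.9, 0.1, 0.2], [0.8, 0.0, 0.3], [0.85, 0.2, 0.1],
    [0.1, 0.9, 0.7], [0.2, 0.8, 0.8], [0.0, 0.95, 0.6],
]

def dist2(p, q):
    """Squared Euclidean distance between two response vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def farthest_seed(points, k):
    """Deterministic seeding: start at the first point, then
    repeatedly add the point farthest from all chosen centroids."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points,
            key=lambda p: min(dist2(p, c) for c in centroids)))
    return centroids

def kmeans(points, k, iters=20):
    centroids = farthest_seed(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters, centroids

clusters, centroids = kmeans(responses, k=2)
```

On this toy data the two clusters recover the “happy” and “sad” response groups; with real data, the interesting cases are exactly the responses carrying conflicting descriptors, which a clustering over the full vector space can still place meaningfully.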
Musical recordings are final: once put on disk all of the messiness of the production process is smoothed away. Gone are multiple takes, different interpretations, other options. Is there a way to recapture some of these nuances, to enable a different listening experience each time you hear the “same” piece?
Mutable Recordings provides some answers to these questions. It is a software interface that splices together excerpts based on the listener’s choices.
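The underlying idea can be sketched in a few lines: treat a recording as a sequence of segments, keep the alternative takes of each segment, and assemble a playback order by honoring the listener’s choices where given and picking at random elsewhere. The data model and names below are hypothetical, not the actual Mutable Recordings implementation:

```python
import random

# Hypothetical data model: a recording is a list of segments, each
# with several alternative takes (labels standing in for audio clips).
recording = [
    ["intro_take1", "intro_take2"],
    ["verse_take1", "verse_take2", "verse_take3"],
    ["coda_take1", "coda_take2"],
]

def splice(segments, choices=None, seed=None):
    """Return one playback sequence: use the listener's choice for a
    segment when given, otherwise pick one of its takes at random."""
    rng = random.Random(seed)
    choices = choices or {}
    return [takes[choices.get(i, rng.randrange(len(takes)))]
            for i, takes in enumerate(segments)]

# Listener pins the verse to its third take; intro and coda vary
# from one listening to the next.
sequence = splice(recording, choices={1: 2})
```

Each call to `splice` yields a different realization of the “same” piece, which is exactly the varying listening experience the project is after.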
Development of Mutable Recordings is on hold, but please contact me if you would like more information.