I’m excited that Alexander Galloway will be coming to Cornell to give a talk this Wednesday, 2 March, at 4:30PM in the AD White House. The title for his lecture is “Are Some Things Unrepresentable?”, which dovetails nicely with my work on the voice and robotics. (And look forward to more on this idea soon, as I have an exhibition opening the last week of March…)
Many have written about the impending demise of Delicious, one of the first so-called “Web 2.0” companies, and one of the few cloud-based services that I actually use. Delicious was useful because of its simplicity: a pared-down interface, clean bookmarklets and browser plugins, and the ability to easily tag new bookmarks. I used to use it extensively to find new sites, but stopped doing that a few years ago. Lately it had become simply a way to store bookmarks in a place accessible anywhere I had an internet connection.
But as we have expected for a while, and are witnessing at the moment, the cloud is disappearing, its solidity nothing but a mirage. We are in for trouble in the future, methinks, because so many institutions, including universities, have moved their online activities from local hosting to this mythical “cloud”.
The point of this post is not to go into the deeper issues here, but to explain, in some detail, how you can set up a Delicious-like interface on your own server. I’m going to be using Drupal, one of the best-known free software content management systems (CMS). Doing this requires a certain amount of technical ability, which I won’t be able to teach here. But if you’re interested, read on.
While there are Drupal modules for creating weblinks, in my brief testing I found them both too cumbersome and too limited for what I wanted to do. In any event, I wanted to see how easy it might be to do this using the tools Drupal provides.
It turns out that we can easily create a Bookmarks content type using CCK. The two additional fields I created are for the “URL” and a “Post Date” (so that we can ensure we have the correct dates for the bookmarks when we import our data below). The first is of type “text” and the second is of type “date”. The exported CCK code can be found below.
Update: By using the Unique Field module, we can require that the URL is unique, redirecting you to the existing bookmark if desired. This helps prevent posting of identical bookmarks, and might remind you to look through your bookmarks if you keep saving things that you’ve already saved…not that that just happened to me.
I decided to create a separate vocabulary for my bookmark tags, and attached it to the Bookmarks content type. Make a note of the vocabulary ID, as we will need it later.
To easily import our bookmarks, it’s best to use Drupal’s Services module. Using Services you can add nodes, fetch taxonomy terms, and so on over XML-RPC or JSON. Setting up and configuring Services is beyond the scope of this post, but the documentation is actually fairly good.
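Since the details depend entirely on how your content type and Services endpoint are configured, here is only a sketch of what a call from Python might look like. The `bookmarks` type name and the `field_url` and `field_post_date` field names mirror the content type created above, and the `node.save` method reflects a typical Drupal 6 Services setup; treat all of them as assumptions:

```python
import xmlrpc.client  # client side of the XML-RPC interface Services exposes

def make_bookmark_node(title, url, posted, body, tids):
    """Build a node structure for one bookmark. The field names here
    (field_url, field_post_date) are assumptions; adjust them to match
    your own CCK configuration."""
    return {
        "type": "bookmarks",
        "title": title,
        "body": body,
        "field_url": [{"value": url}],
        "field_post_date": [{"value": posted}],
        # Drupal 6 expects taxonomy terms keyed by term ID.
        "taxonomy": {str(tid): str(tid) for tid in tids},
    }

# Posting would then look roughly like this (needs a live endpoint):
# server = xmlrpc.client.ServerProxy("http://example.com/services/xmlrpc")
# server.node.save(session_id, make_bookmark_node(...))
```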
Next we need to import our bookmarks into Drupal. To begin, export your bookmarks from Delicious: on Mac OS X or Linux this is a single command against the Delicious API, run with your Delicious username and password. Your bookmarks will then be downloaded and saved as an XML file.
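As a sketch of that export step: the classic Delicious v1 API served the full posts list over HTTP basic auth, so the same request can also be built in Python. The endpoint and the output filename here are assumptions based on that API:

```python
import base64
import urllib.request

# Equivalent to something like:
#   curl https://<username>:<password>@api.del.icio.us/v1/posts/all -o bookmarks.xml

def build_export_request(username, password):
    """Build an authenticated request for the full bookmarks export."""
    req = urllib.request.Request("https://api.del.icio.us/v1/posts/all")
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    return req

# Actually fetching requires a live account:
# with urllib.request.urlopen(build_export_request("me", "secret")) as r:
#     open("bookmarks.xml", "wb").write(r.read())
```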
We’re now ready to parse this XML file and post it to our new Bookmarks content type. I wrote a dirty Python script called parseDelicious.py to do this; you can find the link to it at the bottom of the page. It requires the lxml library for etree. The script is probably highly inefficient, especially in the parts that check whether or not existing tags are already in your Drupal taxonomy. The script takes a number of options that are explained by calling it with the -h switch. Here <server> is the hostname of the Drupal server (without /services/xmlrpc), <username> is the username of a user on the Drupal server, <password> is your password, <vid> is the numeric vocabulary ID of the desired vocabulary for your Delicious tags, and <bookmarks xml> is the path to the XML file just downloaded. Depending on how many bookmarks you have, this might take a long time.
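The heart of such a script is small. In the Delicious export each bookmark is a `<post>` element whose `href`, `description`, `tag`, and `time` attributes carry everything we need; the original script uses lxml, but for this sketch the standard library’s ElementTree offers the same interface:

```python
import xml.etree.ElementTree as ET

def parse_bookmarks(source):
    """Yield (title, url, tags, posted, notes) tuples from a Delicious
    posts/all export (a file path or file-like object)."""
    tree = ET.parse(source)
    for post in tree.getroot().iter("post"):
        yield (
            post.get("description"),      # bookmark title
            post.get("href"),             # the URL itself
            post.get("tag", "").split(),  # space-separated tags
            post.get("time"),             # ISO 8601 post date
            post.get("extended", ""),     # optional notes
        )
```

Each tuple can then be turned into a node and pushed over XML-RPC, after looking up or creating the corresponding taxonomy terms.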
Next, let’s create a simple view that will give us our bookmarks in a format close to that of Delicious. Here we use the Views module, as well as Semantic Views, to let us style the resulting data. I’ve designed the view to be easy to style in CSS. The logic is that we pull the node title, the CCK post date, the CCK URL, the node body, and the taxonomy terms; we then link the node title to the CCK URL, set up a pager, and attach useful CSS classes to each element. The exported view is at the bottom of the page.
It is also possible to create a tag cloud of all our bookmark tags using the Tagadelic module.
Finally, we can create a simple bookmarklet for posting to your site. To do this we need the Prepopulate module. The bookmarklet code below is modified slightly from the example given in the documentation for the module.
Simply change “example.com” to the name of your host. The names of the values are based on the content type we created above.
You can see the bookmarks that I’ve imported from Delicious, with minimal CSS formatting, here.
Of course this method means that we only host the bookmarks on our own site, thus losing all of the social capabilities of Delicious. The challenge for the future, given the disappearance of the cloud, is how to store data locally but access it distributively. There is no reason that something like Delicious could not be implemented using a distributed hash table, preventing any one company from becoming a single point of failure. It seems possible, then, to write a program that stores bookmarks as hashes in this table, and then builds on top of that the necessary metadata such as tags, users, and groups. This would parallel the development of thimbl, also using public-key encryption to provide security. But this is a project for more than a single day; if you are interested, please let me know.
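To gesture at how such a system might begin, here is a toy sketch in which an ordinary dict stands in for the distributed table and the SHA-1 hash of a URL serves as its key; everything here is hypothetical:

```python
import hashlib

def bookmark_key(url):
    """A DHT key for a bookmark: the SHA-1 digest of its URL."""
    return hashlib.sha1(url.encode("utf-8")).hexdigest()

def store_bookmark(dht, url, tags, user):
    """Record bookmark metadata under the URL's hash. In a real system
    `dht` would be a distributed hash table shared across peers, and
    records would be signed with each user's public key."""
    key = bookmark_key(url)
    record = dht.setdefault(key, {"url": url, "tags": set(), "users": set()})
    record["tags"].update(tags)
    record["users"].add(user)
    return key
```

Because the key is derived from the URL, every peer that saves the same page converges on the same record, which is where the social layer of tags, users, and groups would accrete.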
2-4 Dec 2010
Eric Cantor (R), the incoming House majority leader, is asking people to look for ‘wasteful’ National Science Foundation (NSF) funding. In his view, this would include projects that can be found using the keywords “success, culture, media, games, social norm, lawyers, museum, leisure, stimulus”. Cantor asks people to search for these keywords on the NSF website, make note of the offending award numbers, and submit them to a web-based form. This is an instance of so-called “crowd-sourcing” being used against the very researchers who are key in developing and studying this phenomenon.
I have written a simple script to upload your own “suggestions” to this form. These suggestions consist of texts such as Alice’s Adventures in Wonderland, Capital, the Communist Manifesto, and works by De Sade. Additionally, the uploads come from referrers such as “http://let.the.air.force.have.a.bake.sale.to.raise.money.gov” and “http://learn.about.research.before.you.cut.what.you.dont.know.gov”. The project follows in a long line of similar interventions such as the FloodNet by EDT and b.a.n.g. lab.
Note: the script that processes the results of the form on Cantor’s site is actually hosted on the personal site of Matt Lira, well-known technical operative of the GOP. Thus this script never connects to any .gov website.
The script and accompanying text files can be downloaded here. All you need to run it is Python 2.5 or higher. Comments at the top of the file explain any changes you might want to make.
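The mechanics are simple; a sketch of the core of such a script might look like the following, where the form URL and the `suggestion` field name are placeholders (the real ones are in the downloadable script):

```python
import urllib.parse
import urllib.request

def build_submission(form_url, text, referer):
    """Build a POST request carrying one 'suggestion', with a chosen
    Referer header. The field name and URL are placeholders."""
    data = urllib.parse.urlencode({"suggestion": text}).encode()
    req = urllib.request.Request(form_url, data=data)
    req.add_header("Referer", referer)
    return req

# Sending would then be (against a live form):
# urllib.request.urlopen(build_submission(FORM_URL, passage, fake_referer))
```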
Update: see the excellent post by Micha Cárdenas on Occupy Everything regarding the ongoing WikiLeaks situation and its relationship to student protests around the world.
Update 2: I’ve updated the code to be able to use Tor; if you have Tor running, just set TOR = True at the top of the file.
Apple has finally released its guidelines for “acceptable” applications in its App Store. Of course the document is itself covered by a non-disclosure agreement, but someone has helpfully posted it anyway. What’s interesting are some of the following items:
“2.7 Apps that download code in any way or form will be rejected”
This clause will still prevent Scratch from being approved; Scratch was, sadly, removed from the App Store earlier this year. What makes this such a problem is that Apple was once known for its educational software, and for the ability to program easily in BASIC from boot. Now new students cannot learn the fundamentals of programming on the device itself.
“14.1 Any app that is defamatory, offensive, mean-spirited, or likely to place the targeted individual or group in harm’s way will be rejected
14.2 Professional political satirists and humorists are exempt from the ban on offensive or mean-spirited commentary” (emphasis added)
This is due to a major spat earlier this year when Apple rejected an app by a Pulitzer Prize-winning cartoonist. Yet the devil is in the details, is it not? What is the definition of “professional”? Does it require one to win a Pulitzer? And what does this mean for people wanting to become better known as satirists?
“15.3 ‘Enemies’ within the context of a game cannot solely target a specific race, culture, a real government or corporation, or any other real entity” (emphasis added)
This one is really curious. Now your latest satirical game that targets BP/Goldman Sachs/Wal-Mart is going to be automatically rejected? Something like molleindustria’s McDonald’s video game would be rejected under these rules. It’s clear that with this clause Apple is preemptively shutting down an avenue for activists to develop applications for the iPhone and iPad.
As I’ve said many times before, the problem is less that Apple has made these restrictions; they are allowed to do so, however wrongheaded they might be. Rather, the problem is that so many academics lend their weight to Apple’s regime by continuing to buy their products (my Linux-running MacBook, purchased by my school years ago, will be the last Apple item I own), basing classes around programming for the iPhone or iPad, or giving away free iPads to incoming students. (It should not cost money to become a developer, as it does to join Apple’s developer program. That is not “open” in any way, shape, or form.) Apple’s ecosystem is becoming more and more closed, and as academics we should not be supporting that. Similarly, given these guidelines, journalists should not be ceding editorial control to a separate corporation and should avoid producing apps for Apple. Just Say No to the Apple.
The Accelerationism Conference is taking place soon. For more background on the term, see this post by k-punk, Benjamin Noys’ post, and a longer post by splintering bone ashes. Yet I have to ask the question:
Who is run over in accelerationism?
That we continue to hurtle our bodies in hunks of metal down roadways at extreme speeds with margins of inches is simply barbaric. And it will be accepted as such, someday. The erotic potentials notwithstanding, following Ballard. (But in my reading of his work, this is not something to be valorized.) If we want to see the accelerationism of capital at work today, then we only have to look at China. (To pick one example of many.) Edward Burtynsky’s photographs and the film Manufactured Landscapes provide visual confirmation. And Coco Fusco’s performances and texts on the exploitation of Latinas in maquiladoras provide a braking force to those who want to accelerate capital. Whose bodies are run over, left mutilated at the side of the road, as capital accelerates without control down the highway? As far as I can tell this is a subject that is not broached in the competing posts about desiring the acceleration of capital.
A sociologist should do a study about how so much of the work (at least that I read) comes from those in the UK. Is there something about the contemporary milieu of the UK, and of London in particular, that draws out these kinds of responses? What would this theory look like if it were situated from Detroit, New Orleans, Juarez, Chongqing, Mexico City, or elsewhere?
My interest in this comes from my reading of speculative realist thought, as it is so called, and my desire to engage with the libidinal aspects of Land, Lyotard, Irigaray, D+G, and others. Yet I have a profound worry about any project that would seem not to ask the question “Who is run over?” from the start. And what happens to those who are not prepared for the coming acceleration?
These days I am examining the trope of noise, not only as it is thrown about (problematically, and not in the good sense of that word) within sound studies, but also in the expanded sense used by Michel Serres in Genesis. How is noise capitalized, how does it exceed its bounds within information theory, how does noise perturb capital? How does it create its own perturbation theory that can be harnessed (without complete control) to create productive dysfunctions? Would the thrown wrench that stops the machine, the pulled emergency cord, be a better way to engage with the present acceleration of capital? This is one of the key questions for me at the moment: to understand how noise (and its ability, in a combinatory fashion, to call forth the sacred) can never be fully controlled, never fully divorced from the signal, but rather only guided, sent down other channels to recombine with the “signal” at some future point. What is this becoming-dysfunctional that guides the noisy, other than the work of the Yes Men? We are beginning to see the cracks in this methodology, of course, which only means we need to lower ourselves into them to see where they lead. And perhaps inside are options that do not leave too many, unheard, on the side of the road.
“We regret to announce that our Google scraper may have to be permanently retired, thanks to a change at Google. It depends on whether Google is willing to restore the simple interface that we’ve been scraping since Scroogle started five years ago. Actually, we’ve been using that interface for scraping since Google-Watch.org began in 2002.
This interface (here’s a sample from years ago) was remarkably stable all that time. During those eight years there were only about five changes that required some programming adjustments. Also, this interface was available at every Google data center in exactly the same form, which allowed us to use 700 IP addresses for Google.
That interface was at www.google.com/ie but on May 10, 2010 they took it down and inserted a redirect to /toolbar/ie8/sidebar.html. It used to have a search box, and the results it showed were generic during that entire time. It didn’t show the snippets unless you moused-over the links it produced (they were there for our program, so that was okay), and it has never had any ads. Our impression was that these results were from Google’s basic algorithms, and that extra features and ads were added on top of these generic results. Three years ago Google launched “Universal Search,” which meant that they added results from other Google services on their pages. But this simple interface we were using was not affected at all.
Now that interface is gone. It is not possible to continue Scroogle unless we have a simple interface that is stable. Google’s main consumer-oriented interface that they want everyone to use is too complex, and changes too frequently, to make our scraping operation possible.
Over the next few days we will attempt to contact Google and determine whether the old interface is gone as a matter of policy at Google, or if they simply have it hidden somewhere and will tell us where it is so that we can continue to use it.
Thank you for your support during these past five years. Check back in a week or so; if we don’t hear from Google by next week, I think we can all assume that Google would rather have no Scroogle, and no privacy for searchers, at all.”
While I cannot get on board the embracing of Thanatos and the acceleration of the deterritorialization of capital, Nick Land’s comments—from 1993—are eerily prescient. And this is on the heels of a day when a trading glitch wreaked havoc in the spectre of numbers changing in the memory banks of corporate computers.
The obsolete psychological category of ‘greed’ privatizes and moralizes addiction, as if the profit-seeking tropism of a transnational capitalism propagating itself through epidemic consumerism were intelligible in terms of personal subjective traits. Wanting more is the index of interlock with cyberpositive machinic processes, and not the expression of private idiosyncrasy. What could be more impersonal — disinterested — than a haut bourgeois capital expansion servo-mechanism striving to double $10 billion? And even these creatures are disappearing into silicon viro-finance automatisms, where massively distributed and anonymized human ownership has become as vacuously nominal as democratic sovereignty (478).
This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy’s resources. Digitocommodification is the index of a cyberpositively escalating technovirus, of the planetary technocapital singularity: a self-organizing insidious traumatism, virtually guiding the entire biological desiring-complex towards post-carbon replicator usurpation.
The reality principle tends to a consummation as the price system: a convergence of mathematico-scientific and monetary quantization, or technical and economic implementability. This is not a matter of an unknown quantity, but of a quantity that operates as a place-holder for the unknown, introducing the future as an abstract magnitude. Capital propagates virally in so far as money communicates addiction, replicating itself through host organisms whose boundaries it breaches, and whose desires it reprograms. It incrementally virtualizes production; demetallizing money in the direction of credit finance, and disactualizing productive force along the scale of machinic intelligence quotient. The dehumanizing convergence of these tendencies zeroes upon an integrated and automatized cyberpositive techno-economic intelligence at war with the macropod (479).
Land, N. “Machinic Desire”. Textual Practice, 1993, 7, 471-482.
I’ve known a little about the Cybernetic Culture Research Unit (CCRU) for a while. The CCRU comes up as an aside, usually linked to the Proper Names of Nick Land, Sadie Plant, Matthew Fuller, Kode9, Kodwo Eshun, and Mark Fisher, who has posted an historical article on the unit. Unfortunately the group website, http://ccru.net, doesn’t respond to requests anymore, and their Wikipedia page was just deleted by overzealous editors (how can something like this not be relevant or important if there are Wikipedia pages for a number of these same people?). Their work seems to be carried on in the Collapse journal, whose publisher is also releasing an anthology of Land’s writings this year. The importance of sound and music to their work is what I am most interested in at the moment (besides their jubilant merging of theory and fiction); see Fisher’s link for more details. I wish there were more written about the CCRU, but perhaps that is part of why it is so interesting…an organizational form specific to its moment in time that can serve as a reminder of possibility in these times of the closure of philosophy departments.