Lecture 003

[I'm Google](http://dinakelberman.tumblr.com/) by Dina Kelberman

The project is an (almost) infinitely scrolling website that shows a collection of pictures in a single column. Each subsequent picture is found by its visual similarity to the one before it, so the content of the images changes gradually, like an (almost) random walk through a latent space. It makes me wonder what kinds of applications can be built on the latent space of an image. The project, with its title "I'm Google", is open to a variety of interpretations. It looks like the author created the project using a fixed dataset and a trained image classification model. The project might be more interesting if it started with a random image instead of a fixed one, and it would gain interactivity if it allowed the user to upload an image as a starting point. Although this project is less interactive than "Image Atlas", it is more aesthetically pleasing to look at.
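The chaining mechanism described above can be sketched as a greedy nearest-neighbour walk over image embeddings. This is a minimal illustration under my own assumptions, not Kelberman's actual method: the random vectors stand in for whatever similarity index the project uses.

```python
import numpy as np

def similarity_walk(embeddings, start, steps):
    """Greedy walk through an embedding space: at each step, jump to
    the most similar image (by cosine similarity) not yet visited."""
    # Normalize rows so dot products equal cosine similarity.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    path, visited = [start], {start}
    current = start
    for _ in range(steps):
        sims = unit @ unit[current]
        sims[list(visited)] = -np.inf  # never revisit an image
        current = int(np.argmax(sims))
        path.append(current)
        visited.add(current)
    return path

# Toy demo: 100 random 8-dimensional "image embeddings".
emb = np.random.default_rng(42).normal(size=(100, 8))
print(similarity_walk(emb, start=0, steps=5))
```

Because each hop is only locally similar, a long walk can drift far from the starting image, which is exactly the gradual-change effect the piece produces.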

[Image Atlas](http://www.imageatlas.org/) by Aaron Swartz and Taryn Simon

This project is an image search engine whose results are filtered by the location of the search. It demonstrates how results for the same keyword can be vastly different, or surprisingly similar, across locations. From my interaction with the artwork, although I expected different skin colors when searching with the keyword "person", the results turned out to be not that different. I suspect it runs the query through a language translation API before searching Google Images, which might explain both the similarity and some strange phenomena. (Indeed, I validated my theory by searching Google with the same keyword: in Spanish, the word "person" is "persona", so the results turned out to be anime characters instead.) There are ways around this limitation: the location of the uploader could better predict the "style" of the images a search returns. The artwork also offers an option to "Sort results by GDP", but this defeats the point, since the search results are not at all representative of a country. Nevertheless, the work taught me that there is a trade-off between customizability and convenience when using any API. Like "I'm Google", it displays images in relation to some measure of similarity.
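My suspected translate-then-search pipeline can be sketched as follows. Everything here is hypothetical: the translation table and the `search_fn` parameter are stand-ins I invented for illustration, not Image Atlas's real implementation.

```python
# Hypothetical sketch of the suspected pipeline: translate the query
# into the target locale's language, then run an ordinary image search.
TRANSLATIONS = {
    ("person", "es"): "persona",  # drags in unrelated "Persona" results
    ("person", "de"): "Person",
}

def localized_query(keyword, locale):
    # Fall back to the original keyword when no translation is known.
    return TRANSLATIONS.get((keyword, locale), keyword)

def search_by_location(keyword, locale, search_fn):
    """Translate first, then search. The translation step, not the
    search itself, is what produces the strange locale-specific results."""
    return search_fn(localized_query(keyword, locale))

# Demo with a dummy search function that just echoes the query.
print(search_by_location("person", "es", lambda q: [f"images for {q!r}"]))
```

The point of the sketch is that the locale only influences the query string, not the ranking, which would explain why results across locations end up so similar.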

[David Horvitz](http://davidhorvitz.com/)

This work is a webpage with 38 buttons, each titled with the name of a book and each linking to an audio recording of single words and phrases read aloud in alphabetical order. The listening experience is good: the page does not tell you what it is reading or whether any meaning is attached to the words. Within the first minute, I noticed the words were alphabetically ordered, so I suspected it was reading from a dictionary. Into the second minute, I realized there were far fewer words than in an ordinary dictionary, which led me to suspect these were the sorted words of a book. The idea is that, just by listening to a series of ordered words, I can grasp the direction of the conversation in the book. The project could be improved as follows: instead of sorting and de-duplicating the words, it might be more interesting to sample words at random without removing duplicates, because de-duplication greatly reduces the appearance of a book's key phrases, which may not be desired. The choice of audio allows an entirely different experience from plain visual words. Like the works above, it presents material ordered by some measure of similarity (in this case, alphabetical order).
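The sorted, de-duplicated word list I describe above is simple to reproduce. A minimal sketch of my reconstruction, not Horvitz's actual code:

```python
import re

def sorted_unique_words(text):
    """Lower-case the text, extract the words, drop duplicates, and
    return them in alphabetical order -- the reading order the piece
    appears to use."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words))

print(sorted_unique_words("To be, or not to be: that is the question."))
# → ['be', 'is', 'not', 'or', 'question', 'that', 'the', 'to']
```

Swapping `sorted(set(words))` for a random sample of `words` (duplicates kept) would give the variant I suggest, in which a book's repeated key phrases surface in proportion to their frequency.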
