This is the space where I’ll be posting my weekly, reflective writing about Computational Arts-related readings and discussions. Most posts will consist of congregated passing thoughts, memories, random examples, feelings, and attempts to a personal appropriation of a subject. If anything, the exercise is aimed at improving my knowledge in what is a new field for me. I will also use it as a directory of newly discovered artists and practices. © Copyleft—all rights reversed.
Group preparation and research interests
This post gathers my afterthoughts on Femke Snelting’s essay “A fish can’t judge the water”, which she wrote in May 2006 as a contribution for OknoPublic011, a three-day festival presenting “
Following the recommendation of Theo Papatheodorou, I have recently watched the 2001 documentary film Revolution OS4, by J.T.S. Moore. It has slightly broadened my understanding of the
The considerable amount of work put into free/open-source software, often distributed free of charge or in exchange for suggested donations, suggests a disregard for profit, and thus makes the movement temporarily share features with an anti-capitalist effort. However, the ultimate goal behind such an attitude seems to be an ideal of cooperation, as well as the necessity to challenge the monopoly of dominant corporations. Because it embeds the possibility of absolute economic freedom, the movement could equally be viewed as taking a stand for
Snelting’s own practice admittedly operates within the field of free software and open source.
Facing a situation of growing complexity, the most common reaction is either to feel alienated, or to try and find ways to escape the complexity. Many programmers' backstories begin with the desire to gain control over something that long seemed out of reach; to become both a
“Can we think ourselves outside it?”
In its context, this rhetorical quote refers to the immersive essence of software. What if we were to replace the word “software” with “ideology”?
According to the classical Marxist definition, ideologies are discourses that promote false ideas (or “false consciousness”) in subjects about the political regimes they live in. To be able to criticize ideology (or software), according to this definition, one must disclose the truths concealed from him/her.
“Then, so the theory runs, subjects will become aware of the political shortcomings of their current regimes, and be able and moved to better them.”
Ideology, the way I understand it, comes before (and after) the act of thinking. It is a set of tools (or “ideas”) one is willing to utilize, without questioning, to interpret a given situation. Similarly, software is a tool or set of tools meant to make decision-making easier, or quicker.
Examining the previously mentioned situations that call for alternative propositions, an example comes to mind. A
In this analogy, the closed/locked door evidently compares to proprietary software, whose owners require you to promise, in a legally binding fashion, that you will not modify or share it. Even where the contract is not binding, big tech companies will threaten to sue without legal grounds. An example of such an attitude goes back to 2011, when the hacker community was racing to make the Kinect open source. Microsoft first reacted by threatening to prosecute anyone who would develop for the Kinect on a PC. Then, once it had been done, they unironically pretended they had wanted the Kinect to be accessible and open source all along, and claimed their love for the hacker community.
The open-source software movement aims to remove the door’s lock, thus enabling the public to open the door even without a key. The free software movement has the ambition to remove the entire door. The nuance is important, because anyone could still mount a new lock on an unlocked door, claiming what’s in the room for themselves. A room with no door makes it significantly more difficult for anyone to privatize the room and its contents.
This week’s reading is titled
An algorithm is a set of steps.
One of the oldest algorithms1 was phrased 2000 years ago by Euclid and allows one to find the greatest common divisor of two numbers:
— If you have two distances, AB and CD, and you always take away the smaller from the bigger, you will end up with a distance that measures both of them.
1. Start out with two positive integers m and n.
2. If the value of m is less than the value of n, switch the values of m and n.
3. Find a number r equal to m modulo n.
4. Let m have the same value as n.
5. Let n have the same value as r.
6. If n does not have the value of 0, go to step 2.
7. The wanted value is in m.
Finally, its recursive implementation in C:
int euclid_gcd_recur(int m, int n)
{
    if (n == 0)
        return m;
    else
        return euclid_gcd_recur(n, m % n);
}
When given the same input, an algorithm will always provide the same output. But “input” can be deceiving, because we are little aware of the nature of the data we carry with us. An adult German lady who likes tango googling a set of words from her laptop will not provide the same “input” as a teenager from Argentina typing the exact same words on his computer.
“Simple recursive algorithm”
“Divide and conquer algorithms”
“Greedy algorithms”
“Brute force algorithms”
“Randomized algorithms”
The various categorizations of algorithms inspire mistrust and suspicions of malfeasance.
A turning point was reached when algorithms became a forum topic, after being blamed for their influence on the Brexit vote, or Trump’s election. If I were to ask anyone about the word “algorithm”, I would expect a raised eyebrow and an embarrassed answer about “a calculation that’s too complicated to explain”. Unfortunately, this answer cannot be satisfactory, for these tools are very much present in our daily interactions. They should therefore be transparent and understandable to the broader audience, for they have a significant impact on its “free willingness” to do things. While there certainly is a variety of perspectives on algorithms, because they have become a matter of public safety, the definition that matters most is that of the critical mass. This is somewhat similar to the growing phenomenon of paid-for news2.
“the algorithm was wrong, the algorithm is responsible for…” When they go off track, corporations shelter behind the “unintelligible” nature of the algorithms they leverage. They dissociate themselves from the algorithm, claiming they have no control over it. Nevertheless, the premises have not changed since 1958, when Charles & Ray Eames were trying to convey a sense of the place the machine would occupy in our future lives.
[…] with the computer, as with many tools, the concept and direction must come from the man. The task that is set, and the data that is given, must be man’s decision, and his responsibility.
So has the meaning of this assertion become less true? Doesn’t the public have the means to challenge corporations’ liability? “Technocapitalism” and “algorithm” are scary words, for they are often found in the mouths of people with influence. Should corporations be allowed to use complex algorithms? Should they not prove their harmlessness first, the same way a new medication needs to be approved by the FDA before it can be marketed?
… algorithm as synecdoche
Algorithms are not the
On the opposite side of the moral compass, there is also a general overestimation of algorithms. Just as science and medicine carry the promise of a brighter future3, algorithms are believed to be a panacea. While believers may be misled by those who sell algorithms, this hubris lies in the minds of those who hold it, not in a cold set of instructions.
The people of the village knew it had nothing to do with his strange habits and rituals yet they never spoke aloud of it for fear of provoking the gods…
If algorithms’ concealed biases can have such devastating effects4 (denial of democracy, judicial error, stock market crashes…), and if no one can truly be held responsible for them, using them is similar to provoking the gods, whose heavenly wrath we shall suffer with equanimity. Gillespie’s essay summons numerous analogies such as
Unlike the examples listed above, algorithms sometimes pertain to the theoretical world, and carry all kinds of promises. Some algorithms cannot be performed as of 2018, for the current state of computational hardware is not powerful enough to compute their output.
One of the most famous, unresolved issue in computer science is the
An example of a morally ambiguous case is the use of algorithms in managing terrorists’ ‘kill-list’ by the US government.
Guided by the Obama presidency’s conviction that the War on Terror can be won by ‘out-computing’ its enemies and pre-empting terrorists’ threats using predictive software, a new generation of deadly algorithms is being designed that will both control and manage the ‘kill-list,’ and along with it decisions to strike. 9
Paradoxically, the existence and operation of such methods comes as no surprise to me. Warfare management is usually not a subject for the general public; it makes sense that such an unregulated area is used as a field test for future wider usage. This particular use of algorithms reminds me of the driverless-car dilemma, which served as the basis for an online game by MIT professors Iyad Rahwan, Jean-Francois Bonnefon, and Azim Shariff.10
The calculating nature of algorithms is often invoked to advocate their objectivity. They have even made their debut in the justice system in England11, and there is a recurring concern that they may go as far as replacing a jury or a judge.
“Algorithms have neither hands nor hearts, nor preferences or attitudes. They simply perform calculations and provide objective, neutral and reliable responses to your questions.” 12
Using the definition given at the beginning of this post, a peer jury is nothing but a low-performing algorithm, whose biases are, at least partially, assumed. During the jury selection process, for instance, potential jury members get
“How to simulate a jury, whether we can, and why we should care? Assuming that a trial is a computer, the premise is simple: simulate a group of human beings in a trial, from voir dire to verdict. The challenge is daunting and perfection cannot be expected. What can be hoped for is a tool to bring reason to the inherently passion-filled arena of human conflict. First we will consider why we want to do so, follow with thoughts on whether we can, and conclude by considering how.”13
But, very much like the functioning of society in general, the biases occur long before any so-called “objective” system is even born. It is in its initial parameters that the idea of an objective algorithm becomes heavily debatable. This notion of
In the context of both a “death-deserving-targets-defining” and a “sentencing” algorithm, I merely see an opportunity to evaluate the current state of society’s decision-making processes. An occasion to
Sadly but unsurprisingly, the knowledge of appalling events in South Asia and the Middle East appears to evoke little emotion any more. According to construal level theory16, this may have to do with the distance at which such events take place, which directly correlates with the level of identification I am able to achieve with the victims. The moralistic, standardized Manichean educations we tend to grow up with lead us to believe that there is a line separating good from evil that we shall not cross. I tend to believe that such a binary separation has no reason to exist, and that it is rather part of a spectrum.
As part of the exercise, let’s try to word a concept for an imaginary artwork that would involve an algorithmic process and explore the notions discussed above. First, let’s isolate a few questions that arise from them. Can an algorithm only be understood by its tangible effects? Can its hidden subjective essence be surpassed? Can one outperform humans, even at judging human nature?
I would like to create a device that would expose some of their not immediately recognizable but nonetheless baleful features. This could take the form of an installation where the audience of a play or the visitors of a venue would be used as subjects. A camera could feed data into a face-recognition system (an algorithm) and publish its outcome in real time on a screen: name, origin, credit score, any footprint left online by the unsuspecting subject, who might thus become more aware of it. The simulation could then go on by organising the data collected over a given period into statistics, maybe publishing rankings of visitors, from the most likely to be found guilty to the most likely to be run over by a driverless car facing a dilemma.
After browsing through both JAR (Journal of Artistic Research) and Leonardo (curated by MIT), it became apparent that the first includes articles on a wide variety of art forms and is as inclusive as possible, while the latter focuses on the specific entanglement of art and science/technology. I was pleased to find out that JAR featured a recommendation for my friends’ project OAR (Oxford Artistic and Practice Based Research Platform), which I designed and help maintain to this day. Yet I quickly decided to research further in MIT’s journal, as the topics covered fell more into my current area of interest, or more accurately my current needs for theory.
After a few quick read-throughs of some of the contents, one article caught my attention, mainly because of what sounded like a highly paradoxical title.
On a side note: In my research, I found out that Emil Hrvatin and two fellow Slovene contemporary artists all changed their names in 2007 to “Janez Janša” (which was the name of the then Prime Minister of Slovenia), as part of a mesmerizing performance piece called “NAME–Readymade” (->dedicated website). An earlier version of this website can be found on web.archive.org and reads: “Janez Janša is the name of Slovenia’s economic-liberal, conservative prime minister—and has also been the name of three well-known Slovenian artists since the summer of 2007[…]. All of their works, their private affairs, in a word their whole life has been conducted under this name ever since.” This has to be one of the most interesting conceptual performance pieces I have heard of in a long time. But as I would soon discover, this performance can be interpreted as a
The terminology
The article focuses on one of the group’s main project:
Postgravity art is defined as all art created in zero gravity conditions. In these new living conditions it will create systems that we are not aware of.
Being personally interested in looking at the field of experimental physics as a source of ideas for installations, I was eager to find out that it existed as a whole discipline. However, the definition continued:
Postgravity art is not a stylistic formation and does not intend to become that either.
The basic outline of the performance consists of a play performed every ten years (1995; [~]2015; 2025; 2035; 2045) with the same set of performers, for a total of five times. It thus appeared to be a speculative, anticipatory exercise rather than a presently existing discipline that challenges nature and its forces. That is, until you reach the final proposition of the piece: if one of the actors dies before the end of the program (which has already occurred: “Before the second replay, a process to replace the first deceased actress’s body by technological substitution was already set in motion”), he or she has to be replaced by a “remote-controlled symbol, their spoken text represented by sounds (melody for women, rhythm for men)”3. On the last iteration of the play, on April 20th, 2045, all of the actors are assumed to be dead. A cosmonaut will then use a spacecraft to convey the 14 “umbots” (the androids) into geostationary orbit, from where they will transmit to Earth the signals representing the roles they played. “This action is intended to establish the rule of non-corporeal art”, says the author. This is a key aspect of the definition of Postgravity art, which states:
Space is also an especially suitable environment for artificial life, as it does not provide natural conditions for terrestrial forms of life.
Once the sham bodies have been put into orbit, it will be possible to interact with them and obtain information from them remotely, and they should have the ability to “develop”.
This base would be a technological form of life under zero-gravity conditions and an intelligent artificial being or system […] it should acquire awareness and intelligence, which have the potential to develop into something new, different from human consciousness.”
The title of the piece comes from scientist Herman Potočnik (
Locally regarded as a fantasist in his day, Potočnik is now considered one of the founders of astronautics. It is unknown at this stage whether the most radical bit of the proposition (the placing in orbit of the AI versions of the actors) will see the light of day. My initial feeling facing “Postgravity” and the surrealist ambitions of the
During this short 50-minute exchange with the other students, I was very pleased and surprised by the effectiveness of the process. Suspicious by nature, I did not have high hopes as to how the exercise might set my thinking in motion. Because I had had very little time to prepare, I wasn’t sure which centre of interest I would even be able to discuss with the others. But it turned out that the discussion happened naturally. I initially talked about my interest in linguistics (
Batool and Keita also mentioned existing and emerging projects for programming languages using Arabic/Japanese characters. I was also very interested to hear about Lola’s interest in VR’s promises regarding potential mental-health cures, through simulating disembodiment and the possible simulation of multiple personalities.
I then considered looking into the geopolitics involved in the extraction of the rare metals necessary for the construction of computers. Because younger and older people alike will use more and more computers, our reliance on technology (and thus on rare metals) will increase accordingly. But this made me realize that, beyond the scope of computer making, I have always wanted to know more about the heavy and numerous infrastructures behind the World Wide Web and digital communication at large. I thus oriented my interest towards a “geography of the digital”, as in “the spatial study of the infrastructures” used in our digital paradigm. From the thousands of kilometers of wires at the bottom of the oceans to the secured facilities from Antarctica to the North Pole, I really would like to be more aware of who owns and operates this stuff. I found a nice section on this subject in the book
But then I realised that I really wanted this project to be practice-based, and that it would be complicated for me to come up with practical ideas relating to this subject. So I finally opted for the subjects I am interested in in my artistic practice: the observation of physical phenomena, a fascination for experimental science, the use of primitive/natural elements (water, fire, earth, rock, plants…) in conjunction with universal/fundamental physical notions (gravity, acceleration, Planck mass, the speed of light, magnetism, the birth of mass and time, etc.).
The reading under scrutiny below is an excerpt from Lucy Suchman’s book
It has been a few weeks since I actually read the paper, and since then many of the concepts it approaches have proved to be absolutely central in computational art theory. The paper is particularly abundant in references and assumes existing knowledge of Donna Haraway’s theories on
At some point, she addresses the relationships that emerge between the creator and the machine (HCI = Human-Computer Interaction). A given AI/robot is nothing but a subset of the human mind, even if its existence reaches the current limits of sociomateriality. Very much like the tale of Pygmalion, who fell in love with his creation and prayed to Aphrodite to give it life. Or, as more recently
I have the intuition that most questions around AI are essentially technical. A real philosophical debate on the conditions under which it is appropriate to create artificial intelligence would require the field to have figur(-at)ed out actual intelligence. Progress in neuroscience indicates that it is through the approximation of large numbers of neuronal connections that intelligence emerges. Technically, there is a big gap before any physical, manufactured or even digital (which relies strongly on physical artefacts) simulation could come even close to simple organisms’ neuronal capacities. The big, ancient, human questions of the seat of the mind/soul, the functioning of consciousness and subconsciousness and so on are gaining insights from the development of AI. Trying to grasp artificial versions of these concepts is like putting the cart before the horse. After all, language itself is a human tool that has existed for longer than recorded history. It has been evolving empirically, and its current limitations are still little taken into account.
But this comparison between human and machine is somewhat pointless. AI does not need to possess each and every human feature to outperform humans intellectually and on many other levels. We initially know our world through our perceptions, which in turn depend on the time allocated to performing such perceptions of the world. A machine has a very different relation to time. A setup where a machine would constantly monitor a human, performing calculations and preprogrammed assumptions based on this data, would eventually know the subject better than the subject itself. While the most traditional ways we engage with the world are interpretable, by looking at patterns or making simple deductions, even by humans, some of the most imperceptible ways we behave may only be accounted for by a machine. Looking simultaneously at the blinking of the eyes, the heart rate and the geo-localisation of a subject (potentially, an unlimited number of sources) and noticing correlations is something only a machine can perform, unrestricted by time and not assessing the value and interest of the potential outcomes. The promises of unsupervised deep machine learning (absence of biases, less labour, the power of computation set free to wander/imagine…) make a case against the human responsibilities involved. The hubris, combined with the race for profits, gives sufficient reason for concern. In a situation where politics is reforming at a human pace while industry is advancing at a computational speed, the gap between responsibility/power, accountability and traceability might only grow bigger.
Our research is a case study of a touch-based interface developed by the start-up Ultrahaptics. Their technology creates three-dimensional ultrasound fields that produce a sensation of touch on the palm when the hand passes through them. The ultrasound is modulated by sophisticated algorithms and produced by an array of transducers.
The video below shows a record of our experience at Ultrahaptics during a field trip to Bristol. Our experiments led us to wonder what ‘it’ was we were touching; more than a digital representation, yet far from a solid form. We figured that looking into the neuroscience and cognitive psychology behind it would help us get a better understanding of how touch works.
Ultrahaptics hardware uses acoustic radiation force, a force generated when the modulated ultrasound is reflected off the skin. When focused onto the surface of the skin, it induces a shear wave that is sensed by the neurons in our hand. The displacement caused by the wave triggers mechanoreceptors within the skin, generating a haptic sensation. On an experiential level, it creates a vibrating membrane describing the shape. Its material nature is not solid and left us with the feeling that it lies in another state of existence. Its applications are mostly imagined and depicted in the fields of VR, AR and general-purpose touch interfaces.
Ultrahaptics started in the academic world as a PhD research project. Now a company, it is funded by four large investment companies. While these companies provide the means to develop, they may also steer its potential outcomes towards certain money-driven ‘figurations’. It was also interesting to view some of the company’s publicity through Lucy Suchman’s idea of “cultural imaginaries”. [fig. 1]
Cognitivism sees the human body as partly similar to a computer, in the way that it computes complex multiple signals into intelligible information. Studies in multisensory integration show that the modalities of the sense of touch interact at many levels. The neurons in our fingertips turn stimuli into information, either through the sensory nervous system or the cortical system. This gives us a sense of the complexity with which the sense of touch operates, and of the impossible task of simulating it with the current technology at hand. We also begin to get a representation of what the physical properties of Ultrahaptics’ “half-tangible” simulations approximate, as they only activate a small segment of the full set of modalities.
These considerations led us to imagine where the future of touch and materiality may lie. In the animation below we speculate on the possibilities of a new temporal form. It shifts instantaneously between existing materials, and even envisages materials outside of our comprehension. We believe this could totally reinvent how we communicate with our devices and with each other, allowing new languages of touch to evolve and letting us translate what we cannot put into words into touch.
A paper by Donna Haraway published in e-flux, Journal #75, September 2016, available on e-flux.com. Donna Haraway is a professor in the History of Consciousness Department and Feminist Studies Department at the University of California, Santa Cruz. She is a leading figure in feminist science and technology studies (feminist STS), and authored the landmark essays “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century” (1985) and “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective” (1988).
As a way to expand our set of research tools, we were introduced by Helen Pritchard to the Material Semiotics approach. It is a methodological approach that aims to look at the interconnectivity of the studied element, its network and relationships. In 1996, when a computer program called Deep Blue beat Garry Kasparov in a game of chess, it was a big step forward in the history of Human-Computer Interactions. It was later explained by members of the team of programmers that the new paradigm they used in this winning version of the program wasn’t to blindly assess every possible (“legal”) move at hand, but rather to first dismiss all the moves that were unlikely to be played. Nowadays, this would be achieved by looking at large datasets of archived games, to quickly determine winning paths without the need to calculate every rhizome created. Because the amount of “threads” uncovered by a Material Semiotics methodology could go out of control rather quickly, it is best to try to identify the most relevant ones and then pull them as far as possible. Interestingly, the chessboard is commonly used to visualize the scope of exponential growth, notably in the
This approach most likely draws on the Rhizomatic learning theories devised by Gilles Deleuze and Felix Guattari in
In the article, Haraway proposes the existence of an “elsewhere” and “elsewhen” called the
She summarizes the tales of the Furies, the Harpies and the Gorgons, all ancient Greek deities of no particular gender (but nowadays figured as female) who avenge crimes against the natural order. Despite their vital role in the balance of natural forces (or is it because of it?), heroes are usually dispatched to destroy them and affirm mankind’s ownership of the earth. Furthermore, they usually achieve this thanks to divine help from Athena or another god, who as such participates in the slow demise mankind brings upon itself. In the Greek tradition, gods are understood as reflections of humans, only with greater powers and reach. They are portrayed as vain and choleric. In the Christian tradition, the Holy Spirit rather sounds like an omniscient force who knows what’s best for humans against their own will. Whatever tradition humankind pertains to as of now, it seems that the idea of a collective responsibility has not yet made its way.
One of the points I think Haraway tries to make concerns the deterministic effect of categorizing something as vast as the entire world and time we live in. Could naming said age with ambitions (Egalitarian-cene? Undiscriminating-cene? Feminist-cene?) instead of expectations have an effect on the outcome?
This work is a demonstration of the real-life implications of ideas such as “digital witnessing”. The video was self-initiated and ended up being used in a human rights court to demonstrate NATO’s abuses and the political mayhem the Mediterranean has become. The activist role endorsed by the artists is reminiscent of Garnet Hertz’s idea of disobedient apparatuses. Not that it is the only way to create and to draw useful links between practices, but I have to admit it is the most convincing I’ve seen.
Like much of the other research carried out by the Forensic Architecture group, the topic is grave and important, and uses visual communication (pictures, graphs, films, sounds) in unprecedented ways to convey the message. Forensic Architecture develop innovative methods for visualising evidence and building human rights abuse cases. Their work can be seen in courts of law and in exhibitions of art and architecture. For reasons that might be strategic or simply contingent, FA have made a name for themselves in the contemporary art world. Maybe this gives them a reach that the “conventional” academic research world cannot. Or maybe it gives them access to more funding sources. Maybe they intentionally address the art world, which is quick to become convoluted when it comes to politics or taking action. In fact, the explosion of prices in the contemporary art market can be interpreted as a way to gag the contestation it once contained. In that regard, I see FA as a response. They have developed a unique way of imposing their themes in some circles and have gained international attention, working with influential NGOs but also for the New York Times.
On the 19th of October, Eyal Weizman gave a lecture on one of their latest research cases, as part of the Conspiracy symposium organised by the Centre for Investigative Journalism (CIJ) at Goldsmiths University. The case relates to the ongoing investigation into an allegedly race-motivated killing committed by Israeli policemen in the occupied territories. Consistently with their general approach, they used the grant nomination’s money to produce the video. I wrote down the following sentence he said when asked a general question about FA’s practice:
The video produced is a reconstitution of the events that led to the killing of Yakub Musa Abu al-Qi’an, a Bedouin resident of the village of Umm al-Hiran, and Erez Levi, an Israeli policeman. The police described the events as a 'terror attack' and claimed that the policemen had to use lethal force in defense. The many eye-witnesses (and their digital witnessing gear) quickly contradicted this version. The police reported that al-Qi’an drove towards the policemen, who then opened fire. The video, reconstituted from various sound recordings, video footage and 3D reconstitutions, demonstrates that gunshots were fired before al-Qi’an panicked and accelerated, most likely in an attempt to escape the shots. His behaviour showed no threat and, at the very least, the official version is not consistent with the raw recordings of the events.
The panel was followed by a presentation by illustrator and activist Molly Crabapple, who said the following about commissioned art practice:
“Successful artists are always working with the worst kind of people. That’s the way it is, at least in America.”
We live in a time when we question the problematics of what’s innate and what’s acquired. Both come with their share of problems: while “innate” features (someone’s milieu, their beauty, their family’s wealth, their health…) evoke causes of inequality, “acquired” features might recall a neoliberal discourse based on merit and individual responsibility. Naming is commonly referred to as an important part of constructing one’s identity. While names are arguably a tool for constructing a very given and limited range of identities, they certainly are a government’s best friend for traceability and surveillance, along with birth dates. In uncertain times, a little prescience dictates not to be overconfident in any political system: any rapid shift could see any advocate become an enemy of the state. Could biometric data challenge the need for such rudimentary tools?
The name we carry from only a few days after we are born can be a burden more often than it is helpful. There is something inherently submissive in the way we are used by our names: they induce traceability. Depending on the harmony (or lack thereof) between the rest of your family’s ideas or past and your life’s dreams, a family name can be powerfully harmful. As if the physical characteristics we unintentionally carry with us were not enough, a person’s name is about the second thing we are likely to learn about them.
The pilot episode of the animated show “Futurama” depicts a future where everyone carries a chip with their assigned, unequivocal occupation. In a sense, this idea belongs more to the past than to sci-fi, since this was once the function of family names in the Western world (Archer, Baker, Brewer, Butcher, Carter, Clark, Cooper, Cook, Dyer, Farmer, Faulkner). The function of family names has evolved throughout the centuries, and this is no longer true. However, just because names lost their initial function does not mean they lost all function. It would be hard to detail the process through which family names shifted from this literal use to the one we observe nowadays. Our names may reflect our continent of origin, the language we speak, our religious affiliation, …
A light-hearted psychological hypothesis called nominative determinism looks at how people supposedly tend to choose occupations that fit their surnames (Dr. Freud…). A Native American naming tradition allows the young to choose their own name when they reach a certain age; an article in Psychology Today argues it “enriches your sense of self”. It is overly common for Hollywood and adult film actors to use stage names. In a previous post, I mentioned the
From these various ideas and examples, I figured my proposal should try to address the deterministic nature of surnames, as well as the overly convenient tool for traceability they represent. But even an idea as simple as “choosing your own name” does not seem fitting. There are many reasons and occasions for which one might not wish to be him-her-them-self, or might simply prefer not having to be anything at all. Getting rid of given names (and thus traceability) altogether seems to pose as many problems as it solves. Similarly, imitating the Native American tradition of choosing your own name assumes that you possess the adequate judgement to make such a call. Because the problem of discrimination precisely attacks who you are, I would not trust myself to choose a name that hacks all possible situations. A possible recipe for a disobedient device with regard to family names would be for everyone to possess an array of family names (between 7 and 11) from which to choose, depending on the context. It would be important to have a set of names that could anticipate many situations. Maybe the choosing could be assisted by an algorithm, which would make sure either to choose existing names or, on the contrary, to guarantee that everyone gets at least one totally unique family name (a name-choosing VPN of some sort).
For the time being, short of a convincing algorithm to do the job, here’s an attempt at creating an array of possible names I shall henceforth be known as: Julien ROSA LUXEMBURG BANNON POLGAR CONFUCIUS TAUBIRA BATISTUTA JUNG DOE ROTSCHILD SCHOPENHAUER VON NEUMANN BAADER MEINHOF PINOCHET VIDELA THATCHER ISLAM BOLKIAH DUTERTE MALBOLGE FLOWER.
The
The project is very interesting in that it gathers works that are often quick iterations of radically simple ideas. The shape, looks or presentation of most projects do not matter as much as the message they carry. One memorable project consists of smuggling morning-after pills and birth-control devices between Germany (where they are widely available) and Poland (where they are banned, restricted, or hard to get). Two other artists (Marie Louise Juul Søndergaard and Lone Koefoed Hansen) created PeriodShare, a disruptive project for a connected underwear that monitors one’s menstrual activity. Using the rhetoric of the neoliberal society they claim aims to control women’s bodies, they launched a Kickstarter campaign to fund the project, turning the farce into a global performance. The Transparency Grenade by Julian Oliver is a device to drop in places of power to perform counter-surveillance and send the data over bespoke servers. Other projects address themes such as cyberpunk, surveillance, transhumanism and gender politics. I discovered the work of “Sexy Cyborg” Naomi Wu, whose beef with the magazine Vice really caught my attention. In a nutshell: Wu advocates for girls in coding in Shenzhen, often by staging her own body and addressing gender stereotypes. Inevitably, Vice decided to do an article on her. Despite an agreement they allegedly had, and despite the tense political situation in mainland China, Vice published personal details about Wu and her peers. Though everyone who later read the piece in the West “did not see the issue” with what had been published, she convincingly explains that what might sound innocent or banal to our ears could sound like a provocation worthy of grave repercussions in her home country. I took a trip to Shenzhen in 2012 as part of my undergraduate diploma project on architectural knock-offs.
As such, some of my research conclusions were closely related to the many differences in Eastern sensibility and relationship to communication. Whatever the intention, Vice’s decision to publish the piece with no regard for Wu’s protest sounds vindictive and out of place.
I consider my home country to suffer from political apathy, as well as artistic slothfulness and ideological onanism. Most art schools in Switzerland tend to focus on a 1990s trend for abstract geometric painting. Though it may once have been the outcome of a long process, I suspect it has become an excuse for decorative practices, for artists hoping to do passive figuration in a commercial art fair. The Disobedient Electronics proposition is quite refreshing, and much like the Forensic Architecture research and other material-semiotics and feminist STS practices, it gives me hope for an art scene with a purpose.
This Processing program allows the user to generate random pages of text based on one or several source texts (Bart Simpson’s quotes, the Bible, Marcel Proust, or Donald Knuth’s writings) and export them with handwritten-like lettershapes, with customizable settings (size, spacing, shakiness…). The idea is to create simple, machine-made artefacts that appear to be entirely handmade, thus passing a kind of Turing test.
Markov chain models have been used in a variety of ways over the past decades. They are a simple yet powerful way to make predictions based on probability, taking into account not the entirety of past events but only the last step taken. They used to be a lot more fashionable before the era of machine learning, which made me want to understand them before moving on to ML in term 2. In simple terms: a Markov chain will “read” (loop over) an input you give it, such as a text. For each word (or given n-gram) it reads, it stores the probabilities of what the next word can be, based on the input data. To generate a new text, it then picks each next word according to the statistics it stored.
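To make the idea concrete, here is a minimal word-level sketch in plain Java; this is not the RiTa API and not my actual Processing code, and all names are illustrative:

```java
import java.util.*;

class MarkovText {
    // order-1 word model: each word maps to the list of words observed after it
    private final Map<String, List<String>> transitions = new HashMap<>();
    private final Random rng = new Random();

    // "read" the input and store every word's possible successors;
    // repeated successors naturally encode the probabilities
    void train(String text) {
        String[] words = text.trim().split("\\s+");
        for (int i = 0; i < words.length - 1; i++) {
            transitions.computeIfAbsent(words[i], k -> new ArrayList<>())
                       .add(words[i + 1]);
        }
    }

    // generate by repeatedly sampling a successor of the current word
    String generate(String seed, int length) {
        StringBuilder out = new StringBuilder(seed);
        String current = seed;
        for (int i = 1; i < length; i++) {
            List<String> next = transitions.get(current);
            if (next == null || next.isEmpty()) break; // dead end: stop early
            current = next.get(rng.nextInt(next.size()));
            out.append(' ').append(current);
        }
        return out.toString();
    }
}
```

An order-1 model like this only remembers the one previous word; RiTa and Shiffman’s examples generalize the same loop to configurable n-grams.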
Knowing little about Markov chains, I assumed I would need to do some research before being able to come up with a sensible idea. I wasn’t exactly sure what I’d use them for, although I already had an interest in text-generator programs. During a one-to-one session in week #6, Lior Ben-Gai suggested that I look into Markov chains. I was attempting to create a haiku generator at the time, but with very simple rules and a very simple dataset to pick words from.
After a little research, I found some good instructions for a Java-based Markov program at https://rosettacode.org/wiki/Execute_a_Markov_algorithm#Java. However, the OOP programming was a little ahead of my technical skills. I attempted to understand and convert the provided sample code into something I could make sense of, but with limited success. I then found additional resources on Daniel Shiffman’s YouTube channel. He made three videos on Markov chains and how to program them. I started porting his p5.js/JavaScript-based program to Processing/Java. The early results were promising, but at some point I had trouble with making objects and classes. I read what I could find on objects and classes in the “Learning Processing” book, but the basic understanding I managed to get after a couple of days did not allow me to move ahead with my code. I sought help on the processing.org forum, while investigating other resources.
This is when I came across the RiTa library, a powerful resource for computational literature. It made me realize that the field was already very well covered and that finding my own, modest voice would be challenging. I watched all the tutorials on the RiTa library, in order to come up with ideas on how to make the best use of Markov chains.
That’s when I remembered another assignment I had had a lot of fun doing a couple of months before: a Manfred Mohr-inspired piece, for which I made a basic, abstract handwriting program based on noise and randomness (a random walk with constraints). At the time, I wanted to create an actual, readable handwriting tool, but my coding skills didn’t allow me to do so. By December, however, this sounded more like a technically feasible possibility. Pairing it with a Markov chain text generator even started to sound like the beginning of a concept. I could try to build some sort of forged handmade pages (novels, love letters, manuscripts…). It would have to be thought of as a sort of simple Turing test: I’d have to make the forgeries look such that no one could tell they had been produced by a program.
Thanks to the RiTa library, the technical aspects of the text generator were pretty much figured out. I could then focus on building an interesting handwritten typeface. Every recurring letter would have to look different from its previous iteration. I looked into my archive of hand-drawn letters, from 17th-century manuscripts to Jean Dubuffet’s poetry, but figured it would make more sense to try to mimic my own handwriting.
I drew each character (lowercase, uppercase, plus some punctuation) with simple lines, using Bézier vertices. I added parametric noise to the vertices, in order to create a semi-controlled deformation every time a new letter is drawn. However, I had to make sure to add the same noise values to each point and its two Bézier handles, in order for the line to feel smooth and continuous.
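That constraint can be sketched outside Processing as follows; the Vertex type and field names are mine, for illustration, not the sketch’s actual code. The point is that one offset per vertex, shared by the anchor and both handles, shifts the point without changing the handles’ positions relative to it, so the curve deforms without kinking:

```java
class JitterVertex {
    // a Bézier vertex: an anchor point with an incoming and outgoing handle
    static class Vertex {
        double ax, ay;   // anchor
        double h1x, h1y; // incoming handle
        double h2x, h2y; // outgoing handle
        Vertex(double ax, double ay, double h1x, double h1y, double h2x, double h2y) {
            this.ax = ax; this.ay = ay;
            this.h1x = h1x; this.h1y = h1y;
            this.h2x = h2x; this.h2y = h2y;
        }
    }

    // apply the SAME noise offset (dx, dy) to the anchor and both handles,
    // preserving tangent directions so the stroke stays smooth at the joins
    static Vertex jitter(Vertex v, double dx, double dy) {
        return new Vertex(v.ax + dx, v.ay + dy,
                          v.h1x + dx, v.h1y + dy,
                          v.h2x + dx, v.h2y + dy);
    }
}
```

In the Processing sketch the offsets would come from `noise()` rather than being passed in, but the invariant is the same.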
I then wrote the function that looks at the position (with the .charAt() method) of each character in the Markov-generated string and turns it into X and Y coordinates accordingly. Every time it hits the right margin I defined, a line counter is triggered and the Y position increases. I also stored a maximum number of characters per line and per page, which depended on the X and Y size, letterspace and linespace values. I wanted to be able to play with these settings based on a live visual feed, so I created controls for:
— The width and height of the letters, which control the size of the unit variable I created when drawing my letters;
— The space between letters, which is the increment added to the X position, and the linespace, which is the increment added to the Y position;
— And, maybe the most interesting one, the amount of noise added to the vertices, which makes the writing look more or less shaky.
Finally, I added ‘refresh’ and ‘export’ buttons. After adding those controls I was quite happy with the simplicity of the tool, and with the rough aspect of the outputs it allowed me to create.
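The layout logic described above (advance X per character, wrap at the right margin, step Y per line) can be sketched in plain Java like this; the field names and default values are illustrative, not the sketch’s actual ones:

```java
class PageLayout {
    // layout settings (illustrative values, in pixels)
    double letterSpace = 12, lineSpace = 24;
    double leftMargin = 40, topMargin = 40, rightMargin = 400;

    // turn each character index into a pen position,
    // wrapping to a new line whenever the right margin is hit
    double[][] positions(String text) {
        double[][] pos = new double[text.length()][2];
        double x = leftMargin, y = topMargin;
        for (int i = 0; i < text.length(); i++) {
            if (x + letterSpace > rightMargin) { // margin hit: carriage return
                x = leftMargin;
                y += lineSpace;
            }
            pos[i][0] = x;
            pos[i][1] = y;
            x += letterSpace; // advance to the next letter slot
        }
        return pos;
    }
}
```

The real sketch draws each glyph at these positions instead of collecting them, but the wrapping arithmetic is the same.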
When designing for print, one needs to take the printing technique into account alongside the design process (not after it). In my case, choosing the most appropriate printing method meant finding the one that would reproduce handwriting with the most accuracy. I used a CNC mill, onto which I taped simple pens and pencils. It fit very well with the idea of making forgery/Turing-test pages, for which you could hardly tell whether they were made by a human or a computer. A pen-traced line has a material quality that inkjet printers simply don’t allow.
I noted that I could make my pages look very ‘suspicious’ by drawing really small characters: the overall steadiness makes it seem impossible that a human could do this by hand, despite the shaky lettershapes.
I would really like to develop this tool further and explore this machine/human threshold: maybe by creating an entire manuscript without a single blunder, or by writing the same one-page text over and over again with thousands of different lettershapes. It could also be interesting to use it to add hidden, encrypted content, or visual patterns in the sentences.
Acknowledgements
* Rosetta Code
* RiTa Processing library
* Daniel Shiffman’s tutorials on Markov chains (p5.js)
* Daniel Shiffman’s thesis generator using Markov chains
* Allison Parrish’s Generative Course Descriptions
* Chris Harrison’s Web Trigrams
* Learning Processing, Daniel Shiffman, chapter 8: Objects
* A list of every Bart Simpson blackboard quote
* The King James Bible on Project Gutenberg
* Simplified theory on Markov chains
* More theory on Markov chains
On the 1st of February, we were lucky to be given a lecture by Dr Sarah Wiseman on the paper she co-authored with Dr Sandy J.J. Gould, “Repurposing Emoji for Personalized Communication: Why 🍕 means ‘I love you’”.
One of the obvious effects of communicating with pictures is the removal of traditional linguistic borders: people who do not speak the same language can somehow communicate on an intimate level through emojis. But the strongest aspect of the research is the revelation that people use emojis to express what words can’t. Wiseman interviewed 134 people about their personal use of emojis. It reveals that most couples have their own, sometimes unique way of saying ‘I love you’ to each other; often, they simply use the pizza slice emoji. One couple interviewed explained that they used the pineapple emoji when they needed to say ‘I am not joking, this is true, I am being serious’. They agreed on this because of the ‘deceitful nature’ of one of the partners. Weirdly enough, media attempts to cover this story seem doomed to fail: as soon as you start explaining an emoji’s meaning outside its context, it doesn’t work anymore.
There also seems to be an interesting effect of comforting anonymity. Under the constant sight of the Panopticon, a case can be made for wishing to disappear. Using emojis with secretly attributed values can partially achieve this. Criminal endeavours were probably the first to realize the benefit that could be derived from it. It would be complicated for a court to use emoji-based communication as evidence, when the purpose they serve is to mean something different for everyone but the sender and the receiver.
In 2016, a huge customer backlash struck Apple after they changed the design of the peach emoji. Previously heart/buttocks-shaped, the new version suddenly lacked suggestion and looked like a boring… peach. Apple had knowingly changed the design after research had shown that only 7% of users used this emoji to refer to the fruit. In doing so, they revealed the importance of the cultural reappropriation of emojis as a unique mode of repurposed communication, and failed to nurture the phenomenon. Apple eventually stepped back and reintroduced a more suggestive peach emoji. Things get political quite often too, as emojis that represent human bodies or faces cannot but fail to represent the variety of users’ bodies. Short of an abstraction strategy that could make everybody content, the emoji set expands every week to be more inclusive. The ‘family’ emoji now features no fewer than 25 variants.
👪 👨👩👧 👨👩👧👦 👨👩👦👦 👨👩👧👧 👩👩👦 👩👩👧 👩👩👧👦 👩👩👦👦 👩👩👧👧 👨👨👦 👨👨👧 👨👨👧👦 👨👨👦👦 👨👨👧👧 👩👦 👩👧 👩👧👦 👩👦👦 👩👧👧 👨👦 👨👧 👨👧👦 👨👦👦 👨👧👧
Could the order in which discriminated bodies become part of the set reveal something about ‘our’ preoccupations as a society? People of colour, redheads, short people: is there an underlying hierarchy of minorities, and does it relate to market shares and target customers? Generally, the fast pace at which emojis evolve makes them a good witness of sociological tendencies in a way that language cannot be. Surprisingly, adding new emojis is the competence of the Unicode Consortium. This shows the extent of the misconception of emojis as “letters” rather than “words” (which would be added to the dictionary by yet another gang of white elders). Given the sensitive and cultural nature of the task, this sounds as relevant as the IRS’s competence to distinguish a religion from a cult (as revealed by Alex Gibney/HBO’s documentary about Scientology, “Going Clear”).
The necessity to talk about penises, vulvas and butts with fruits reveals something else of interest. The repurposing of the 'eggplant' emoji as 'penis' and of the 'peach' emoji as pubis/buttocks shows that repurposing is inherently related to most users' need to text about sex. And yet, though the sheer volume of personal communication amply demonstrates the need for 'sexualised' emojis, puritanism seems to survive the ages and cannot be overcome. In an age that we perceive as progressive, open, forward-thinking and so on, techno-capitalism still uses an openly hidden language to talk about bodies, parts and intimacy. The reality of digital communication (especially with the accompanying anonymity, or the ease of hiding behind the technology) is that it is used for intimate, personal communication just as much as for professional communication. Yet the moral standard somehow makes it impossible to include an actual penis/vulva emoji: this would require a big tech company to officially acknowledge, encourage and bless a shameful use of their product by users. Obviously, the “peach emoji gate” shows their awareness of the topic, as well as everyone’s silent agreement not to raise the issue of censorship in technologies.
Other researchers are actively investigating the relational value of emojis:
— Executing Practices
— Emoji Relational Value
“Do you remember when emoji characters were always yellow?”
Femke Snelting, Modifying the Universal (conference talk).
The practice-based research project I’d like to carry out is about opinion-making and self-perception. The past 4–5 years have been critical for the construction of my current thoughts. I became interested in gender theory, patriarchy, social privilege, and quantum theory. As my interests grew, it became obvious that these questions were, in parallel, becoming more and more prominent for my peers. The fact that it seemed to be a wide phenomenon (at least in my circles) made me wonder about the modalities that fed these growing interests. In 2017, the #metoo movement accelerated what was an ongoing reconsideration of patriarchal values. Although I defined myself as a feminist before the scandal and the extensive media coverage it produced, the effect those despairing stories had on me is undeniable. The magnitude of the phenomenon exceeded the grimmest predictions, and it feels like it made me “more feminist,” so to speak.
A designation like ‘feminist’ tends to suggest a clear classification: you either ‘belong’ or ‘subscribe’ to the set of values it implies, or you don’t. Heisenberg’s uncertainty principle has stirred many minds for almost a century. In epistemology and the social sciences, it has led to accepting doubt as a valid variable to work with, and to recognizing that many categories can be better understood if placed on a spectrum rather than defined in binary terms. If I was already (perceiving myself as) a feminist before, but also felt a lot “more feminist” after, what does that say about those convictions of mine that I took for granted? This left me wondering: to what extent am I only convincing myself of my good, anti-sexist, anti-patriarchy, anti-homophobia, anti-racist opinions? Logically, these values make sense to me, and I feel comfortable carrying them. They match “who I want to be.” At some point, I must have chosen to become aligned with these values. When doing so, am I only trying to fit an existing model to which I aspire? If so, where does this model come from, and how is it shaped, by what agents? Why do so many of us perceive ourselves as inclusive, progressive, and liberal, while perceiving the rest of the world as conservative, racist, and sexist?
During a debate or dispute, my reasoning always tells me that there is no ‘convincing’ the other party without deconstructing his or her beliefs.** (** The least deterministic way to form an opinion demands to “dive into the abyss”; a means to an end; the price at which true empathy comes) Out of intellectual honesty, I must go through the same steps in order to be able to tell ‘how,’ ‘why’ and ‘when’ I made an opinion my own. Whenever I assert opinions, there is always an accompanying discomfort that comes from their incompleteness (namely, doubt). Additionally, I have somehow lost faith in debating, for I have been in too many rooms full of people who already agree with each other, for too long. Incidentally, I think that investigating the structure (architecture?) of opinion-making (and self-perception) might shed some light on why that is.
Do I only become aware of these problematics once they are imposed on me? If choices are being made for me to an extent, am I really making my own choices? Am I dedicated to them, or do I make these choices out of comfort, in order to belong to the group I currently value most? The underlying philosophical question might be that of determinism: are all events and moral choices determined by previously existing causes? Consequently, is this the modality in which structural functionalism happens, its agents ‘answering’ a need for cohesion, subconsciously promoting solidarity and stability in the group? In turn, self-perception theory (Bem, 1972) tells us: “If you want someone to believe or feel something about themselves, first get them to do it.” Building upon this: who are the invisible invaders that make me do things, before they become “my own beliefs?”
With machine learning, there is an unprecedented possibility for fast iterations and the analysis of large datasets. The resulting statistics can make accurate predictions; the only flaw is the quality or validity of the data. Such predictions might become reminiscent of Plato’s cave: they can make us more aware of the reality outside human experience, and our sensorial experiences thus become less relevant (in the context of observing our structural behaviour). Our perception of the phenomena will always feel accurate and correlate to the real world; it is the definition of the “real world” that moves.
If these observations of the real world become embodied, and our experience of them becomes entangled, will we internalize those observations and make them ‘ours’? Is the integration of machine-learning predictions into our bodies and minds desirable? If we were able to come up with ‘cold’ statistics and ‘hard’ numbers when looking at sexism and racism, would we stand where we think we do on the spectrum? Would this contradict the sentiment of sincerity that usually goes along with beliefs? Can this gap be amplified or exemplified? Can its origin and its role be uncovered?
I would like to investigate the distance between my self-perception and some more “objective” data (one attempt at producing objective data could be to observe my actions, for example). I would like to illustrate this distance in an essay that would mix theory and personal experience. What are the systemic agents behind “individual” decision-making (the media, the entertainment industry…) in this process? The rationale behind this questioning is to improve my understanding of my behaviour (and, hypothetically, by extension, the behaviour of my peers). The hypothesis is as follows: my mind will bend to fit an ideology, as a result of various phenomena. How does this play with the notion of sincerity? Is this binary opposition (sincerity vs. superficiality) an invention to allow our simple human minds to process complex operations? Through which channels do or did the invisible invaders reach me? How is it that the picture I have of myself tends to fit an average movement?
Obviously, the subject is very vast. I would like to narrow the scope by making it a case study of myself (the most practical and at-hand subject). Given the “self-perception” and biased nature of the problematic, I need to find a way to leverage the impossibility of objectivity as an asset. I’d like to take an investigative attitude towards chosen significant events or influences in my life, in order to try to assess their influence on my perception of myself. I would like to look at this problematic through the scopes of comparative ethics, quantum cognition, interactionism, causality/determinism, phenomenology, and/or computational sociology. I would like to look into the existing theories and extract a poetic essay summing up the problematic in the form of a monologue, which I would then “perform” using an avatar. I’d place this speech in the mouth of one (or several) relevant characters: a mix of artistic/experimental writing and video, a self-referenced monologue (accounting for the dualism of what I observe and what I observe it with).
This week’s lecture by Helen Pritchard focused on “Sensing Practices,” or the relation between environmental concerns and their material attachments. It seems to reflect on how the representation of an object differs from the object itself. This is otherwise theorized in the map–territory relation as enunciated by Alfred Korzybski: “The map is not the territory” and “The word is not the thing.” As we unrolled the concepts, it reminded me of some radical thoughts I had on painting as a medium back in 2010. I was then a fine art undergraduate student and, for the first time, the choice of medium was a problem I was trying to overcome by observing what was available, admittedly to then choose outside the list I’d come up with and find the “unavailable medium” in art. Indeed, it then seemed that picking up a brush and starting to paint was the ultimate commonplace, and showed a total lack of critical reflection. After centuries of painting, the question did not seem to be “what is left to paint,” but rather “why paint at all?” Photography gave birth to abstraction in painting; then figurative painting made a comeback, from pop art to hyperrealism… In the 2000s, many art students would paint Hollywood movie stars, both as a critique and as part of a system… Lately, students seem to have shifted from celebrities to themselves, as if the self-curation of social media profiles turned them into the celebrities they used to look at. The same inside-outside relationship emerges, and it is unclear where the self-adoration ends and where the critique begins.
Ubiquity seems to be key in Sensing Practices. Sensing Practices are almost always practice-based, as they need to entertain an entangled relationship with their subject, rather than observing from the outside. Yet I find there is a disturbing paradox in the practice-based paradigm. Practice-based research claims to do research through practice; yet what understanding of “practice” can researchers truly access? It is consistently researchers (understand: theoreticians) who take on a practice, and never a previously practicing practitioner who takes on the research endeavour. I wonder in what ways this affects the type of practices that are effectively used for research. Practice-based research comprises a promise that cannot be fulfilled, the way I see it. The embodied knowledge of nurses, lumberjacks, miners, homeless people (who have learnt to survive without a roof) or any other practitioner who would be given a chance to sit, reflect, compare and think on their unparalleled experience would resolve the palindrome that is “research-based practice.”
I recently read the first essay of the book “Technologies of Gender” as part of my multimedia report/research. It was written in 1987 by Teresa de Lauretis. It addresses the question of gender in the poststructuralist theoretical framework and postmodern fiction. It examines the construction of gender both as representation and as self-representation. De Lauretis is Professor Emerita of the History of Consciousness at the University of California, Santa Cruz.
It builds on several other existing theories, such as Keller’s critique of the genderization of science, Foucault’s theory of the technology of sex, and Louis Althusser’s theory of ideological interpellation and self-representation. Foucault’s technology of sex proposes that gender, both as representation and self-representation, is the product of various social technologies such as cinema (the media), institutional discourses and critical practices. De Lauretis points out that Foucault’s main flaw is to ignore the differentiated solicitations and experiences of male and female subjects.
Like many other authors uncovered these past few months, reading de Lauretis makes me feel like a meaningless guest in the galaxy of thoughts that is the feminist rewriting of cultural narratives. I evidently lack the background and intellectual rigour of experienced researchers. This reinforces my hope that the idea of looking at myself as a social subject of inquiry might become a valid approach: to uncover the attachments and implications of my “en-genderization,” in an experience of class, sex and race. Gender as described by Foucault: “the set of effects produced in bodies, behaviours, and social relations.” But also as extended by de Lauretis: the process of social technologies, bio-medical apparati, etc. She also claims that all of Western art and high culture is the engraving of the history of gender construction. She notes that Louis Althusser’s definition of the ideological state apparati (including the media, schools, the courts, and the family, whether nuclear, extended or single-parented) may now be extended to the academy, the intellectual community, avant-garde artistic practices, radical theory, and feminism.
Marx defines a class as a group of individuals bound together by social determinants and interests which, as in the ideological framework, are neither freely chosen nor arbitrarily set. Gender, unlike Marx’s idea of class, represents a relation between individuals, not an attribute of one or several individuals.
I hypothesize that opinion making is mostly environmental, but also that it is highly temporal and unstable. All of the daily decisions that rely on quick, iterative necessities or choices do not feel like they define me. “I like this song,” “I am hungry,” “I will read this book this month,” “I think this is the right decision.” Cognitively, a decision in the short-term dimension shouldn’t take long to settle, and thus cannot imply any higher significance. For that reason, we are able to make these decisions under uncertainty and doubt, and we do not feel attacked when someone calls these “short-term” opinions into question.
Our opinions, on the other hand, are constitutive of our identity. “I am a feminist,” “I am a moral human being,” “I am open-minded, willing to share, and inclusive…” They are directions that guide us in the long term. They come into play when we need to make an important decision, such as the area in which to work, or the people with whom to spend time… And in a way, this could align with a need to place ourselves in comfortable zones. While we can tolerate being put in uncomfortable positions for a short period of time, no one would consciously choose to do so repeatedly. Similarly, we need to be comfortable with our self-representation to be able to live with ourselves. Our self-representation needs to be solid and rarely leaves space for doubt. For that reason, our set of values is cognitively “engraved in the rock.”
Aside from the necessity to feel good about ourselves, I fail to see an obvious logical reason for these affirmations. Another paradox is to be found in the difference between “hard facts” and “beliefs.” Facts might include climate change, the endemic corruption of the political class, the weight of industrial lobbies in the EU… Despite their “absolute,” “simple,” “immutable” nature, one can find oneself in the position of doubting them. They certainly are subject to partisanship, but I am more interested in their effect at the individual scale. “Beliefs” can be described as impermanent and transitional. They often are more complex, too. They are based on perceptions and interpretations, never on hard numbers. Yet, cognitively, they carry much more weight than the judgments we make on proven facts. Is it because their complexity makes them so unique in our eyes?
The revolution of quantum mechanics and its supernatural phenomena (entanglement, the quantum leap, the wave-particle paradox, the weight of “gazing”…) has produced waves in almost every field of research: mathematics, philosophy, the humanities, psychology… It has affirmed the scientific value of doubt and uncertainty. It has made it possible to use approximated values in calculation, and it has allowed unprecedented results that are yet to be exhausted. In various ways, it has also dismissed the binary understanding of many problems and definitions, placing everything in perspective and on a spectrum, rather than on one side or the other of a threshold. It is easy to imagine how this cognitive revolution paved the way for feminist STS and its methodical way of questioning and reassessing results hitherto taken for granted.
I hypothesize that this new value assigned to doubt has yet to reach our bodies and our ways of sensing and experiencing the world. But if it continues to gain importance and to produce results in most critical practices, we will stop being scared of uncertainty. It is not unreasonable to think that we will learn to function with doubt, and that it will become part of our innate tool set, by some sort of epigenetic process.
In response to these previously described assumptions, my strategy is to:
1 — Investigate my own social technologies, as defined by de Lauretis, Foucault and Althusser. I will focus on “the media,” list key fictional figures (cinema, games…), and assess their ideological value.
2 — From this, create a “persona” or “avatar”, who will represent some assumed ideological value that my body subsequently comprises and has assimilated.
3 — Create another persona based only on self-perception, in the form of artistic writing.
4 — As a way to impersonate the cognitive dissonance that I feel towards these two supposedly embodied personas, I will concatenate them into one final, dissonant character. I am hoping to come up with an interesting representation of the distance between environmental factors and self-perception. I am thus looking for a way to “break the mirror” of self-perception. Despite the utopia and manifest recursivity that go with this idea, I feel compelled by an attempt at more lucidity. How can I become the spectator to my own bias? What could a verification process look like?
My reflection on opinion-making and self-representation leads me to ask: “Is cognitive recompense the only fuel behind every decision?” As such, I think it draws on computationalism, the family of theories about the mechanisms of cognition which, in rough terms, says that cognition = computation. Although the critique of this paradigm (i.e. the fact that it fails to address the broad, complex context of a subject undergoing an experience) is quite sensible, it remains a valid framework for thinking about various behaviours. Furthermore, as we gain understanding in Machine Learning/AI, and machines can be taught complex sensations, the paradigm no longer seems to fail to address context.
Helen Pritchard pointed me to authors who have used similar experimental writing methods in the past, as a way to “become a listener.” Nietzsche apparently created conceptual personae. She also suggested I look at Brian Wynne, John Law, and Alan Turing’s “Can computers think?”
“Paper Synth” is a sound performance made from pieces of paper and actuators, controlled by a MIDI controller, an Arduino, and a webcam; contact microphones are attached to each paper module. The sheets of paper are stretched on wooden frames to maximize the sonic properties of the paper. The slightest scratch, touch, scramble, graze, brush, scrub, or breeze produces apocalyptic synthesizer sounds. It is an experiment that summons extremes: the fragility of the paper and the physical intrusion of amplified sounds. Sound intervenes in a spatial dimension, but we cannot act on it directly. You cannot redirect, stop, or cancel a sound once it has been emitted and its waves spread at 343 m/s. Paper, too, has a particular relationship to dimensions. It is the tool we use when we try to reduce our three-dimensional environment to a two-dimensional world. The screams of paper are traditionally figured through drawing or painting. But after centuries of torture, the paper gave all it could see. The webcam of one of the modules reads the shadows that are cast on the sheet of paper and turns them into sounds. Shadows, the ultimate non-physicality, turned into the equally ambiguous paradox of a matter that is sound.
Making art with paper by drawing on it is no longer a very sincere gesture nowadays. It has become choreographed, trained, and repeated through generations. How can we do justice to new ideas with such old methods? To break with tradition, we decided to make a piece of art with paper, but to offer it to the ear rather than to the eye. A way of deceiving the tools, the audience, the performers, the bodies; of really appropriating them; and of turning them towards new, unchoreographed and unrepeated body gestures. The ambition is to create unfamiliar gestures, which we do not yet know. If no body has played this instrument yet, then we cannot say if it is well played. Marcel Duchamp wrote in his work
The performance was created by Raphael Theolade and myself as part of our final project for the “Special Topics in Programming for Performance and Live” course with Prof. Atau Tanaka and Balandino Di Donato, on the MA Computational Arts at Goldsmiths, University of London. Camera: Ankita Anand, Eirini Kalaitzidi, David Williams (thanks). Entirely programmed in Max/MSP/Jitter, using MIDI controllers, Arduino/serial communication, and Node for Max with the PoseNet model for embodied controls.
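To give a sense of the shadow-reading module, the mapping can be sketched outside of Max. The sketch below is a hypothetical reconstruction in Python, not the actual patch: the function name, darkness threshold, and note range are all assumptions of mine; the real logic lives in Max/MSP/Jitter, fed by the webcam.

```python
# Hypothetical sketch of the shadow-to-sound mapping (the actual piece
# was patched in Max/MSP/Jitter; names, threshold and ranges are assumptions).

def shadow_to_midi(frame, threshold=0.35, note_range=(36, 84)):
    """Map the proportion of shadowed pixels in a grayscale frame
    (a 2D list of brightness values in 0.0-1.0) to a MIDI note and velocity."""
    pixels = [p for row in frame for p in row]
    dark = sum(1 for p in pixels if p < threshold)
    coverage = dark / len(pixels)  # 0.0 = fully lit, 1.0 = fully shadowed
    low, high = note_range
    note = low + round(coverage * (high - low))  # more shadow -> higher pitch
    velocity = round(40 + coverage * 87)         # deeper shadow plays louder
    return note, min(velocity, 127)

# A fully lit frame produces the lowest note at minimum velocity:
print(shadow_to_midi([[1.0, 1.0], [1.0, 1.0]]))  # (36, 40)
# A half-shadowed frame lands mid-range:
print(shadow_to_midi([[0.0, 0.0], [1.0, 1.0]]))  # (60, 84)
```

In the performance itself, the resulting note and velocity pairs would be sent to the synthesis chain; here they are simply printed to show the shape of the mapping.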
I have been playing with the idea of using existing formats and frames to write my multimedia report. The idea of using several voices (creating a conceptual persona) came to mind, but also simpler ideas, such as playing with the typeface settings and “dull” formats, such as a disclaimer… In the first year of my BA Fine Art, I started writing a “Code” that contained rules and limitations on the formats, subjects, and modalities of my future art practice. It was written in a style that mimicked the “Code of Obligations,” part of the Swiss civil law. I was amazed to realise that the way laws were written was so scholarly and literary. It sounded as if it allowed one assumption or its contrary with equal strength. For this report, I would like to write a disclaimer that would notify a possible reader of my caveats and of how I imagine my work integrating with all the preceding bodies of work.
Caveats. How is feminist STS concerned by the question of projection? I was first worried about the question: “What is it I could bring to the table of feminist STS?” Very little, for sure. Unable to answer this question, I then flipped the “material scenery.” It is not the movie of feminist STS in which I have a cameo; rather, the movie is about me, and feminist STS is the new character with whom I interact. My seemingly unrelated en-gendered body now plays a role in my own movie. The personal is political.
It remains possible that anything I write or produce might be the humble guest in the expanding feminist STS universe. As a well-educated guest, I feel I should refrain from making unsolicited comments on the colour of the carpet or the curtains. Rather, the appropriate way seems to be asking my hosts all the questions I can, to show my sincere interest. I can then compare and analyze my own experiences and observations as a “social subject en-gendered by/in race, class, and technologies of sex.”
The experience of the Computational Arts-Based Research and Theory journal has been freeing. I think of Donna Haraway’s words and the need for research, notwithstanding the “level” at which it is carried out. “We must think! The unfinished Chthulucene must collect up the trash of the Anthropocene.” Research can be simple and sincere, and, like unsupervised machine learning, its outcomes are unknown. Such an approach is stimulating and refreshing. A search for the uncommon, the unthought, the untold, and new combinations of words and signs that induce old or new thoughts in new or old bodies. I think of Lucy Suchman’s idea of re-figuration on every poster, in every corner of every street. I think of Mojca Punjer’s description of Postgravity art, as ideas for artworks for a place that doesn’t exist yet. I think of Janez Janša(s) and their politics through art and the administrative/legal apparatus. I think of Forensic Architecture, and it makes me want to do more, to enter the reality of suffering bodies, and to try to understand their experience beyond traditional media. I think of the emojis, and how no research subject is too small: as insignificant as one might sound at first, it might talk about, and to, billions of bodies, and we might learn something about the way they function. I think about Garnet Hertz and his collection of disobedient electronics. Naomi Wu and her experience of being deceived by Vice. Julian Oliver and his uncompromising approach to art: “Anything non politic doesn’t need to exist.” I’d kindly agree if I hadn’t had the strongest experience with some form of poetry. I think of Allison Parrish’s work on Machine Learning. I think of Femke Snelting and the Free Software Movement. I think of Susan Schuppli and how she is able to act on global politics, when it would sound out of reach to any other human. I think of Paul Preciado and the necessity of extreme experiences to reveal higher truths about the pharma industry and its effect on one body, and thus all bodies.
I think about Alan Turing and the sad destiny of a unique mind. I think of them all, and I think of the bodies I know, and where they belong in this panoply of bodily description.