Vignettes of the not-yet

02014.03.31

[image: towers]

I’ve been writing proposals and papers for the last few weeks. I miss making things up. Here are some made-up things: five snapshots of the near future for people in cities.

Sonopod networks

The technological offspring of cheap audio gadgets that turned any surface into a speaker and the ‘throwies’ developed by early tech pioneers Eyebeam, sonopods were initially used by art pranksters and the remnants of the Occupy movement to set up ad hoc communications networks during events and interventions: the tiny sticky pods, scattered across facades and windows, powered by the kinetic energy of the throw that catapulted them skywards, would set up microvibrations that turned the fabric of the city into a pulse-carrying message platform. Each pod used its weak RFID transmitter to co-ordinate with its nearest neighbours, relaying simple messages to its peers in an ad hoc swarming network that relied on having thousands of sonopod nodes in the hands of the network creators.

Lacking the budget for the direct actions beloved of the early 21st century, the police instead recruited Ogilvy in an attempt to counter-jam the culture-jammers, a move so successful in co-opting protestor chic for the high street and the boardroom that activists were soon able to rely on more efficient network infrastructure donated by the ad agencies, leaving their sonopods in the basement alongside their V for Vendetta smog masks. It fell to construction companies to adopt the technology, using the sonic networks to bypass the increasingly crowded radio spectrum that made traditional mobile communication unreliable and unusable for critical infrastructure projects.

Once habitat printing technology had been brought to the mass market by Barrat, contractors began to use more and more resonant materials in their extruded designs, making sonopod communication even more efficient, until the mid-century north-bound locust migrations demonstrated all too tragically the similarities between the resonant frequencies employed on the sonopod network and those generated by the flapping of thousands of wings. Now the devices are used mainly by NGOs for rapidly networking the pop-up cities that spring up in the interstitial regions of the world, helping entrepreneurs and aid agencies alike by establishing pervasive communications in a matter of days rather than years, the only dark spots indicating the presence of resolutely unresonant dwellings made of inexpensive mud, or specially-imported granite.

Piezo hacking

Posing as contractors at a metro station, the gang carried out the theft by installing a new layer of piezoelectric flooring over the existing municipal layer, siphoning off commuters’ collective energy. Using human workers for labour-intensive jobs was a new thing, reflecting both a recent public appetite for ‘artisanal’ small-batch engineering projects and a corporate taste for the cheapest labour option: a rash of criminal resource thefts like this and the infamous Melbourne reservoir appropriation would lead to the rapid reappraisal of an approach that made a brief but immense contribution to the criminal ‘resource-laundering’ being carried out on a global scale. The metro company revoked the energy discount on everyone’s travel to make good the loss, before hiring forensic visual data scientists from G4S to track and detain the gang, identifying them in just days despite only being able to make use of cumbersome quantum computers rather than the recently-developed ‘string looms’, still the province of top-priority researchers in hydrology and space weather. The thieves were named in over 4,000 separate civil suits brought by individual commuters, an important landmark for the acceptance of automated legal work. However, as they’d all purchased rural identities some weeks previously, they were able to defray the costs with their own subsidies from UNAGRI, originally designed to mitigate the harassment meted out to rural companies and individuals by actors within city jurisdictions.

Cloud mugging

Using modified city-issued personal tasers to rapidly ionise the air around a mark, depleting oxygen levels and causing unconsciousness without contact, perpetrators could avoid the hazards involved in touching computationally-enabled clothing able to analyse the qualities of physical contact and communicate this information (alongside a DNA sample) to the relevant law enforcement agencies. Once their wearer ceased moving, the clothing computers’ batteries would run down rapidly, making it possible to swap the garments for anachronistic cotton ones that left the victim no defence against the indentured service that was the destiny of anyone outside the city networks.

Vermin mesh

With vermin – rats, pigeons, foxes, seagulls – recognised for their ability to penetrate the seams of the city, and the city network facilities team struggling to achieve a similar degree of coverage, the mood in city hall was ripe for the sort of solution currently being described as a ‘policy flip’: using the weight of a mighty problem to spin it around, ju-jitsu style, into a solution as powerful as the issue had been dire. The vermin were chipped, swallowing tiny transponders, signal boosters and relay points placed strategically in the dumpsters of petribeef left outside every corner bistrotech in the evenings. Powered by the scurrying movements of their carriers, these components came together in a mesh that spanned the city: where, historically, it had been common for cynical citydwellers to say “you’re never more than six feet from a rat”, now it was true that they were never more than a metre from a node in a network that was open, robust and truly pervasive. In practice, rather than carrying people’s chatter, this city-wide mesh carried traffic from the infrastructure monitoring bots that kept things running, freeing consumer bandwidth: the chattering, wheeling robogulls, echoing the old modem screams, and the scuttling rats in the ceilings helped building speak to building, away from the busy human and AI noise. Genetically-modified strains of the common pathogens carried in these populations were introduced, performing a role in the network similar to that of the headers in the old internet protocols.

Now, of course, the biological substrate is largely silicon, since Latvian separatists hacked discarded army Big Dogs and sent them into cities to hunt the rats, these terrier-ists literally packet-sniffing their way through pipes and sewers to unravel the maintenance network and condemn buildings to decay and collapse. The robots succeeded only in adding their clumsy bodies to the network, each one replacing the node it removed: city facilities saw the potential and began to introduce retired medical carebots, early Festo-inspired biomimics and antique Roombas to the network. Below our feet and in our aircon channels, this first generation of servants lives out its days, labouring unnoticed, keeping our towers upright.

City soundtrack

Shivering in their easyprint regolith igloos on Nova Lincoln by the Mare Moscoviense, the first Realty Expeditionary Force took comfort in the real-time sounds of the city piped to their jawbone monitors. Waking to silence one morning, they were the first to realise the extent of the disaster back home.

So I can’t make much of a claim for originality, perhaps. But these might serve as useful catalysts for unpacking other people’s ideas of the future, prompts to act as the ground for critical questions like these:

  • When are these set? What would make them happen earlier? Later?
  • What relationships are in the story? Which ones are new? Which ones exist today?
  • What myths are persistent? Which archetypes has the author struggled to free himself from?
  • What seems new? What is new?
  • Who benefits? Which groups do well in these futures? Who doesn’t? Does that ring true? Would that still be true then?
  • What would need to happen to prevent or encourage these futures? What can you do to help?


Making located futures

02013.10.28

A while ago I was thinking about the idea of located futures – narratives of the future that explicitly locate themselves in a particular place, reflecting the concerns of the people that constitute it. I tried to explore some of the theoretical foundations of the idea in a paper. However, the point of the idea, for me, was always practical. I think that the process of constructing actual located futures would be a valuable activity for people to undertake.

The point of ‘located futures’ is that they pay attention to the embodied and experiential nature of space. So it seems inappropriate to produce a purely textual account of a possible future for a space. Instead, I want to explore how a group of people can create a way of experiencing a place as if the future they imagine were somehow already present. The intervention they create breaks the boundaries that keep the present and the future separate. Their response to the experience helps them to understand their role in bringing this future about, and the impacts this possible future might have on their lives. It’s about speaking to their hearts and imaginations, and text isn’t very good at doing that. But without doing that, futures remain theoretical and present action seems less necessary. And the focus of this whole process is to help people connect the future to their present actions.

(more…)


USGNOSTICOM

02013.07.08

One of my favourite short stories contains the line, “A man taking pictures of a man taking pictures: there must be something in that”. It comes to mind whenever I see something that seems somehow meaningful but in ways I’m not perceptive enough to understand. The last time I thought of it was when I read that the US military has set up a unit to deal with cyber-warfare, and that their Naval Academy will offer a major in “cyber” – in short, that the US military, and presumably by extension all other militaries, now see their operations as taking place in “all areas – air, land, sea, space and cyber”.

Space activities are covered by the air force. So military operations are distributed amongst the air force, navy, army and ‘cyber command’. The correspondence with the traditional elemental quartet is hard to miss. The military organisation divides the world into air, water, earth, and another element – perhaps we can say ‘fire’. It seems a good fit, to me, and shows the world in a new light. Imagining the virtual in terms of an ancient elemental ontology brings it back to its proper place, alongside the rest of the mundane world – digital space has nothing to do with the angelic realms of the pleroma. How can we understand the digital in this elemental way? What would an esoteric analysis of the digital look like? How many apocalyptic conspiracy theories would be lent new urgency by this Gnostic patterning of a hegemonic power? Does employing an ancient form of making meaning help us to counter the insistence that the digital and the networked demand uniquely novel ways of seeing the world?

Who knows. I don’t. But the dominant military power aligning its organisational structure to an ancient way of describing the world – there must be something in that.


Mime types

02013.06.10

Researchers at the University of Washington have worked out how to detect the tiny variations that a moving human body makes in a wifi field, and built a gesture interface with it.

WiSee is able to detect and identify nine different gestures with 94% accuracy, and the team says that the next version of the system will be able to recognize sequences of gestures and accept a wider “vocabulary” of commands. “The intent is to make this into an API where other researchers can build their own gesture vocabularies,” said Shwetak Patel, one of the lead researchers.

The team has successfully used it to control electronic devices – changing the channel on the TV, turning the lights on and off – and anticipates many household applications. They acknowledge that they will have to improve the technology to prevent unauthorized use and to restrict it to a specific area with a ‘geofence’.
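
It’s easy to daydream about what that gesture-vocabulary API might feel like. Here’s a purely hypothetical sketch in Python – the names and signatures are my own illustration, not WiSee’s actual interface – of letting researchers register their own matchers over Doppler-shift traces:

```python
# Hypothetical sketch only: NOT the real WiSee API, just an illustration of what
# a "bring your own gesture vocabulary" interface might look like.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# Assume a gesture arrives as a short trace of Doppler-shift features extracted
# from the wifi signal, one float per time slice.
DopplerTrace = List[float]

@dataclass
class GestureVocabulary:
    """Maps gesture names to matcher functions supplied by the researcher."""
    matchers: Dict[str, Callable[[DopplerTrace], float]] = field(default_factory=dict)

    def register(self, name: str, matcher: Callable[[DopplerTrace], float]) -> None:
        # Each matcher returns a confidence score in [0, 1] for its gesture.
        self.matchers[name] = matcher

    def classify(self, trace: DopplerTrace, threshold: float = 0.8) -> Optional[str]:
        # Return the highest-scoring gesture, or None if nothing is confident enough.
        scores = {name: m(trace) for name, m in self.matchers.items()}
        if not scores:
            return None
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None

# Example: a crude "push towards the router" matcher that just looks for a
# sustained positive Doppler shift.
vocab = GestureVocabulary()
vocab.register("push", lambda t: 1.0 if t and sum(t) / len(t) > 0.5 else 0.0)
print(vocab.classify([0.7, 0.8, 0.9]))  # -> push
```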

I think it’s huge. I mean, I don’t know if it will be huge, but it could fundamentally reconfigure our relationships with the various outposts of machine intelligence with whom we cohabit. It makes worries about the Kinect’s permanent state of eavesdropping seem a bit quaint. You have to assume that you’re not alone, even when you are. Perhaps it’ll be easier to get used to for people with butlers.

How hard it will be – at the moment, lacking conventions of gesture and positioning – to tell what or who someone flailing their arms around in your living room is trying to speak to. When we communicate there’s historically been a way of indicating who we expect to be listening, either with some sort of eye contact or by pointing a remote control at them. This technology breaks that connection. Where’s the feedback? How important is it for other people in the room to know what you’re doing? Is it possible to design politely? This makes Timo Arnall’s “no to no UI” message even more urgent.

So it’s worth thinking about the cultural repositories of gesture that we have access to already and that we could draw on when working out how to fit this new magic into our lives. Tai chi and the priestly movements made in places of worship are both existing modes of embodied intervention in an unseen but pervasive medium – perhaps in these traditions there might be some answers to the problems raised by this new kind of interaction.

There’s a whole other set of questions about what might be done with the knowledge embedded in this system, of course. Imagine a room full of machines that don’t function for people who have been behaving in a way that fits the movement profile of an undesirable. Or wifi that’s faster for tall, graceful people. Or localised versions to accommodate dialects that make more use of body language than standard British English. Perhaps there will be more positive things – maybe an entertainment system that recognises toddlers in the room and turns screens off, or an application that recognises unfocussed flitting between distractions, or texts you when someone in your care has depressive body language. But however you spin it, there’s a huge potential for invasive and constraining applications that limit individual autonomy. Which seems a high price to pay for a new way to change the CD or turn the aircon up, the applications suggested by the demonstration video. I’m increasingly ready to believe in a future me covered in dazzle facepaint walking furtively along a tube station, dressed in Hyperstealth camo, turning my wave-deflecting rosary over and over in my fingers to ward off the data demons.


Making futures real

02013.05.27

Talking about the future is a very abstract thing to do. It’s hard to relate action in the present to a particular future. Even if you’re convinced of a link between doing certain things today and how the world will look later as a result, it’s difficult to really internalise that relationship if there’s any kind of gap between cause and effect. But, of course, there usually is a gap, and when talking about the future that gap is usually years or decades.

I’ve been thinking about ways to make that connection between present action and future circumstance more visible. There’s a lot of existing work that concentrates on generating material artefacts that can be imagined to come from some future, from people producing things like ‘design fictions’, ‘experiential scenarios’, ‘diegetic prototypes’, or similar. This sort of approach is unquestionably a more engaging and effective communication tool than the text-based scenarios more usually generated by futures work. I think they’re much better at catapulting your imagination into some future time – they’re a fantastic way of making the future less abstract. But they are quite static, arriving fully-formed in the present (at least for those people who aren’t involved in the process of creating these imaginary future artefacts). And the thing that’s been exercising me in particular about the future is the way it goes from being an immaterial and uncertain possibility to a material and certain fact. As time progresses and people keep doing things, what were abstract potential outcomes become real and definite. The dynamic nature of that process, and the way it takes place in a single location, is sometimes lost in traditional futures communications. I end up with a sense that we in the present are in one place, and the people in the future are in another place, and there’s no connection between them.

What if there was a way of making an object that became more real as time went by? One linked to a particular future, so as that future became more likely or more established, as yours or other people’s actions tended towards bringing it about, the object would become more real, more extended in our present world. What if you could see a tangible future unfold in front of you?

I’m imagining some sort of form that gets generated within a constraining template at a rate that corresponds to the progress of a set of indicators. The template stands for a particular willed future, or perhaps a feared future: the indicators are the facts about the world that have been chosen to represent this future. They might be measures of particular substances in the environment, or data from organisational activity, or records of the movement of certain organisms. You’d need to have done a lot of work to establish what sort of data you’d accept as a measure of your future arriving, and what compromises you’d accept in your model.

This arrangement makes it possible to collapse a multidimensional set of indices into a single answer to the question “is it closer or further away?”. It’s the opposite of a big data dashboard, in that sense: rather than representing multiple streams of data that the viewer may or may not be equipped to parse and interpret, it relies on you to have settled on an outcome and a model and then just tells you how things are progressing. Not which things, or which ones are moving faster, or which ones are retarding progress—there are lots of tools that help you do that already—just how they’re all doing cumulatively. This is a way of refocussing on the big picture.
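
To make that collapsing concrete, here’s a minimal sketch in Python – my own illustration, with made-up indicator names, not a finished design – of how several weighted indicator streams might be folded into a single ‘how close is it?’ figure that could drive the object’s growth:

```python
# Minimal sketch, assuming a hand-built model: each indicator has a baseline,
# a target value that stands for "this future has arrived", and a weight
# reflecting how much the model-builders trust it. All names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Indicator:
    name: str
    baseline: float   # value when the model was started
    target: float     # value that would count as the future having arrived
    weight: float     # relative importance agreed during model-building

    def progress(self, current: float) -> float:
        """Fraction of the distance from baseline to target, clamped to [0, 1]."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (current - self.baseline) / span))

def combined_progress(indicators: List[Indicator], readings: List[float]) -> float:
    """Collapse all readings into a single 'is it closer or further away?' figure."""
    total_weight = sum(i.weight for i in indicators)
    return sum(i.weight * i.progress(r) for i, r in zip(indicators, readings)) / total_weight

# Example: three made-up indicators for a willed low-carbon future.
model = [
    Indicator("cycle journeys per day", baseline=10_000, target=50_000, weight=2.0),
    Indicator("rooftop solar installations", baseline=500, target=5_000, weight=1.0),
    Indicator("average NO2 (falling is good)", baseline=40, target=20, weight=1.0),
]
print(combined_progress(model, [22_000, 1_400, 33]))  # drives how far the form has grown
```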

So what does it actually look like? Well, it would need to have a mechanism for growing or building, and a network interface for communicating the current state of the model. It could be something like:

  • coloured gas in a transparent, unreflective container, gradually condensing into a solid form
  • successive layers of material laid down by tiny nanobots
  • a wire form gradually colonised by ferrous crystals stimulated by varying degrees of current
  • a solid cube that degrades faster or slower in different places to reveal the future inside (perhaps the whole thing rots if progress ceases)
  • a scaffolding armature along which tiny builder robots zip horizontally or vertically with Lego blocks, laying down the future one brick at a time
  • a bonsai tree whose twists and turns represent inflections in the data, turbulence in the future history of your chosen world

Whatever form is chosen, it ought always to be started partly built, reflecting the historically contingent nature of the future – there are latent futures already with us, future societies and environments not yet visible but in progress. And ideally it would just run on its own, with minimal input from the person tending it: if you want to change the shape, act differently.

Speaking of contingency, the networked nature of this object would make it possible to take the output of other similar objects as an input. We could have a networked forest of futures in different stages of becoming, co-operating or competing or in a dynamic equilibrium.
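
That chaining could be as simple as treating another object’s published score as one more indicator. Continuing the hypothetical sketch above (it reuses the Indicator and combined_progress names defined there):

```python
# Continuation of the sketch above: another future-object's combined score,
# fetched over the network, becomes just another indicator here (baseline 0,
# target 1), alongside the local data.
upstream = [
    Indicator("neighbouring future-object's score", baseline=0.0, target=1.0, weight=1.0),
    Indicator("local air quality index (falling is good)", baseline=80, target=40, weight=2.0),
]
print(combined_progress(upstream, [0.29, 62]))  # this object's own published score
```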

So an object like this – and the corresponding effort to imagine and model a possible future to shape it – clearly poses some important philosophical issues about time and the way we are able to represent change and possibility. But what sort of practical purpose could you put it to? Maybe you could use it to:

  • Support work towards a common goal – perhaps teams that are distant or working in different fields could each have an object that made their common progress visible.
  • Make progress in a complex situation more visible – environmental issues, for example, are often hard to grasp due to their multifaceted character
  • Give a whole community some insight into a local or global issue – having a large-scale future object as a public installation could support local efforts to work for a shared environmental or social future, perhaps linked to air quality monitoring stations or local crime reports.
  • Connect to existing project management or modelling tools – a Basecamp plugin for a desk-based version might be a useful way for individuals to stay focussed on the big picture.

Really, anything that you’re happy to track could be used as an input. My feeling is that it becomes more valuable as a way of representing the not-yet-here as more dimensions are included in the model. If you only have one thing to track, a church thermometer would do a better job. But for representing complex futures that are the result of hundreds of social, economic and environmental interactions, without inviting the viewer to drown in a sea of reports and visualisations, this kind of object might have an important role to play.

Of course, there’s a lot to be done to make one of these real. If I had a chance to try making one, I think there are three distinct areas of work:

  • Materiality: what actually works? I’d like to work with designers, technologists and materials scientists to explore the different approaches outlined above, and find a way of growing structures at a controlled rate that’s safe and reliable enough for domestic or office use.
  • Modelling: obviously this is far from a new field, but in the context of this project, what kinds of model offer the most potential? What sort of outputs are most useful? What kind of detail is necessary? How do you ensure such a reductive approach to the future is productive? I could learn a lot from speaking to economists, climate scientists, programme evaluators and other people used to trying to quantify the unquantifiable.
  • User experience: what sort of response do people have to these objects? What kind of form do people relate to most readily? How do you communicate the context around the form so that it isn’t taken for the only future but just one possible future?

I’d love to have a go, I think. It would be a fascinating project, dealing with knotty metaphysical issues and practical challenges, and all in the service of helping people understand what kind of futures they’re making.
