The Morning After: Predictions closer than all of us thought, a Divided Britain
Carlos, Elsa, Duncan and I (Mike) decided to explore next week’s British election using some of our social physics, specifically our work on uncovering urban hierarchies through percolation theory, which basically consists of looking at the connectivity of ‘representative’ areas in the street system.
This might seem an arcane way of proceeding, but when we examine England, Scotland and Wales as a giant connected cluster, and then break it up by successively reducing the distances between its nodes, we first disconnect the periphery – the Scottish islands – and then, quite suddenly, when the threshold hits 1.4 km, Scotland breaks off from the rest, pretty much evoking our sentiments about Scottish independence. Reducing this further, the North and West split from the South East at around 900 m, and this represents the fault line that was introduced to me, as a young student in the early 1960s, as the North–South divide. After this, Wales and the West Country split off separately, evoking shades of Welsh and Cornish independence. Once we hit 300 m, the big cities appear, and at the same time many of the smaller cities fill in the backcloth.
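The thresholding idea above can be sketched in a few lines of code: cluster points together whenever they lie within a given distance of one another, and watch the largest cluster fragment as the threshold shrinks. This is only a toy union-find illustration with made-up coordinates, not our actual street-intersection pipeline.

```python
import math

def percolate(points, threshold):
    """Join any two points closer than `threshold` (union-find) and
    return the size of the largest connected cluster."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < threshold:
                parent[find(j)] = find(i)

    sizes = {}
    for i in range(len(points)):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

# Two dense 'regions' separated by a gap: at a large threshold they form
# one giant cluster; shrink the threshold and they split apart, much as
# Scotland splits from the rest at the 1.4 km threshold in the text.
points = [(0, 0), (1, 0), (2, 0), (10, 0), (11, 0), (12, 0)]
print(percolate(points, 9))   # gap bridged: one cluster of 6
print(percolate(points, 2))   # gap open: largest cluster is 3
```

Sweeping the threshold downwards and recording where the largest cluster suddenly shrinks is exactly how the fault lines in the text reveal themselves.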
You could be forgiven for thinking that this is the way the British electorate might vote in next week’s election, given all the hype about the Scot Nats, the imminent demise of the Liberal Democrats, and the influence of new parties such as UKIP, which could be massive or could be negligible. We don’t know whether there will be a last-minute bounce, and if so what kind of bounce – dead cat or otherwise.
Anyway, we have had a go at producing our own predictions, and we have posted a paper on the arXiv (which will be available there next Monday), but we would like you to look at it here – Click. Please re-tweet it, as we want some feedback on this approach: it is much wider than next week’s election per se, for it represents a new way of thinking about cities, regions and nations in this connected age. Elsa has also put our more basic paper on the methodology up on the arXiv, and you can get that directly here too.
You might also want to look at our percolation movie, which is rather neat and which you can access by clicking the link here.
Urban morphology has presented significant intellectual challenges to mathematicians and physicists ever since the eighteenth century, when Euler first explored the famous Königsberg bridges problem. Many important regularities and allometries have been observed in urban studies, including Zipf’s law and Gibrat’s law, rendering cities attractive systems for analysis within statistical physics. Nevertheless, a broad consensus on how cities and their boundaries are defined is still lacking. Applying percolation theory to the street intersection space, we show that growth curves for the maximum cluster size of the largest cities in the UK and in California collapse to a single curve, namely the logistic. Subsequently, by introducing the concept of the condensation threshold, we show that natural boundaries of cities can be well defined in a universal way. This allows us to study and discuss systematically some of the allometries that are present in cities, thus casting light on the concept of ergodicity as related to urban street networks.
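The collapse of the growth curves onto a single logistic can be illustrated numerically: give each city its own scale parameters, then rescale cluster size by the saturation level and distance by the growth rate, and every curve lands on the universal logistic 1/(1+e^(-x)). The parameter values below are purely illustrative, not fitted values from the paper.

```python
import math

def logistic(d, L, k, d0):
    """Largest-cluster size as a function of distance threshold d:
    S(d) = L / (1 + exp(-k (d - d0))). L, k, d0 are city-specific."""
    return L / (1 + math.exp(-k * (d - d0)))

# Two hypothetical cities with different sizes and characteristic scales.
city_a = [(d, logistic(d, L=1000, k=0.02, d0=300)) for d in range(0, 601, 50)]
city_b = [(d, logistic(d, L=5000, k=0.01, d0=600)) for d in range(0, 1201, 100)]

def collapse(curve, L, k, d0):
    """Rescale: x = k (d - d0), y = S / L."""
    return [(k * (d - d0), s / L) for d, s in curve]

a = collapse(city_a, 1000, 0.02, 300)
b = collapse(city_b, 5000, 0.01, 600)

# After rescaling, both curves lie on the universal logistic 1/(1+e^{-x}).
for (xa, ya), (xb, yb) in zip(a, b):
    assert abs(ya - 1 / (1 + math.exp(-xa))) < 1e-9
    assert abs(ya - yb) < 1e-9
print("curves collapse onto a single logistic")
```

The point of the exercise is that once the city-specific scales are divided out, the same universal curve remains – which is what the data collapse in the paper demonstrates empirically.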
Recently I was asked to speculate on what strides had been made in urban and transport modelling during the last 20 years and what I thought models would evolve into over the next 20. The current editorial in EPB summarises my thinking. In many senses, this was prompted by the oft-quoted sentiment that agent-based models of transport – which build on many developments of recent decades, including activity time budgeting, discrete choice, and the ability of computers to handle very many objects through rapid computation – have not made the world better, but have performed much worse than earlier, more aggregative model structures. For a while there has been the sneaking suspicion that aggregate models, with all their limits in terms of representation, somehow generate more realistic predictions than their micro-dynamic equivalents. Of course there can be no true test, as these model types are so different. However, what is interesting is whether we can generalise in any way from the widest possible range of model experiences: as we add more detail and attempt to explain more, all other things being equal, are we more likely to get poorer or better predictions from comparable models? The implication is poorer, although the jury is out because the evidence has rarely been assembled. This question remains unresolved, and probably will remain so.
To an extent it might be logically plausible to show that aggregate models perform better if the strong structural constraints that determine how aggregate populations travel are difficult to represent, or rather difficult to make emerge as the product of many travel decisions within micro-simulation models. But all of this would require incredibly well-defined, controlled experimentation, and given the exigencies of the very different situations in which different models are built, it may well be impossible to come to any definitive conclusion in this regard. Moreover, this debate raises the whole question of what good prediction is anyway, which is more about models and science in human affairs than about specific types of model. Yet at the end of the day, we still have to choose between different models and different predictions, and learn to live with these tensions that are endemic in our field. The bigger question, I think, is whether our world is becoming more unpredictable – or rather more uncertain, one might say – for this does and will have important implications for modelling. I have written an editorial about all this in the current edition of Environment and Planning B, which you can download here.