The Programmable City Project is holding a meeting on Data and the City in Maynooth from August 31 to September 1. You can get Rob Kitchin’s and Mike Batty’s papers from this blog, but the programme is diverse and the Project web site contains details of the various contributions. I argue in my paper that we need to redefine big data in terms of the tools we use to interpret it, and that size is not the main criterion: quite modest data sets in cities, particularly those dealing with transport and flows, are too large for most of our current tools, which involve statistical manipulation and visualisation. Here are two of the papers to be presented:
Michael Batty: Data About Cities: Redefining Big, Recasting Small
Rob Kitchin: Data-Driven, Networked Urbanism
The image above was produced by Stephan Hugel and Flora Roumpani from their animation of tweets in London using City Engine. Click here for the movie.
Ideal Cities, such as Frank Lloyd Wright’s mile-high tower The Illinois (pictured here) and Le Corbusier’s City of Tomorrow, have fallen out of fashion in recent years. But the rise of the smart city and the notion of the instrumented district, together with our current concern for Future Cities, is beginning to resurrect such theories. The current editorial (click here) in Environment and Planning B deals with these ideas and inquires into the optimal size for such ideal cities. In fact the ideal city and its close economic comparator, the optimal city, are long-standing issues that I discuss in this short note with respect to questions of how we measure such optimality. There are two main approaches. The oldest is the visual approach – ideal cities tend to be geometric purities – which reached its pinnacle during the Italian Renaissance, while the second is concerned with optimal city size – optimal populations – ranging from Plato’s 5040 to Ebenezer Howard’s ideal garden city of 30,000 to Le Corbusier’s City of Tomorrow at 3 million. I deal with some of these issues in this editorial, but the question of how all this relates to the smart city is something that I do not discuss. I will blog about it soon. I should also note that the journal in which this editorial is published is now owned by Sage, and the journal’s new web site, with its contents including the editorial, is here.
Clementine Cottineau initiated our work on paradoxical interpretations of urban scaling laws using the example of the French city system. The paper is now on the arXiv and you can get it by clicking here or by going direct to the arXiv. Scaling laws for cities are controversial, and the furrow we have ploughed here in CASA argues that the definition of the city system is all-important in measuring the effect of scaling: if the definition changes, so does the scaling. Scaling is an intriguing concept, and the notion that as cities grow they get more than proportionately richer has many policy implications which fly in the face of the small-is-beautiful movement that dominated city planning for most of the last century. Here Clementine Cottineau and colleagues have unpicked the city system in France, showing much the same as we derived for the UK city system, or rather England and Wales, which we reported in our Interface paper: namely, that evidence of superlinear scaling for income and other creative industries is volatile and ambiguous. There is much more we might and will say about this topic, but a key issue that we are thinking hard about is what happens to inequality as cities get bigger.
The abstract from Clementine’s paper is as follows: “Scaling laws are powerful summaries of the variations of urban attributes with city size. However, the validity of their universal meaning for cities is hampered by the observation that different scaling regimes can be encountered for the same territory, time and attribute, depending on the criteria used to delineate cities. The aim of this paper is to present new insights concerning this variation, coupled with a sensitivity analysis of urban scaling in France, for several socio-economic and infrastructural attributes from data collected exhaustively at the local level. The sensitivity analysis considers different aggregations of local units for which data are given by the Population Census. We produce a large variety of definitions of cities (approximately 5000) by aggregating local Census units corresponding to the systematic combination of three definitional criteria: density, commuting flows and population cutoffs. We then measure the magnitude of scaling estimations and their sensitivity to city definitions for several urban indicators, showing for example that simple population cutoffs impact dramatically on the results obtained for a given system and attribute. Variations are interpreted with respect to the meaning of the attributes (socio-economic descriptors as well as infrastructure) and the urban definitions used (understood as the combination of the three criteria). Because of the Modifiable Areal Unit Problem (MAUP) and of the heterogeneous morphologies and social landscapes in the cities’ internal space, scaling estimations are subject to large variations, distorting many of the conclusions on which generative models are based. We conclude that examining scaling variations might be an opportunity to understand better the inner composition of cities with regard to their size, i.e. to link the scales of the city-system with the system of cities.”
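To fix ideas, urban scaling estimates an exponent β in the relation Y = Y₀Nᵝ between an attribute Y and city population N, usually by least squares on logarithms; superlinear scaling means β > 1. The sketch below (not the paper’s code; all populations and attribute values are synthetic, and the cutoff is arbitrary) illustrates the kind of sensitivity the abstract describes, where applying a simple population cutoff changes the fitted exponent:

```python
# Illustrative sketch of scaling estimation: fit beta in Y = Y0 * N^beta
# by ordinary least squares on logs, then refit after a population cutoff.
# All data here are synthetic; this is not the code used in the paper.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic city system: populations log-uniform between ~3,000 and ~3 million,
# attribute (e.g. total income) generated with a true exponent of 1.1.
n_cities = 500
pop = 10 ** rng.uniform(3.5, 6.5, n_cities)
attribute = 2.0 * pop ** 1.1 * np.exp(rng.normal(0.0, 0.3, n_cities))

def scaling_exponent(populations, values):
    """Estimate beta from the regression log(Y) = log(Y0) + beta * log(N)."""
    beta, _log_y0 = np.polyfit(np.log(populations), np.log(values), 1)
    return beta

beta_all = scaling_exponent(pop, attribute)

# A simple population cutoff, one of the definitional criteria varied
# in the sensitivity analysis: keep only cities above 100,000 people.
cutoff = 100_000
mask = pop >= cutoff
beta_cut = scaling_exponent(pop[mask], attribute[mask])

print(f"beta, all {n_cities} cities:            {beta_all:.3f}")
print(f"beta, pop >= {cutoff:,} (n = {mask.sum()}): {beta_cut:.3f}")
```

With real data the shift from a cutoff can be much larger than in this well-behaved synthetic example, because real city systems mix heterogeneous definitions and morphologies rather than a single clean power law.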
Here again is the link to the arXiv.