simulating urban space use

How do people use spaces? Or, to put it another way: how consistent are proposed or designed spaces with the way they are actually used, and what kinds of informal, spontaneous, or unexpected uses does a space end up hosting?

This is a question that can be investigated through simulation, and more specifically through Agent-Based Modelling (ABM). Modelling at this scale is already well underway, with pedestrian modelling being a significant research area. But the question above can be approached more directly with more descriptive models (more detailed agent behaviour), in which pedestrian agents use their environments in addition to navigating them.

For my MRes Dissertation at CASA, I looked into 3D ABM, and the simulation of urban spaces at the human scale. Here’s a short video of the model running:

 

The aim here was to create an agent framework that uses its 3D environment, in addition to navigating it. In the video above, agents navigate the environment, look to situate themselves in one of the predefined spaces (sit on a bench, basically), and respect local densities and other agents’ “personal” space. Using these rules, the model provides a good description of the space it simulates.

Each agent picks a spot at random within its field of view, and checks if it is occupied by another agent within a small radius. If not, it moves and occupies that spot for some time. If the spot is occupied, the agent keeps picking spots at random within its field of view until a suitable spot is found, while moving through the space. The buildings obscure an agent’s vision, and so agents have to move deeper into the space to find more hidden spots, which is further affected by their path choice through the area.
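
As a rough illustration of that rule, here is a minimal sketch (Unity C#) of one spot-selection attempt. The class name, view parameters, and layer setup are my assumptions for the example, not the dissertation’s actual code.

```csharp
using UnityEngine;

// Illustrative sketch only: one spot-selection attempt per call.
// Parameter values and names are assumptions, not the original model's code.
public class SpotSeeker : MonoBehaviour
{
    public float viewDistance = 15f;    // how far the agent can see
    public float viewAngle = 120f;      // field of view, in degrees
    public float personalSpace = 0.6f;  // radius that must be free of other agents
    public LayerMask agentLayer;        // layer containing other agents
    public LayerMask obstacleLayer;     // buildings that block line of sight

    // Pick a random point within the field of view and accept it only if it
    // is visible (not hidden behind a building) and not already occupied.
    public bool TryPickSpot(out Vector3 spot)
    {
        // Random direction inside the view cone, random distance up to viewDistance
        float angle = Random.Range(-viewAngle * 0.5f, viewAngle * 0.5f);
        Vector3 dir = Quaternion.AngleAxis(angle, Vector3.up) * transform.forward;
        Vector3 candidate = transform.position + dir * Random.Range(1f, viewDistance);

        spot = candidate;

        // Buildings obscure vision: reject candidates the agent cannot actually see
        Vector3 toCandidate = candidate - transform.position;
        if (Physics.Raycast(transform.position, toCandidate.normalized,
                            toCandidate.magnitude, obstacleLayer))
            return false;

        // Respect personal space: reject candidates occupied by another agent
        if (Physics.CheckSphere(candidate, personalSpace, agentLayer))
            return false;

        return true; // caller walks to the spot and occupies it for some time
    }
}
```

On a failed attempt the agent simply keeps walking and tries again, which reproduces the “keep picking until a suitable spot is found” loop described above.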

Some small-scale dynamics start to become apparent through this simulation that would be hard to notice with conventional surveying techniques. For example, about 30 seconds into the video, a back-and-forth in positioning begins between the front and rear benches: the front spots are the first to be noticed and occupied, and after that point the front and rear spots alternate between free and occupied for a while. Looking only at the overall distribution, the front-spot preference would still be apparent (in agreement with the survey results), but the dynamic described above would be lost.

[Figure: Overall Distribution – Simulation]

[Figure: Overall Distribution – Survey]

One thing that is missing from the model is the informal seats: the steps around the perimeter of the front plaza and the ledges, which act as seats when the “formal” seats (the benches) reach capacity. Time limits prevented this agent behaviour from being included. This type of behaviour, however, the informal and unexpected, is a very interesting next step for urban space simulations. If implemented properly, it could provide insights into design shortcomings or, even better, into further potential uses of urban space.
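
To make the idea concrete, here is a purely hypothetical sketch of what such a fallback could look like, in the same Unity C# setting. This behaviour was not part of the model; the spot lists and names below are invented for illustration.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Not part of the actual model: an illustrative fallback rule in which agents
// only spill over to informal seats (steps, ledges) once the benches are full.
public class SeatChooser : MonoBehaviour
{
    public List<Transform> benchSpots;     // formal seats
    public List<Transform> informalSpots;  // steps and ledges around the plaza
    public float personalSpace = 0.6f;
    public LayerMask agentLayer;

    // Prefer a free bench spot; otherwise try the informal seating.
    public Transform ChooseSeat()
    {
        Transform seat = FirstFree(benchSpots);
        if (seat == null)
            seat = FirstFree(informalSpots);
        return seat; // may still be null if everything is taken
    }

    Transform FirstFree(List<Transform> spots)
    {
        foreach (Transform spot in spots)
            if (!Physics.CheckSphere(spot.position, personalSpace, agentLayer))
                return spot;
        return null;
    }
}
```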

Although I didn’t mention it much, the development was done in Unity3D, a popular game engine, which was, quite frankly, amazing. It provided a very clear platform for developing pedestrian agents and, more importantly, native support for developing the models in a 3D environment. The game-engine platform allowed the third dimension to be an integral part of the model, instead of just a visualisation output. The importance of this might not be obvious here, since the simulated area was flat; in other models, however, agents were able to detect height differences in their environment and act on that information at runtime. It might not sound like much, but being able to use 3D space integrally in a model can have a big impact on agent vision, movement, and decision trees, and ultimately help produce much more descriptive (realistic) artificial societies and simulations.
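
As a small example of what “using the third dimension at runtime” can mean, here is a minimal sketch of height sensing via a downward raycast, again in Unity C#. The method names and sampling distances are assumptions made for the example, not the original models’ code.

```csharp
using UnityEngine;

// Minimal sketch: sample the ground height under the agent and a short
// distance ahead, so decisions (route choice, where to sit) can react to
// height differences. Names and parameters are illustrative assumptions.
public class HeightSensor : MonoBehaviour
{
    public LayerMask groundLayer;    // terrain / walkable surfaces
    public float sampleDistance = 2f;

    // Positive when the ground ahead is higher than the ground here.
    public float HeightDifferenceAhead()
    {
        float here = SampleGroundHeight(transform.position);
        float ahead = SampleGroundHeight(transform.position + transform.forward * sampleDistance);
        return ahead - here;
    }

    float SampleGroundHeight(Vector3 at)
    {
        // Cast straight down from well above the query point
        Vector3 origin = at + Vector3.up * 50f;
        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 100f, groundLayer))
            return hit.point.y;
        return at.y; // fall back to the query height if nothing was hit
    }
}
```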

There were definitely a lot of interesting “a-ha!” moments when trying to code urban behaviour into a simulation. In the future, I’ll try to go into more detail on how the models were developed, and properly document agent vision, movement, decision trees, and anything else I come across while developing urban ABM.
