A single trajectory is a tragedy, 1.2 million is Big Data.

Pieter Fourie, Cuauhtémoc Anda and Sergio Ordonez

There is still such a thing as bad publicity, as a recent New York Times exposé on app-driven person tracking confirms. Here’s how to stay out of the headlines by rolling your own data. We have developed methods that allow data stewards to stream completely synthetic location trails, which can fulfil the needs of many location-based services and unconditionally guarantee individual privacy.

“Three days, 1.2 million devices, 235 million locations,” reads the tagline of a recent article on mobile data privacy in the New York Times. As a mobility researcher in the age of big data, one becomes inured to the staggering numbers involved in location services data gathering (one of many euphemisms for persistent individual location tracking). The bigger the numbers, the better! More data means better models. Moreover, nobody is forcing consumers to use these apps and services, right? Everyone is a knowing, willing participant. However, viewed through less familiar eyes, these numbers represent an Orwellian nightmare in the making.

The NY Times Daily podcast does an excellent job of shaking us out of our complacency, revealing the tragedy of vulnerable individuals whose privacy gets sold en masse to the highest bidder. It raises the question: what are our alternatives? The hottest topics in urban research and responsive cities all hint at some degree of surveillance: connected devices, digital twins, the Internet of Things and Mobility-as-a-Service all require us to become more connected. Information about us and our movements risks becoming more widely used by an increasing number of actors.

The NY Times article raises many issues around a lack of policy and oversight in the field of location tracking and exposes its personal, societal, institutional and corporate dimensions. These are all tough but pertinent problems. As we come to terms with living in an ever-more connected world, it is worthwhile to discuss some technological considerations to inform our decisions.

Data privacy preservation techniques

In the initial phase of their investigation, the journalists’ queries to data providers were met with claims that data were being aggregated or anonymised. What do aggregation and anonymisation mean? Generally, they mean either bundling data points together so that individuals cannot be told apart, or masking identifying information about them. When it comes to data on people’s movement, this becomes a tricky task, as the growing literature on the topic attests. Nevertheless, let’s assume that you somehow have a sufficiently robust approach to protecting people’s privacy. Then the next question becomes: how should you apply this protection?

Privacy-by-design vs post-processing

A data collector generally has two options when enforcing privacy preservation: either embedded into the device, meaning that no individually identifying information is ever recorded, or as a post-processing technique that is applied after obtaining a fully detailed data set. The second case is vulnerable to compromise, as evidenced by the NY Times journalists, who managed to get hold of a motherlode of raw data.

It may then be preferable to have devices that can be hard-coded and certified by an authority in order to enforce privacy preservation by design. Such data can be recorded in an auditable distributed ledger to uniquely associate each data point with an identified device. Encryption techniques make the device and its data tamper-proof. Authorities can start to insist on and enforce such end-to-end protection standards and certifications, similar to how they enforce the installation of sealed, tamper-proof taxi and electricity meters. Several projects are underway to enable this technology, some with a specific orientation towards mobility applications, such as IOTA, TravelSpirit and IoMob.

Synthetic data: an alternative to privacy preservation techniques

As the technological development in protecting and anonymising individual trajectory privacy grows, so does the body of shadow literature on de-anonymising algorithms that attempt to reconstruct the individual traces. This means that a responsible data collector might go ahead and invest in an array of certified devices, only to find that their privacy protection gets defeated sometime later, in an unending privacy protection arms race.

This is the motivation for our interest in developing an alternative to typical location masking techniques. What if we could create synthetic location data streams with the same resolution in time and space as what is actually sensed through devices, without reproducing any given trajectory in the real world? In our research on building such synthetic data streams, we use techniques that intentionally restrict the view of the actual raw data to machine eyes only. These techniques can be hard-coded into purpose-built, certified devices that are only capable of recording and releasing aggregate statistics.

The synthetic location data generation operates in two steps: an encoding step, which produces the aggregates and can be audited and certified; and a reconstruction step, which produces synthetic data with the same aggregate statistics as the real data. We are developing two distinct techniques to implement this.
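
To make the division of labour concrete, here is a minimal, purely illustrative Python sketch of such a two-step pipeline. The function names, the two-dimensional toy setting and the histogram binning are our assumptions for this post, not the certified implementation.

```python
# Illustrative sketch only: a hypothetical minimal encode/reconstruct pipeline.
import numpy as np

def encode_aggregates(real_points, bins=50):
    """Encoding step: reduce raw (x, y) locations to an aggregate 2D histogram.
    Only this histogram would ever leave a certified device."""
    hist, x_edges, y_edges = np.histogram2d(real_points[:, 0], real_points[:, 1], bins=bins)
    return hist, x_edges, y_edges

def reconstruct_synthetic(hist, x_edges, y_edges, n_points, rng=None):
    """Reconstruction step: sample synthetic locations whose aggregate statistics
    match the released histogram, without replaying any real trace."""
    rng = rng or np.random.default_rng()
    probs = hist.flatten() / hist.sum()
    cells = rng.choice(hist.size, size=n_points, p=probs)
    ix, iy = np.unravel_index(cells, hist.shape)
    # place each synthetic point uniformly at random inside its chosen cell
    x = rng.uniform(x_edges[ix], x_edges[ix + 1])
    y = rng.uniform(y_edges[iy], y_edges[iy + 1])
    return np.column_stack([x, y])
```

Only the first function ever touches raw data; the second can be run by anyone who holds the released aggregates.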

Multiple histogram matching

The first technique is repeated histogram matching in a high-dimensional space. This approach re-purposes an old statistical technique to ‘sculpt’ a synthetic dataset until it looks like the real data. This is achieved by repeatedly ‘raking’ the synthetic data along various directions in a multidimensional space.

Figure 1 illustrates the first four steps for a two-dimensional spatial data example. Note how the blue points (synthetic) come to resemble the orange points (real) more closely as we go from image 1 to image 4.

Figure 1: Illustration of the iterative multiple histogram matching process against a two-dimensional target histogram.
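
As a toy illustration of the raking idea (our own simplified assumptions: two-dimensional points and rank-based matching along random projection directions; the real procedure works in a much higher-dimensional space and rakes against released aggregate histograms rather than raw points), the sketch below ‘sculpts’ a noise cloud until its one-dimensional distributions match those of a stand-in target dataset along many directions:

```python
# Toy sketch of iterative histogram matching along random 1D directions ("raking").
import numpy as np

def rake_step(synthetic, real, direction, step=1.0):
    """Match the synthetic points to the real points along one projection
    direction via quantile (rank) matching."""
    d = direction / np.linalg.norm(direction)
    proj_s = synthetic @ d
    proj_r = real @ d
    # target value for each synthetic point: the real value at the same rank
    order = np.argsort(proj_s)
    idx = np.linspace(0, len(proj_r) - 1, len(proj_s)).round().astype(int)
    target = np.sort(proj_r)[idx]
    shift = np.empty_like(proj_s)
    shift[order] = target - proj_s[order]
    return synthetic + step * np.outer(shift, d)

rng = np.random.default_rng(0)
real = rng.normal([5, 2], [1.0, 0.5], size=(2000, 2))   # stand-in "real" data
synthetic = rng.uniform(-1, 1, size=(2000, 2))          # start from noise
for _ in range(50):                                     # repeat along many directions
    angle = rng.uniform(0, np.pi)
    synthetic = rake_step(synthetic, real, np.array([np.cos(angle), np.sin(angle)]))
```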

Graphical generative techniques

The second approach is to generate synthetic data using a so-called ‘Traveller Generation Machine’. This approach belongs to the domain of machine learning and is a ‘graphical generative technique’. Here, the word graphical refers to a graph in the computer science sense, i.e. a map of relationships (edges) between quantities (nodes).

The ‘Traveller Generation Machine’ identifies a minimal set of aggregate information to be released in order to produce synthetic data that closely resembles the real thing. Unlike histogram matching, this approach requires structural knowledge of the data, that is, how one thing relates to another in the dataset.

Figure 2: Generative Model for urban mobility data

Take E1 in the model in Figure 2 as an example. The arrows leading to E1 can be interpreted as follows: given S1 (defined as the start time of the first activity in the day) and Z1 (the geographical area of the first activity in the day), E1 (the end time of the first travel activity in the day) has the following likely values: … Note how the description intentionally reflects the fact that the machine does not record the information of any given individual, only aggregates.
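
The sketch below illustrates, with entirely hypothetical variable coding (hourly time bins and integer zone IDs invented for this post), how such a released conditional aggregate table could be used to sample E1 for a synthetic traveller given S1 and Z1. It is not the Traveller Generation Machine itself:

```python
# Hypothetical sketch of sampling one edge of the graph: given released aggregate
# counts of E1 conditioned on (S1, Z1), draw a synthetic value for E1.
import numpy as np

rng = np.random.default_rng()

# counts[s1_hour, zone, e1_hour]: aggregate table released by the encoding step
counts = np.zeros((24, 10, 24))
counts[8, 3, 9:12] = [50, 120, 30]   # toy aggregates: activities starting 08:00 in zone 3

def sample_E1(s1_hour, zone):
    """Sample the end time bin of the first activity given its start hour and zone."""
    row = counts[s1_hour, zone]
    probs = row / row.sum()
    return rng.choice(24, p=probs)

print(sample_E1(8, 3))  # e.g. 10
```

A full synthetic day is then assembled by walking through the graph edge by edge, sampling each node from the aggregates released for its parents.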

Limitations

The two approaches were developed with the objective of providing reassurance to data providers when releasing synthetic data streams in the form of complete day travel trajectories of individuals. However, this data will not satisfy the needs of certain service providers, who insist on communicating directly with specific individuals who are in a specific place at a specific time.

If, as a society, we still want to sign on for this invasive form of direct marketing, in spite of the dangers reported in the NY Times article, then we should at least insist on limits to the number of locations that may be recorded in sequence. According to De Montjoye et al. (2013), knowing only four location points in a sequence may be sufficient to uniquely identify most people in a dataset.
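
As a rough illustration of how such re-identification risk can be measured on a dataset one holds (the trace format and sampling scheme below are our own simplifying assumptions, not De Montjoye et al.’s exact protocol):

```python
# Illustrative check of re-identification risk: what fraction of users is uniquely
# pinned down by k of their own spatio-temporal points?
import random

def fraction_unique(traces, k=4, trials=100):
    """traces: dict mapping user id -> list of (time_bin, location_cell) points."""
    unique = 0
    users = list(traces)
    for _ in range(trials):
        user = random.choice(users)
        points = set(random.sample(traces[user], k))          # k known points
        matches = [u for u, trace in traces.items() if points <= set(trace)]
        unique += (matches == [user])                         # only the true user matches?
    return unique / trials
```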

However, we are growing increasingly confident that this approach of relying on entirely synthetic datasets is sufficient for very detailed urban and transport planning, as well as location-based services that do not rely on real-time interaction with individuals. Feeding this synthetic data into a state-of-the-art mobility simulation such as MATSim represents the next step in producing an entire ‘doppelgänger city’ to test, probe and experiment with policy decisions, while leaving people in the real world safe and surveillance-free.

We thank our editors Tanvi Maheshwari and Geraldine Ee for their efforts in compiling this post.

Meet us at TRB 2019

Our group’s research will be presented at the 98th Annual Meeting of the Transportation Research Board in Washington, DC, January 13-17, 2019, in the following sessions:

Hands-On Workshop for Virtual Reality in Stated Response Research

Sunday, January 13, 2019, 1:30 PM-4:30 PM, Convention Center

Zachary Patterson, Concordia University, presiding; Michael van Eggermond

Sponsored by Standing Committee on Travel Survey Methods; Standing Committee on Urban Transportation Data and Information Systems; and Standing Committee on Traveler Behavior and Values

A main challenge of the use of virtual reality (VR) in stated response surveys is actually putting together a VR environment. After short presentations on recent VR surveys, attendees will learn how to set up a basic virtual environment for stated response survey applications with the soon-to-be open-source Virtual Immersive Reality Environment platform developed by Bilal Farooq of Ryerson University. Other VR platforms also will be sought for inclusion in the workshop.

Processing cycling risk under different elicitation methods: comparing 2D and 3D in virtual reality choice environments

Martyna Bogacz, Chiara Calastri, Charisma Choudhury, Stephane Hess, Alex Erath, Michael Van Eggermond, Faisal Mushtaq

Collecting and Analyzing Pedestrian and Bicyclist Data, Monday, January 14, 10:15 AM-12:00 PM, Hall A / Convention Center

The aim of this study is to provide a better understanding of cyclists’ risk perception in different scenarios under different elicitation methods. In particular, 2D computer-based videos and 3D virtual reality simulations of road situations are contrasted. We collect data on cyclists’ behavioural responses in risky conditions and their stated responses on propensity to cycle and risk perception. Electroencephalography (EEG) is used to gain insight into the temporal sequence of cortical risk processing, which gives a better understanding of the neural mechanisms underlying choices. In addition, this study provides a validation of virtual reality as a tool for risk preference elicitation. Our results are in line with expectations: they show behavioural responses in line with the stimuli of the scenarios and an effect of the elicitation method, e.g. the perception of the riskiest elements seems to be exacerbated in 3D. Overall, we show that the 3D presentation method has an impact on the neural processing of risk, changing not only the way people perceive risk but also their behaviour. The findings provide useful insights about data collection in the context of cycling behaviour and beyond.

Operator and User Perspectives on Fleet Mix, Parking Strategy and Drop-Off Bay Size for Autonomous Transit on Demand

Biyu Wang, Sergio Arturo Ordonez Medina, Pieter Jacobus Fourie
Parking Potpourri, Monday, January 14, 1:30 PM-3:15 PM, Hall A / Convention Center

Autonomous vehicles (AVs), and in particular shared autonomous transit on demand (ATOD), promise many efficiencies in future transport provision and may lead to concomitant changes in urban form. Considering the effects of car-oriented planning on the livability, efficiency and sustainability of 20th century cities, there is growing interest in how we may anticipate the changes that this disruption will bring about. Parking and pick-up/drop-off infrastructures are some of the aspects which may change travel behaviour in the upcoming era of AVs. In this paper, three different parking strategies as well as four types of pick-up/drop-off infrastructure are simulated to assess their influence on users and operators. The studied parking strategies include demand-based roaming, parking on the street and parking in depots. The four types of pick-up/drop-off interfaces are infinity bay, demand-based bay, curbside and single vehicle. The proposed fleet includes three vehicle sizes: 4-, 10- and 20-seaters for shared mobility, and 1-seaters for private mobility. Combinations of different parking strategies and pick-up/drop-off infrastructures were evaluated from the perspective of travel time, walk distance, vehicle occupancy, rejected requests and vehicle kilometers traveled. Results show that the strategies produce radically different vehicle utilization to provide the same minimum service level for a particular study area in Singapore. We conclude that urban designers and policy-makers need to consider these as important parameters when designing or retrofitting neighborhoods if they want to maximize the potential benefits of this new transportation mode.

Studying Cyclists’ Behavior in a Non-naturalistic Experiment Utilizing Cycling Simulator with Immersive Virtual Reality

Transportation Issues and Solutions in Major Cities, Wednesday, January 16, 2019, 2:30 PM-4:00 PM, Hall A / Convention Center

Mohsen Nazemi, Michael van Eggermond, Alex Erath, Kay W. Axhausen

This study investigates the combination of immersive virtual reality (VR) and an instrumented cycling simulator for in-depth behavioral studies of cyclists. To this end, a cycling simulator was developed and virtual environments resembling Singapore were created and combined with the output of a traffic microsimulation. This set-up was created with the specific objective of evaluating the effects of environment properties and road infrastructure designs on cyclists’ perceived safety. Forty participants, mainly university students, were recruited for the experiment. Results showed that the average speed of the participants changes between scenes with different bicycle facilities, with the highest value for the segregated bicycle path. Braking and head movement activity also changed within each scene, occurring significantly more often before arriving at intersections. Questionnaire results revealed that adding a painted bicycle path to a sidewalk increases the level of perceived safety. Moreover, participants felt safest cycling on the segregated bicycle path, in line with findings from previous research. This study provides evidence that cyclists’ behavior and perceptions in VR are very similar to reality and that VR, combined with a cycling simulator, is suitable to communicate (future) cycling facilities.

Accounting for uncertainty and variation in accessibility metrics for public transport sketch planning

We have a new paper out in the Journal of Transport and Land Use, Volume 11, No 8, together with Matthew Wigginton Conway and Andrew Byrd.

Abstract

Accessibility is increasingly used as a metric when evaluating changes to public transport systems. Transit travel times contain variation depending on when one departs relative to when a transit vehicle arrives, and how well transfers are coordinated given a particular timetable. In addition, there is necessarily uncertainty in the value of the accessibility metric during sketch planning processes, due to scenarios which are underspecified because detailed schedule information is not yet available. This article presents a method to extend the concept of “reliable” accessibility to transit to address the first issue, and create confidence intervals and hypothesis tests to address the second.
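
As a schematic illustration of the two ideas (not the code used in the paper; the function names and the simple cumulative-opportunity measure are our assumptions), one can compute accessibility per departure minute, take a percentile as the “reliable” value, and bootstrap over Monte Carlo draws of the underspecified timetable:

```python
# Schematic sketch of "reliable" accessibility plus a bootstrap confidence interval.
import numpy as np

def accessibility(travel_times, opportunities, cutoff=45):
    """Cumulative-opportunity accessibility: opportunities reachable within `cutoff` minutes."""
    return opportunities[travel_times <= cutoff].sum()

def reliable_accessibility(tt_by_departure, opportunities, percentile=50):
    """tt_by_departure: array (n_departure_minutes, n_destinations) of travel times."""
    per_minute = np.array([accessibility(tt, opportunities) for tt in tt_by_departure])
    return np.percentile(per_minute, percentile)

def bootstrap_ci(samples, alpha=0.05, n_boot=1000, rng=None):
    """samples: reliable accessibility values from repeated Monte Carlo timetable draws."""
    rng = rng or np.random.default_rng()
    boots = [np.mean(rng.choice(samples, size=len(samples))) for _ in range(n_boot)]
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```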

The full paper can be found here.

Bike to the Future in Leeds and Delft

On two occasions in May 2017 we had the opportunity to present our ongoing work on cycling and virtual reality. Alex Erath was invited to the Institute for Transport Studies at the University of Leeds and presented Bike to the Future (slides here). Alex and I presented our previous and current work on Bike to the Future to the Allegro research group (slides here). The Allegro project covers five years of research by a group of approximately 10 people under the lead of Prof. Serge Hoogendoorn, and is a collaboration between the Delft University of Technology and the AMS Institute. The group aims to develop new theories and models for the behaviour of pedestrians and cyclists in cities, using state-of-the-art data collection techniques. On both occasions, it was great to discuss the challenges involved in surveying and modelling cyclists’ preferences and behaviour.

Presentation in Delft – Immersing yourself in Singapore!


Agent based modeling conference in Sao Paulo

I had the opportunity to participate in the AAMAS conference. The acronym stands for Autonomous Agents and Multiagent Systems. Participants from all over the world came to the World Trade Center complex in Sao Paulo.


On the first day I joined the workshop on Agent Based Modelling for Urban Systems (ABMUS 2017). I also presented my work in this workshop, an article called “Scheduling weekly flexible activities in a large-scale multi-agent mobility simulator”. I described the challenges of multi-day activity demand modeling, my approach of categorizing activities as fixed or flexible, the algorithm to schedule flexible activities during free time windows, and the results of applying these methods to a weekly mobility simulation of Singapore.



Bike to the Future

Experiencing alternative street design options

We believe that Virtual Reality (VR) offers tremendous opportunities, as people can explore future design options from different perspectives in the virtual world. In the case of street design, for example, one can explore the experience from the viewing angles of motorists, cyclists, pedestrians and even children.

VR also offers new opportunities for stakeholder engagement, as participants can provide valuable feedback to planners on how to improve the design before it is actually built.

Fig 1: For the Bike to the Future exhibit, we combine the latest technologies in 3d modelling, traffic simulation and game development to create a realistic Virtual Reality experience

Exploring Virtual Reality as a Planning Tool

We all are very familiar with the maps and renderings planners and designers use to communicate development plans. But imagine if you could explore future planning scenarios in Virtual Reality!

On the occasion of this year’s Park(ing) Day in Singapore, the Future Cities Laboratory will set up a digital peephole into the future to test new possibilities in engaging people for street design and urban development projects. The technology combines the latest 3D modelling and traffic simulation techniques in Virtual Reality to showcase how streets can be re-designed to make cycling and walking a more pleasant experience.

Fig 2: Alternative street designs for Seng Poh Road and Lim Liak Street, 3d building data © Urban Redevelopment Authority

Join us in Tiong Bahru on Friday, 16 September 2016

Visitors are invited to cycle in Virtual Reality through three local streets: Lim Liak Street, Kim Cheng Street and Seng Poh Road. Each street features a particular re-design to make cycling and walking more attractive. After the virtual ride, a short survey will be conducted to better understand how Virtual Reality applications can help planners get feedback from local stakeholders: What are your needs? What do you like in the new design? Which design elements could be adapted or improved?

We look forward to welcoming you at our parking lot on the 16th of September from 9am to 8pm! All visitors taking the survey not only take part in a lucky draw with exciting give-aways, but will also have, for the first time in Singapore, the chance to test both of the latest Virtual Reality goggles, the HTC Vive and the Oculus Rift.

Date:  Friday, 16 September | Time: 09:00 to 20:00 |   Where: Lim Liak Street (Opposite Tiong Bahru Market)

We would like to thank the Singapore Urban Redevelopment Authority for making the highly detailed 3d model available for this research project.

PTV Innovation Day 2016 in Singapore

PTV, one of the market leaders in software solutions for traffic and transportation planning, regularly organises so-called Innovation Days. The aim of these events is to provide users of their software products with a forum to broaden and share their transportation knowledge and modelling expertise.

We took the opportunity to present our software pipeline to create Virtual Reality applications using Vissim at the PTV Innovation Day in Singapore on the 22nd of July 2016. The slides of our presentation, titled Using Vissim for Virtual Reality Applications to Evaluate Active Mobility Solutions, can be found here. It was great to see and hear that the presentation was well received, as Alastair Evanson, Solution Director for PTV Vissim & Vistro at PTV Group, noted:

“The use of PTV Vissim in their project Engaging Active Mobility, which models and visualises streetscape designs to understand people’s preferences to cycle infrastructure, is just the sort of innovative application that we at PTV like to see our software being used for. The audience at the event found it a very interesting subject and PTV look forward to further co-ordinating on the topic to enhance the application of PTV Vissim in allowing people to ‘experience’ proposed designs through interacting with micro-simulation models in virtual/ augmented reality.”

Click here to access the slides of the presentation.

With this blog post, we would also like to summarise the other interesting presentations at the event and share the respective slides.


Data Ecosystems, Transport, and Urban Transformation in Sao Paulo

-Notes on the ESRC urban modelling workshop

From 20 to 24 June 2016, I had the opportunity to participate in the urban modelling workshop organised by the ESRC Strategic Network: Data and Cities as Complex Adaptive Systems (DACAS). The workshop was held at the ICTP South American Institute for Fundamental Research in the municipality of Sao Paulo, Brazil. The event brought together researchers across multi-disciplinary fields, all interested in how data and Complex Adaptive Systems can be applied to describe and understand the underlying emergent behaviours in cities, and ultimately to plan for smarter cities: sustainable and resilient.

Data and urban challenges

On the opening day, Tomás Wissenbach from Sao Paulo’s urban development agency talked about the challenges of urban transformation in Brazil and explained the recent efforts of Sao Paulo’s administration to collect all available datasets across the different governmental authorities regarding Sao Paulo’s population and infrastructure. This data fusion and processing endeavour culminated in an online interactive application (Figure 1) from which anyone can access and download the datasets. For the second phase of the project, Wissenbach announced the possibility of collaborating on projects that can capture the urban transformation experienced in the city and help the government make informed decisions to plan for a better city.

Figure 1. Screenshot of interactive Map of Sao Paulo. Blue for the metro lines, red for the bicycle lanes, and in orange tones the population density

Prof. Ana Bazzan, from the Institute of Informatics at the Universidade Federal do Rio Grande do Sul (UFRGS), presented a keynote on her work on agents and multi-agent systems in traffic and transportation (video here, slides here). The talk started with the rise of cities and the inherent transportation challenges within. Prof. Bazzan then introduced the idea of a data ecosystem triggered by people’s participatory sensing as the key to developing analytical applications to improve the transportation system. In a smart city, citizens interact directly with the system instead of just passively receiving information. This change in paradigm requires a human/agent approach to the information, modelling and control challenges, in which humans act as both targets and active subjects (i.e. sensors).

Putting it all together: Data and Complex Adaptive Systems for Transportation Planning

My presentation on our research project Engaging Big Data supplemented the prior presentations quite nicely. This ongoing project, conducted at the Future Cities Laboratory of the Singapore ETH Centre, seeks to build an agent-based simulation framework for transport planning using MATSim that can benefit from both urban mobility sensors (e.g. mobile phone and smart card data) and traditional data inputs (e.g. household travel surveys and census information) (Figure 2). In the era of ubiquitous sensing and big data, the first challenge for developing the next generation of predictive, large-scale transport simulation models lies in designing a data mining pipeline that can fuse the knowledge from these different datasets in order to obtain an enriched and full explanation of urban mobility dynamics. The second challenge is to use this information to automate the parameterisation of a MATSim scenario, which would not only significantly lower the effort required for setting up simulation scenarios but would also lead to even more realistic results. This will ultimately serve as a platform to test the viability of policy and infrastructure decisions before they are implemented, and to guide and inform the urban and transport planning process.

Figure 2. Big Data-driven MATSim

Witnessing Sao Paulo’s Mobility transformation

Besides the workshop, I took the opportunity to experience some of the results of the city of Sao Paulo’s recent push to adopt sustainable transportation policies. Those initiatives primarily target the notorious traffic congestion the 21 million inhabitants of the metropolitan area are suffering from. With the introduction of the ‘Bilhete Único’ in 2004, a smart card automatic fare collection system for public transport, citizens are incentivised to opt for public transport through standard fares regardless of distance or number of connections. The data on mobility patterns that this system generates every day would also be an ideal source for setting up a Big Data-driven urban transport simulation. In addition, Sao Paulo’s municipality has recently made major investments in bicycle infrastructure throughout the main avenues of the city, including the symbolic Avenida Paulista (Figure 3), although my colleagues at FCL who study how street design can support active mobility think that there is potential to make cyclists feel more comfortable and safe on this major artery ;-).

Figure 3. Bicycle lane in Avenida Paulista

The fruits of this year’s TRB submission frenzy

Every year, on 2nd August, many transport researchers from around the world feel utterly relieved after successfully submitting their papers for presentation at the Annual Meeting of the Transportation Research Board. The meeting, which is by far the biggest conference in our field, takes place every year in Washington D.C. in January of the following year.

Sometimes I ask myself what the point is of going to a conference in an age where researchers are present on not just one but often several social networks purely dedicated to scientists, amid a constant bombardment of Tweets, Facebook updates and new blog posts ;-).

But being at TRB is always special to me. Not only is it great to catch up with colleagues in person to informally exchange and spin new ideas; there are also always those chance acquaintanceships that make personal and research life so much richer. And checking out the mood of a city just before a new president (blonde for sure, but hopefully not male) is inaugurated is also always special.

Enough small talk; here, as an exclusive sneak peek, come the three submissions from people related to the Engaging Mobility group at the Future Cities Laboratory of the Singapore ETH Centre. My great co-authors and I are looking forward to hopefully positive and constructive critique from the reviewers, and are also curious about your comments!

Visualizing Transport Futures: the potential of integrating procedural 3d modelling and traffic micro-simulation in Virtual Reality applications

In this paper we elaborate on potential use cases of Virtual Reality (VR) in transportation research and planning and present how we integrated procedural 3D modelling and traffic micro-simulation with the rendering capabilities of a game engine in a semi-automated pipeline.
Through a review of potential practical applications, we present how this pipeline will be employed to distil behavioural evidence that can guide planners through dilemmas when designing future cycling infrastructure. At the same time, we are studying the efficacy of VR as a method for assessing perceptual behaviour as opposed to traditional methods of visualization. Concretely, we present how the pipeline can be adapted i) to generate parameterised visualisations for stated preference surveys, ii) as a platform for a cycling simulator and iii) to communicate different design scenarios for stakeholder engagement. The flexibility of procedural programming allows discretionary changes to the street design and the traffic parameters. Through this experience of developing procedural models, traffic microsimulations and ultimately VR models for streets in Singapore, we find that the visual and temporal feedback enabled by VR makes several important design parameters observable and allows researchers to conduct new types of behavioural surveys to understand how people will respond to different design options. In addition, we conclude that such VR applications open new avenues for citizen engagement and communication of urban plans to stakeholders.

Virtual Reality software pipeline to integrate CityEngine and Vissim output in Unity3d

Introducing the Pedestrian Accessibility Tool (PAT): open source GIS-based walkability analysis

The walkability indices proposed so far are mostly ad hoc and generally refer to the closest amenities/public transport stops and the existing network structure. They are ad hoc in that the weights of the attributes are generally arbitrary and do not reflect the independently measured preferences of users and residents. Furthermore, they do not include design attributes such as the location of crossings and walkway design features, which are very relevant for actual planning decisions.

In this paper, we propose a walkability index that can be behaviorally calibrated, has been implemented as a GIS tool, and is published as open source software. The Pedestrian Accessibility Tool allows evaluating existing and future urban plans with regard to walkability. It calculates Hansen-based accessibility indicators based on a customizable specification of generalized walking cost and user-defined weights of destination attractiveness.

Comparison of walksheds with Pedestrian Accessibility Tool: impact of replacing a pedestrian overhead bridge with a conventional zebra crossing.
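
For readers unfamiliar with Hansen-type measures, the sketch below shows the general shape of such an indicator; the decay parameter, costs and weights are placeholder values for illustration, not those calibrated in the tool:

```python
# Minimal sketch of a Hansen-type accessibility indicator with a negative-exponential decay.
import numpy as np

def hansen_accessibility(gen_cost, attractiveness, beta=0.2):
    """gen_cost: array (n_origins, n_destinations) of generalized walking costs (minutes);
    attractiveness: array (n_destinations,) of user-defined destination weights."""
    return (attractiveness * np.exp(-beta * gen_cost)).sum(axis=1)

# toy example: 2 origins, 3 destinations
cost = np.array([[5.0, 12.0, 20.0],
                 [8.0, 6.0, 15.0]])
weights = np.array([1.0, 0.5, 2.0])
print(hansen_accessibility(cost, weights))
```

In the tool, the generalized cost would be derived from the walking network and the design attributes discussed above rather than given directly.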

Simulation of autonomous taxis in a multi-modal traffic scenario with dynamic demand

Given the rapid technological advances in developing autonomous vehicles (AVs), the key question appears to be no longer how, but when AVs will be ready to be commercially introduced. Therefore, it is very timely to explore how this new way of travelling will shape the traffic environment in the future. Questions regarding the environmental impact, changes in infrastructure and policy measures are widely discussed. Most likely, the introduction of AVs will not only add an option to the traveller’s choice of means of transport, but also shape how people interact with the traffic environment. From a transport planning point of view, key questions concerning the introduction of AVs as a new means of transport are how it will influence travel behaviour, how supply of and demand for AVs will balance, how it impacts the viability of existing public transport services, and how AVs will impact congestion and demand for parking.
In this report, a new simulation framework based on MATSim is presented, allowing for the simulation of AVs in an integrated, network- and population-based traffic environment. The demand evolves dynamically from the traffic situation rather than being a static constraint as in numerous previous studies. This allows for the testing of various scenarios and concepts around the introduction of AVs while taking into account their feedback on the travellers’ choices and perceptions.
Using a realistic test scenario, it is shown that even under conservative pricing a large share of travellers is attracted to autonomous vehicles, though this is highly dependent on the provided fleet size. For sufficiently large fleets, it has been found that vehicle miles travelled by the autonomous single-passenger taxis in this report increase by up to 60%.

Share of the available travel modes in a percentage of the total number of trips, dependent on the number of available AVs.

Gussy up Vissim with the rendering power of a game engine

Why do traffic microsimulations need a facelift?

The traffic microsimulation tool Vissim is one of the most advanced traffic simulation software packages. Vissim offers great possibilities for reproducing extensive urban traffic situations including public transport, individual cars, trucks, bicycles and, of course, pedestrians.

The software is mainly intended to quantitatively evaluate traffic scenarios with regard to vehicle and pedestrian densities, road and intersection capacities, as well as travel times and delays. But traffic simulations are also instrumental in illustrating possible design scenarios in pictures or videos for decision makers and the general public.

However, while the 3D visualisation capabilities of microsimulation tools such as Vissim have considerably improved over the last few years, there are still limitations when it comes to the realistic rendering of 3D environments (Figure 1).

Figure 1: examples of simple 3D street views rendered in Vissim

The roads and objects usually cannot be represented with varying surfaces, and the animated models of pedestrians and cyclists are quite clumsy. While such restrictions are of minor importance in conventional applications, the 3D display options are insufficient if we aim to create Virtual Reality environments. For this reason, we decided to combine the strengths of Vissim in simulating complex traffic interactions with the strengths of a 3D rendering engine: Unity.

Our pipeline so far

The logic behind the pipeline is quite straightforward: from Vissim, we only export the trajectories of the simulated interactions between pedestrians, bicycles and cars, plus the commands related to the traffic lights, and use Unity to animate the respective 3D models. Vissim’s export functions allow us to do so. The coordinates of pedestrians are written to a ‘.pp’ file, while bicycles and cars are saved in an ‘.fzp’ file. The traffic light programme is written to a simple XML file and is therefore easily exportable.

Basically, the files with the pedestrian and vehicle trajectories are simple CSV (Comma Separated Values) text files and can be read in Unity with appropriate scripts. The information exchanged is: simulation second, pedestrian/vehicle number, pedestrian/vehicle type, and x-, y- and z-coordinates. For vehicles, two coordinates (front and rear) are required in order to derive the size of the object.

Creating these files in Vissim is simple. Before running the simulation, the ‘evaluation’ option in the toolbar should be selected and, after some settings (see Figure 2), the text files will be created automatically. To keep file sizes manageable, we restricted output to a time resolution of 4 simulation steps per second and included an interpolation feature in our Unity import scripts. Otherwise, the text files would grow to some billions of lines, depending on the number of pedestrians and vehicles simulated in the network.

Figure 2: Settings needed in order to create text files with pedestrian trajectories
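
For illustration, the reading-and-interpolation idea looks roughly like the following (the actual import scripts are C# components inside Unity; the column layout and delimiter below are simplified assumptions based on the fields listed above):

```python
# Language-agnostic sketch of reading trajectory rows and interpolating between
# the 4-per-second simulation steps for smooth animation.
import csv

def load_trajectories(path):
    """Read rows of (sim_second, object_id, object_type, x, y, z) into per-object tracks."""
    tracks = {}
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=";"):
            t, obj_id, obj_type, x, y, z = row[:6]
            tracks.setdefault(obj_id, []).append((float(t), float(x), float(y), float(z)))
    return tracks

def position_at(track, t):
    """Linearly interpolate a position between two recorded simulation steps."""
    for (t0, *p0), (t1, *p1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
    return track[-1][1:]  # fall back to the last recorded position
```

The same interpolation applies to pedestrians, bicycles and vehicles; vehicles additionally carry a rear coordinate pair so that their length can be derived.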

Apart from the trajectories, the files related to the traffic lights should also be exported (‘.sig’ files). Here, we also needed an appropriate script to read the times and colours of the traffic lights and represent them correctly on a new 3D model with different street lights in Unity.

Great possibilities with Unity

There are many great 3D engines available out there, but we chose Unity because of its fantastic visual capabilities and Virtual Reality features, ample range of file formats and ease of use.

Some of the more noticeable visual improvements when going from Vissim to Unity are:


  • Physically based lighting and shadowing
  • Global illumination
  • Reflections
  • More realistic skies
  • Better texture filtering and antialiasing
  • Post-process effects (Depth of field, motion blur, bloom, etc.)

Other more subtle details, for which we had to write scripts, also contribute to a more realistic experience:


  • Rotation of vehicle wheels according to their speed
  • Vehicle brake and turn lights
  • Interpolation of vehicle, cyclist and pedestrian movements for a smoother animation.
  • Speed-controlled walk animation for pedestrians

Unity with its tons of features allowed us to create various types of content such as scripted screenshots, videos and 360 videos. The making of videos was possible thanks to its animation capabilities, which we used to create a few camera animations to glide through our 3D environments.

Figure 3: graphical user interface of Unity3d

Some of the scripts we wrote in Unity (which are available in our GitHub repository) weren’t dedicated to the final visual appearance, but they still played a crucial role in our development by helping us identify issues outside of Unity through visualizing the problem within Unity. For example, one script generates a traffic movement heat-map to show which areas are more or less frequented by simulation agents, or to identify whether an agent takes an undesired path, which helps to verify the simulation setup (Figure 4).

Figure 4: simulated traces of pedestrians (blue), cyclists (yellow) and vehicles (pink) projected on 3d model
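
Conceptually, the heat-map script boils down to something like the following (shown here as a Python sketch rather than the Unity C# implementation; the grid size and coordinate ranges are arbitrary placeholders):

```python
# Sketch of the heat-map idea: bin agent positions into a grid and count visits
# to spot over- or under-used areas and undesired paths.
import numpy as np

def movement_heatmap(positions, x_range, y_range, cells=100):
    """positions: array (n, 2) of agent x/y samples over the whole simulation."""
    heat, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                bins=cells, range=[x_range, y_range])
    return heat  # high counts = heavily frequented cells, zeros = never visited
```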

Encountered Challenges

When we started importing the first simulation files from Vissim, Unity ran quite slowly due to the size of the traffic simulation. We eventually had to reduce the number of agents in order to keep a real-time interactive experience in Unity.

Since we’re aiming for a Singapore-like environment, we not only need streets and buildings that look like Singapore ones, but we also need a wide range of 3D assets (e.g. road signs, street furniture, buses, cabs, trees) all of which are very specific to Singapore. These are not ready-made assets that can be found for download or sale, but required new 3D modelling efforts.

Aesthetics aside, another problem associated with 3D assets is quality: too high and it can bring performance to its knees; too low and it will look horrible.

Outputs

Unity gives numerous possibilities for exporting visualisations, but of course also for interactively engaging with the 3D model in Virtual Reality. For a start, we will use Unity to render out stills to be used in stated preference surveys.

Figure 5: prototype renderings from Unity to illustrate different street configurations and traffic levels to be used in stated preference surveys.

Furthermore, we have started to play with 360-degree videos, which have great potential to communicate design scenarios to various infrastructure project stakeholders, especially as you can nowadays publish such videos on YouTube and watch them not only with fancy head-mounted displays such as the HTC Vive and Oculus Rift, but also with cheaper options such as the Samsung Gear VR or Google Cardboard.

The bright future of VR in urban and transport planning

We are convinced that Virtual Reality environments based on the integration of an urban environment (CityEngine) and a proper traffic simulation (Vissim) with a game engine like Unity offer new possibilities in terms of communication, visualization and evaluation of planning scenarios. Not least, they permit a non-technical audience to easily immerse themselves in the future environment of a proposed project and to compare different scenarios. This allows various stakeholders to provide valuable feedback that can improve the planning process and lead to a better final project before even a single cent is spent on construction, and can also avoid expensive alteration works to address usability issues.

Our roadmap

For Vissim, the aim will be to hone the interactions between pedestrians and bicycles, especially in shared space conditions. A realistic representation of these interactions is only possible with extensive programming work.

To enhance the virtual reality experience, we plan to integrate Vissim and Unity using Vissim’s Driving Simulator and Multiplayer interfaces so that the traffic simulation can react to user input from the game engine. Reconfiguring a cycling trainer as a controller will allow the user to ride through a virtual environment with 360-degree visual freedom and interact in real time with the simulation. Colleagues at NHTV Breda have already prototyped such a setup as part of their Cycle SPACES project and Alex already had the opportunity to test it (Figure 6).

Figure 6: Alex testing VR cycling simulator developed by Atlantis Games and NHTV Breda

But there are still a lot of things to be improved, and we look forward to collaborating with them and Atlantis Games on this and keeping you posted on the progress.