This time we talk about visualization: scripts, plug-ins and, more importantly, the use of open-source software.
Enise Burcu: What do you do currently?
James Melsom: Outside the ETH I am working in collaboration with firms throughout Europe, more or less. More recently, research work with some colleagues based in Barcelona, which is also crossing back over into the research work here. Also competition work with architectural offices, mainly in Switzerland.
EB: So you work on your own and collaborate with different offices?
JM: Yes. In the past years that has meant mostly competitions with other offices and some consulting work on international projects. I am also involved in a sustainability organization in Rotterdam called EXCEPT, and I worked on a wetland project in China with another planning bureau. More recently I won a competition in Bern, so in that case I will be sharing the project with a colleague, and it will continue into the next year or two.
EB: Sounds like generally large-scale projects.
JM: Yes, partly because they tend to have a longer time frame, and secondly because it is the kind of work with which I am more familiar: project work in South-east Asia, in Singapore and Thailand, so often very urban work that suits a very large scale.
EB: Introducing new software in the second module updated our accustomed visualization workflow. What is your workflow? What tools do you use?
JM: In terms of visualization, and in using tools such as Grasshopper, my workflow has actually evolved in parallel with professional work. One of the philosophies encouraged by the work at the chair is that when work is really project- or solution-based, the resulting tools are not only much more useful but can be extremely efficient as well. For example, using Grasshopper and SAGA-GIS applications together was something I developed because I literally had to: I was consulting on a project for a landscape architect in Zurich who was not so well equipped, and the project was not far enough along for her to do the necessary volume calculations. She had no idea how much material was being produced by the building works, foundations and car parking within the project. So I was able to take that problem and generate a workflow around different possible project outcomes, using Grasshopper, for example, to dynamically generate volume calculations: really useful project tools which otherwise would not exist. These could be given to the engineer before they had even looked at the problems themselves.
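The underlying idea of such a volume calculation can be sketched very simply: given two elevation grids sampled on the same regular raster (existing terrain versus a proposed design), cut and fill are just the summed height differences times the cell area. This is a minimal illustrative sketch of that principle, not the actual Grasshopper/SAGA-GIS workflow described above; all names and values are hypothetical.

```python
import numpy as np

def cut_fill_volumes(existing, proposed, cell_size):
    """Return (cut, fill) volumes for two same-shape elevation grids."""
    diff = proposed - existing          # positive where material is added
    cell_area = cell_size * cell_size
    fill = diff[diff > 0].sum() * cell_area
    cut = -diff[diff < 0].sum() * cell_area
    return cut, fill

# Toy terrain: a 2 m excavation over a 10 m x 10 m area on a 1 m grid.
existing = np.zeros((10, 10))
proposed = np.full((10, 10), -2.0)
cut, fill = cut_fill_volumes(existing, proposed, cell_size=1.0)
print(cut, fill)  # 200.0 cubic metres of cut, 0.0 fill
```

In a real project the two grids would come from a survey and a design model rather than constants, but the comparison step stays the same, which is why it can be re-run dynamically as the design changes.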
EB: They certainly open new windows, as long as you know how to use them. But if you are not confident, they can even be frightening, I would say.
JM: I think so, and the other trick is developing ways to verify your results, because when you are developing a new tool or a method you still need some control, i.e. a way to check that the volumes are correct. You need a background against which you can really be confident in the results you are putting out, so you can reliably share them with other consultants or clients. Once you have developed that tool (even if developing it sometimes seems like an inefficient process), you can easily deploy it to another project. The more projects you work on, the more you pick up steam and it becomes even more efficient.
EB: What other tools do you use? Would you tell us, unless they are a professional secret?
JM: The main tools I use day-to-day, in terms of tools made by others, would be Rhino, Grasshopper and some particular plug-ins. One is “Elk”, for getting and dealing with OpenStreetMap data. Another is “GHowl”, which allows you to dynamically call data from Google Earth; also “Weaverbird”, which extends Rhino’s abilities to work with meshes. Apart from these I use Lightwave for rendering, as well as Vray, and 3D Coat, which is a voxel modeling program…
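A plug-in like Elk works because OpenStreetMap data is plain XML: nodes carry coordinates, and ways reference nodes by id to form polylines. As a hedged illustration of that structure (the snippet below is a made-up fragment, not real survey data, and this is not Elk’s actual code), the geometry can be recovered with a standard XML parser:

```python
import xml.etree.ElementTree as ET

# Illustrative .osm fragment: two nodes and one way (a footpath) linking them.
osm_xml = """<osm version="0.6">
  <node id="1" lat="47.3769" lon="8.5417"/>
  <node id="2" lat="47.3770" lon="8.5420"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="footway"/>
  </way>
</osm>"""

root = ET.fromstring(osm_xml)

# Index every node's coordinates by its id.
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
         for n in root.iter("node")}

# Resolve each way's node references into a coordinate polyline.
ways = [[nodes[nd.get("ref")] for nd in w.iter("nd")]
        for w in root.iter("way")]
print(ways[0])  # [(47.3769, 8.5417), (47.377, 8.542)]
```

Once the ways are coordinate lists, drawing them as curves in a CAD environment is straightforward, which is essentially the service such plug-ins provide.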
EB: All of them are for 3D. Do you start with modeling? And I wonder, don’t you ever use Illustrator, for example?
JM: I use a lot of Illustrator. All of my plans for competitions, for example, are produced in Illustrator. All of the vectors are produced in Rhino (I might even render shadows or textures out of the 3D model), so I try to generate something accurate enough that I can lay it on top, and all of those aspects can be re-used in the final output, which also makes updates much simpler.
EB: During the module you mentioned open-source software very often, and gave a couple of examples. What is the significance of open-source software today?
JM: A good example I would give is Blender. Blender started off as a 3D application for rendering. It has evolved to the point where it is now a compositing tool – you can literally mix and grade video inside it; you can do motion tracking (you can import video and solve where the camera is moving based on the video frames); you can do character animation… In that sense it is turning into a many-headed beast, which actually makes it harder and harder to use. You can imagine: now you have hundreds of menus, and it is becoming more and more specialized and less a general tool you can open intuitively and get hold of. The other problem with open-source software of that form is that it has so many developers now; you have 20 different universities each developing some small functionality which they then give to Blender, but there is no overarching control over the workflow (or even the keystrokes) to make sure they make sense and that these tools work well together. Although it is extremely powerful, it is also somewhat trapped in its own complexity. The other aspect is that it is putting a lot of pressure on existing software developers to generate new business models.
EB: Is it a market already? Do they develop this software in order to have it purchased by bigger companies?
JM: It is. What is interesting is that open-source projects now generate most of their income through sponsorship and through selling training material. Basically you have programs like Blender that generate more or less all of their revenue by having sponsors for films and projects (you are listed in the trailer, literally honored for your contribution), or by selling training for the software. It is a self-supporting system, because software is getting more and more complex and it is getting less and less possible to learn it by yourself. For this reason I think it is extremely important to support it, but actively. In a way, open-source projects should not be ‘free’ in the sense that you should donate somehow [testing, sharing findings].
EB: I guess you donate to many of them.
JM: Probably too many. This makes the software better. And the great thing about some of these projects is that if you involve yourself, you can literally talk to the developer. For example, we have been able to talk directly to the developer of SAGA and actually have the software changed for our needs. This is huge in the sense that, beyond any kind of programming we might teach in the MAS (which is maybe not so conventional for landscape architects), just understanding the whole process behind it means you can actually talk “shop” to these people – and they will actually listen.
EB: There are so many of them. How do you choose which ones to use among the thousands? Is there a kind of network that you follow?
JM: There is a very interesting site called osalt.com, which stands for open source alternatives. You can put in Photoshop and it will give you back a list of open-source alternatives. It is a database that tries to show new software; it gives a few lines on the positives and negatives, which operating systems it is available on, and so on. What is interesting is that, based on the feedback and donations they get, open-source developers continue or don’t continue development. For every open-source project currently active, there are about 15 times as many (and the number is increasing) which have been discontinued. The problem is often that an active programmer develops something while studying, during their dissertation, and once they have a big paid job they might no longer release an open-source alternative. Unless they really get a lot of involvement, feedback, monetary donations or recognition, they will probably stop supporting it at some point. The other phenomenon which has happened: there is an application called MeshMixer, which started off as a pet project of a programmer, where you could take a bunny (for example) and add geometries together however you would like. It was free software and amazingly done, but then Autodesk bought it – it is still available, but it is being cannibalized into Autodesk products, which is actually fine as long as it remains available, but probably it is going to disappear soon as well.
EB: What other networks are common?
JM: In general there are open-source applications, and then there is also a massive amount of plug-ins and scripts. For almost every 3D program there are plug-ins and scripts being developed, also for free. This is the other huge area which definitely needs to be supported. This is also where we try, at least, to teach experimentation and trusting your intuition – not necessarily taking one product and using it because “it is the only product that does what you want to do”, but actually experimenting and trying many products within a project; taking just one aspect, one very narrow feature set of the software, and integrating that in an efficient way within the production or generation of landscape design. Because in any case, software will come and go, and have strengths and weaknesses. It is about being flexible; flexibility is certainly something which needs to be learned – a kind of fluidity, jumping between these softwares and keeping them all efficient.
EB: LVML is basically working on landscape visualization. What should we understand by “landscape visualization”? Is it future landscapes or present landscapes that are meant?
JM: I think it should be flexible as a terminology. The reason we use point clouds so much, for example, is that they are not such a static data set. You can more or less imitate transparency, color and season just by loading different sets, or you can easily overlay different scenarios at the same time. In the future we would like to talk not only about present landscapes (which is obviously what we scan) or potential landscapes through the integration of 3D models, but also to integrate, for example, recreations of historical landscapes. For example: the previous landscape before a dam was put into a valley and it was flooded; to recreate the actual previous valley bed, and in some cases the town, when we have photographic material to reconstruct it. So it should be much less tied to a specific moment and should keep that kind of [temporal] flexibility. As you will have noticed, we don’t spend too much time rendering if we can help it.
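The “overlay scenarios” idea can be illustrated with a minimal sketch: a point cloud is just an array of XYZ coordinates (plus attributes), so two epochs or design variants can be concatenated with a per-point scenario label and then toggled like layers with a boolean mask. The arrays below are random stand-in geometry, not a real scan, and the label scheme is a hypothetical convention chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
present = rng.uniform(0, 10, size=(1000, 3))    # e.g. today's terrain scan
historical = rng.uniform(0, 10, size=(800, 3))  # e.g. reconstructed valley bed

# Tag each cloud with a scenario id so subsets can be switched on and off.
labels = np.concatenate([np.zeros(len(present), dtype=int),
                         np.ones(len(historical), dtype=int)])
overlay = np.vstack([present, historical])

# "Loading a different set" then reduces to a boolean mask over the labels.
historical_only = overlay[labels == 1]
print(overlay.shape, historical_only.shape)  # (1800, 3) (800, 3)
```

Because the switch is only a mask, swapping seasons, scenarios or epochs costs almost nothing compared with re-rendering a textured 3D model, which is part of why the data set is "not so static".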
EB: It is not about creating the best 3D image.
JM: No. In fact, that should be possible, of course; we should have the capability of generating any number of different possibilities. In our case, it is also about overlaying data about the site which we cannot see, as well as this idea of testing possible futures. This is, for example, where GIS is extremely under-utilized in its typical usage: we are usually talking about historical or current states, or an instantly deprecated state in terms of potential zoning plans – so extremely abstract whenever it comes to future scenarios. This is something we try to challenge.
In terms of research we are already working with the Computer Vision Lab at the ETH Zentrum, involved in a nationally funded research project, which involves not only the automatic stitching of point-cloud data but also the automatic recognition of photographs within that landscape. You could literally throw images into this database, recognize where they were taken from, and have chronological information generated in a temporal database that really relates to geographical space. The aim from then on would be to extend that to video, or literally movement within the landscape, and even sound, within such a geographic and 3D landscape database. Apart from that, there are chiefly research projects such as the one which was presented in San Francisco – that was with IAAC, with Luis, with whom I will also be running a workshop in London in April. That will be another quite huge event, in which we will do temporal data capture of the spatial environment – temperature, light, humidity, sound – map people’s movement, and relate all of these within one geographic real-time database.
EB: Thank you for the interview, James. This all sounded very virtual to me, like sci-fi.
JM: This will be quite physical. We will definitely try to relate it to the real way people act and react in space – not a simulation. Rather than the typical mode in which micro-climate is usually discussed, in terms of simulation or creating 3D models that imitate how we think it might be, this is about literal data capture and recording, and then reacting to the existing micro-climates we find on site. The workshop scenario simply allows us to map and manipulate this data as we receive it.