Final fly-through rendered

December 30th, 2015 by David Steinmann

The following fly-through was rendered with Otoy Octane Path Tracing in Blender:

Photo-realistic Rendering of a Tomb Chapel in Luxor from David St on Vimeo.

The following images show high-resolution renderings of the same scene:

visualization rendered with Otoy Octane Path Tracing

visualization rendered with Cycles Path Tracing

UV Map of the whole entrance hall

November 22nd, 2015 by David Steinmann

To map the photos onto the walls of the 3D model, UV unwrapping of the mesh is required. After four days of computation, Blender's Smart UV Project crashed, probably due to insufficient memory. The step was repeated with a mesh at 50% resolution, which led to a promising result. As a next step, the images taken in the tomb will be stitched and mapped onto the model.
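For reference, the decimate-then-unwrap step can be scripted in Blender's Python console. A minimal sketch, assuming the imported scan is the active object (the 2.7x-era operator arguments shown here differ slightly in newer Blender versions):

```python
import bpy

obj = bpy.context.active_object  # the imported scan mesh

# Halve the mesh resolution before unwrapping, since the
# full-resolution unwrap ran out of memory.
dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = 0.5
bpy.ops.object.modifier_apply(modifier=dec.name)

# Unwrap the reduced mesh with Smart UV Project.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=66.0, island_margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')
```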

The unwrapped map with exemplary texture

Mapped texture

Extraction of Objects

November 18th, 2015 by David Steinmann

The mesh from Egypt is too complex to be unwrapped completely in one piece. Therefore, a work-intensive separation of the pillars was carried out in Geomagic. The resulting pieces will be unwrapped individually.

Colored pillars, rendered with Octane in Blender
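The separation itself was done interactively in Geomagic, but for meshes that open cleanly in Blender, a comparable split can be scripted once the faces of a pillar are selected. A sketch of that alternative (not the Geomagic workflow used here):

```python
import bpy

# Assumes the faces of one pillar are already selected in Edit Mode.
# Split the selection into its own object so it can be
# unwrapped independently of the rest of the hall.
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')

# List the resulting objects; the split-off piece appears as "<name>.001".
for obj in bpy.context.selected_objects:
    print(obj.name)
```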

Data Acquisition in Berlin

November 11th, 2015 by David Steinmann

Last weekend, we had the chance to collect point cloud data in Berlin in cooperation with the ETH Chair of Architecture and Art. The scanned objects are remnants of a former graphite factory. The two towers and the surrounding area form the starting point of the project “San Gimignano in Lichtenberg”, launched by local artists.

(https://nikolaivonrosen.wordpress.com/2015/09/03/berliner-block/)

The facades and interiors of the buildings were scanned with two Faro Focus 3D scanners. We are looking forward to rendering interesting views.

Intensity picture of a scan on the top floor

Faro Focus 3D on top floor

Tower with complex outlines and many chambers

Silo tower

Different light types and camera near clip depths in Cinema 4D

November 5th, 2015 by Michele Martinoni

The type of light used is important for achieving a realistic visualization. As a first step, we used the Octane Daylight to obtain natural light, as shown in the following image:

Inside the tomb, Octane Daylight

Note that there is little space inside such a place, which makes it difficult to get a good overview of the room. A good solution is to increase the camera's near clip depth.

This setting hides from the rendered view every element that is closer to the camera than the chosen near clip depth. The light behaves as if the hidden objects were still there. The two following images show a view inside the tomb with the west facade clipped away.

Inside the tomb using the near clip depth tool

 

Different light intensity and temperature than in the previous rendered image

 

Note that the Octane Daylight can be adjusted to different times of day, either by changing the time directly in Cinema 4D or by changing the temperature and intensity of the light (previous image). As a next step, the camera can be switched from central projection to parallel projection. In combination with a suitable near clip depth, this makes it possible to visualize an entire wall in one go (something that would probably be impossible inside the tomb, as there is too little space). The following image shows the previous visualization with a parallel projection (note that the second row of pillars is no longer visible).

Parallel projection
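Both camera adjustments, the near clip depth and the parallel projection, can also be set through Cinema 4D's Python API. A minimal sketch, assuming the scene camera is the active object (the clip distance of 150 scene units is illustrative):

```python
import c4d

doc = c4d.documents.GetActiveDocument()
cam = doc.GetActiveObject()  # assumes the camera is selected

# Enable near clipping so geometry between the camera and the
# clip distance is hidden from the render (light is unaffected).
cam[c4d.CAMERAOBJECT_NEAR_CLIPPING_ENABLE] = True
cam[c4d.CAMERAOBJECT_NEAR_CLIPPING] = 150.0  # illustrative distance

# Switch from central (perspective) to parallel projection.
cam[c4d.CAMERA_PROJECTION] = c4d.Pparallel

c4d.EventAdd()  # refresh the viewport
```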

Another type of light is the Octane IES light, which appears as a four-sided surface. The size, orientation, and light settings of this surface can be adjusted. The two following images show this type of light in combination with external daylight coming through the door, using two different combinations of light temperature and intensity. A tripod was created behind the light surface to render a realistic situation.

Octane IES light with low intensity and temperature

Octane IES light with adjusted intensity and temperature

UV Mapping of the main hall – Cinema 4D workflow

November 2nd, 2015 by Michele Martinoni

The main hall was unwrapped using the UV mapping tools in Cinema 4D. In Photoshop, colors were used to fill the main generated surfaces and to modify the texture of the hall. The created texture was then brought back into Cinema 4D, and the textured model was rendered with Octane. The inside of the hall could not be separated into individual surfaces, probably because it overlaps with the external walls. The next step will be to find a method to separate each individual surface.

Texture modified using Photoshop

Rendered model with texture from Photoshop
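Re-importing the painted texture can likewise be scripted in Cinema 4D. A sketch using a standard material on the color channel (the file name is hypothetical, and the actual renders used Octane materials instead):

```python
import c4d

doc = c4d.documents.GetActiveDocument()
obj = doc.GetActiveObject()  # the unwrapped hall mesh

# Create a material whose color channel uses the painted texture.
mat = c4d.BaseMaterial(c4d.Mmaterial)
shader = c4d.BaseShader(c4d.Xbitmap)
shader[c4d.BITMAPSHADER_FILENAME] = "hall_texture.psd"  # hypothetical path
mat.InsertShader(shader)
mat[c4d.MATERIAL_COLOR_SHADER] = shader
doc.InsertMaterial(mat)

# Assign the material with UVW projection so it follows the UV map.
tag = obj.MakeTag(c4d.Ttexture)
tag[c4d.TEXTURETAG_MATERIAL] = mat
tag[c4d.TEXTURETAG_PROJECTION] = c4d.TEXTURETAG_PROJECTION_UVW

c4d.EventAdd()
```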

Blender News

October 31st, 2015 by David Steinmann

We plan to test multiple rendering packages for their ability to render laser scan datasets. Many weeks have been dedicated to Cinema 4D so far, which showed good results but also several errors and weaknesses during processing. Last week, the procedure of importing a mesh with 3.4 million triangles, creating a corresponding UV map, and rendering it was carried out in Blender on a mobile OS X environment (MacBook Pro Retina). Even though the computational power and RAM were lower, the workflow was smooth and error-free. Blender automatically created a reasonable UV map. The Cycles render engine, which is integrated in the Blender package, is an open-source alternative to the commercial Octane render engine and also runs on the GPU. This workflow seems especially attractive because of the low cost of open-source software that simply works.
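A condensed sketch of this Blender pipeline in Python, assuming the mesh arrives as a PLY file (file paths are illustrative, and the 2.7x-era operator names may differ in current Blender releases):

```python
import bpy

# Import the scan mesh (PLY assumed; an OBJ would use import_scene.obj).
bpy.ops.import_mesh.ply(filepath="tomb_mesh.ply")  # hypothetical file
obj = bpy.context.active_object

# Let Blender calculate a UV map automatically.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# Render the result with Cycles on the GPU.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'
scene.render.filepath = "//render_result.png"  # hypothetical output
bpy.ops.render.render(write_still=True)
```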

initial mesh

UV map, automatically calculated in Blender

Result of Cycles render engine

Graphics board with advanced performance

October 25th, 2015 by David Steinmann

Last Wednesday we were supplied with a new Nvidia graphics board, which allows us to render complex and large datasets almost in real time.

Dataset with random texture, rendered with Octane in Cinema 4D

We hope that we have finally arrived at a point where the hardware and software are ready to cope with the demanding task of rendering dense meshes. Unfortunately, Cinema 4D recently started crashing with an error message concerning the Nvidia OpenGL driver, something we'll look into next week.

Image Stitching

October 24th, 2015 by David Steinmann

To map the overlapping photos captured in the tomb onto the mesh, they first need to be stitched into a single image per wall. Because of the large number of pictures and the inaccuracy of manual stitching, an automatic method is required. Photoshop's fully automatic Photomerge merges the images accurately, but since the images aren't orthophotos, the result looks distorted.

Complete wall of The Theban Rock Tomb of Meri and Hunai, stitched with Photoshop (source of raw pictures: University of Basel)
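Photomerge itself runs inside Photoshop, but a comparable fully automatic stitch can be sketched in Python with OpenCV's high-level stitcher. This is an alternative to, not part of, the workflow described here, and the file names are hypothetical (OpenCV 3.x uses cv2.createStitcher() instead):

```python
import cv2

# Hypothetical input: overlapping photos of one wall.
files = ["wall_01.jpg", "wall_02.jpg", "wall_03.jpg"]
images = [cv2.imread(f) for f in files]

# Feature-based panorama stitching with the built-in Stitcher.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("wall_stitched.jpg", panorama)
else:
    print("Stitching failed with status", status)
```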

To get rid of the distortion, the idea came up to run VisualSFM (http://ccwu.me/vsfm/) on the raw images: it estimates a 3D model from the images, on the basis of which they could then be orthorectified. However, the configuration of the raw pictures didn't contain enough information for the method to succeed.

Result of VisualSFM

Despite the distortions, the Photoshop stitching results were mapped onto a mesh dataset. The result is geometrically inaccurate but still looks aesthetically appealing.

Stitched images mapped on a mesh. The pillar in the foreground features a solid paint material.

First images mapped to simple corner with pillar

October 18th, 2015 by David Steinmann

Finding an adequate projection to show the 3D meshes in a 2D space was already a challenge for the dataset "corner" with a single pillar. While Michele managed an automatic unwrap, mine required some manual post-processing due to overlapping areas. This wouldn't be feasible for bigger datasets (which we'll be able to process soon, as a new graphics board is on its way). Even so, the result shows the potential of mapping and rendering. We plan to map the whole "corner" dataset as an example for our meeting with the archaeologists next Thursday.

Post-processed UV map (the blue colored pillar overlapped with the part below and had to be isolated)

Result of the UV map

Dataset “corner”, partially mapped with images from Theben