MADRAS Project

3D Models And Dynamic models Representation And Segmentation


PhD Thesis – subject #1

Segmentation of static meshes: automatic evaluation of segmentation methods and application to partial shape indexing.


Main contact: Jean-Philippe Vandeborre
  jean-philippe dot vandeborre at (anti-spam: please correct the mail address)
  direct phone number: +33 (0)3 20 33 55 96

Practical information

The PhD student will work within the MADRAS project. The grant will be about 1500 euros per month for three years.

The PhD student will work at USTL (University of Sciences and Technologies of Lille, northern France), more specifically at TELECOM Lille 1, a French Grande École located on the USTL campus.


Advances in computer speed, memory capacity, and hardware graphics acceleration have greatly increased the number of three-dimensional models being manipulated, visualized, and transmitted over the Internet. In this context, the need for efficient tools to index and retrieve this 3D content, mostly represented by polygonal meshes, becomes ever more acute. Indeed, indexing these data has become a key application for all advanced 3D users (CAD designers, multimedia designers, game designers, and so on).

Many 3D-model indexing methods have been developed in the past, based on 3D shape descriptors [1] or characteristic views [2]. Generally, the user presents an example 3D model or 2D image to the search engine, which returns 3D models ranked from most to least relevant. However, the methods developed so far are limited to entire objects, without occlusion. Thus, the next key issue in 3D indexing is the partial shape retrieval problem. That is to say: a user gives a 3D model of a human arm to the search engine and obtains other arms, but also entire human bodies containing such an arm. Segmentation is clearly an essential tool for developing new partial shape indexing paradigms and for implementing the corresponding methods. A key idea would be to perform a truly semantic segmentation, one that cuts the mesh into meaningful components in order to bring a certain understanding of the object.


Several pseudo-semantic segmentation algorithms exist [3-5]; however, they do not rely on psychophysical or perceptual models, but rather on heuristics intuitively related to visual perception. It would be quite pertinent to introduce real perceptual and cognitive models and human-vision principles when designing truly semantic segmentation algorithms. Subjective experiments with human subjects could be quite useful for this task, as they would reveal perceptual aspects of how humans perceive 3D objects and which grouping criteria they apply.

Moreover, subjective experiments could help define error measures between two segmentations. Indeed, two segmentations can be consistent, i.e. present basically the same perceptual decomposition, while being visually quite different. Such similarity / evaluation criteria are quite pertinent for our indexing purpose.
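To illustrate what such a consistency measure might look like (this is purely an illustrative sketch, not a method proposed by the project), one classical non-perceptual candidate is the Rand index: it treats each segmentation as a labeling of the mesh faces and counts the pairs of faces on which the two segmentations agree, i.e. both put the pair in the same segment or both separate it. The function and label arrays below are hypothetical examples.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index between two segmentations of the same mesh.

    Each segmentation is given as a per-face label sequence.
    Returns a value in [0, 1]; 1 means the two partitions are
    identical (regardless of the label names used).
    """
    assert len(labels_a) == len(labels_b)
    agree = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:  # both group, or both separate, faces i and j
            agree += 1
    return agree / len(pairs)

# Two segmentations of a 6-face mesh: different label names, same grouping
seg1 = [0, 0, 1, 1, 2, 2]
seg2 = ['a', 'a', 'b', 'b', 'c', 'c']
print(rand_index(seg1, seg2))  # 1.0: identical partitions
```

Note that this measure captures only combinatorial agreement; it is exactly the kind of criterion the thesis would seek to refine or replace with perceptually grounded ones derived from subjective experiments.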

The main objectives of this PhD thesis are: