The Semantic Verse of 3D models

Keyhole co-founder and Second Life veteran Avi Bar-Zeev writes a long post that argues Google Earth has a long way to go before it approaches what we commonly imagine to be the metaverse. A few random quotes I liked:

What we really need is a new language of object representation that encapsulates and preserves form and function, aesthetics, style, meaning, and behavior, all tightly coupled and never discarded in the “art pipeline” until the object is finally rendered on your screen. And the big problem here is that things like semantics are so far from concrete math that any program, even if it supports the concept, can have its own varying interpretations. So this language needs to be fully expressed, down to a fairly programmatic level, so that these assumptions are clear and enforced. It should contain the instructions on how to render the 3D object, but also how to create it, use it, kick it, break it, change it, and even say what it is.
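Bar-Zeev's idea of form, meaning, and behavior traveling together can be made concrete. Purely as an illustration (every name below is hypothetical, not from his post), a "semantic object" might bundle mesh data, machine-readable meaning, and behavior hooks into one structure that is never split apart by the art pipeline:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a "semantic" 3D object: geometry, meaning,
# and behavior travel together instead of being discarded in the art pipeline.
@dataclass
class SemanticObject:
    name: str                     # what it is, human-readable
    ontology_tag: str             # what it is, machine-readable
    mesh_uri: str                 # how to render it
    behaviors: dict[str, Callable] = field(default_factory=dict)  # how to use/kick/break it

chair = SemanticObject(
    name="office chair",
    ontology_tag="furniture/chair/swivel",
    mesh_uri="models/chair.gltf",
    behaviors={
        "sit": lambda agent: print(f"{agent} sits down"),
        "kick": lambda agent: print(f"{agent} kicks the chair; it rolls away"),
    },
)

chair.behaviors["kick"]("avatar_42")
```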

So to the extent Google helps you search the non-semantic web, they can certainly help you search the non-semantic set of 3D objects too. And they’ll succeed, to the extent there’s some value to add beyond simple keyword searching (think PageRank). But is it world-changing? Not until we change the fundamental properties of the virtual world.
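That last point, keyword search plus a PageRank-style signal over non-semantic objects, is easy to sketch. Assuming (hypothetically) that each model carries free-text metadata and a count of scenes linking to it, a naive ranking might look like this:

```python
# Hypothetical sketch: keyword search over non-semantic 3D models,
# ranked by a crude link-popularity signal (PageRank in spirit only).
models = [
    {"id": "m1", "tags": "wooden chair rustic", "inbound_links": 120},
    {"id": "m2", "tags": "office chair swivel", "inbound_links": 430},
    {"id": "m3", "tags": "dining table oak", "inbound_links": 55},
]

def search(query: str):
    terms = query.lower().split()
    hits = [m for m in models if any(t in m["tags"] for t in terms)]
    # The value beyond keyword matching: order hits by how often other
    # scenes reference the model, a stand-in for real link analysis.
    return sorted(hits, key=lambda m: m["inbound_links"], reverse=True)

print([m["id"] for m in search("chair")])  # ['m2', 'm1']
```

Nothing here knows what a chair *is*; that is exactly the limitation Bar-Zeev points at.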

I run into the same thing trying to combine the semantics of short messages with the geometry of the places where their authors attach them: semantics is very far from geometry. Maybe it is closer to geography, because of the meanings we attribute to places. How do we reconcile the two?
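To make my own problem concrete, here is a minimal, purely illustrative sketch (all names and data invented) of messages whose semantics and geometry live on unrelated axes, queried by combining a geometric filter with a semantic one:

```python
import math

# Hypothetical sketch: short messages anchored to points in space.
# Geometry (lat/lon) and semantics (tags) are unrelated axes, which
# is exactly the reconciliation problem.
messages = [
    {"text": "best espresso here", "tags": {"coffee", "food"}, "lat": 45.464, "lon": 9.190},
    {"text": "quiet study spot", "tags": {"study", "quiet"}, "lat": 45.478, "lon": 9.227},
]

def nearby_with_meaning(lat, lon, tag, radius_km=2.0):
    """Combine a geometric filter (distance) with a semantic filter (tag)."""
    def dist_km(m):
        # Equirectangular approximation; fine at city scale.
        dx = math.radians(m["lon"] - lon) * math.cos(math.radians(lat))
        dy = math.radians(m["lat"] - lat)
        return 6371 * math.hypot(dx, dy)
    return [m for m in messages if tag in m["tags"] and dist_km(m) <= radius_km]

print(nearby_with_meaning(45.465, 9.19, "coffee"))
```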


