Finally an idea!
Briefly, the research question: can an autonomous agent maximize contextual spatial cues by building a semantic description of the spatialized communication between peers?

The idea is simple and builds on the bits of research we have passed through over the last few months. We have people walking through the city space. Each user carries a mobile phone able to track their position, provide a 2D map of the city space, and offer some authoring functionality. From the users’ interactions within the small group, an autonomous agent applies some (partially statistical) methods that translate the virtual objects into an RDF description. This semantic data can then be re-aggregated on the fly by the users to retrieve useful information or to navigate the space. On the operational level, the semantic structure is used in combination with a script that regulates the editing rules and the negotiation process.
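To make the translation step concrete, here is a minimal sketch of how a user-authored virtual object could be flattened into RDF-style triples. Everything here is an assumption for illustration: the `VirtualObject` fields, the `example.org` namespace, and the predicate names are placeholders, not the project's actual schema.

```python
from dataclasses import dataclass

EX = "http://example.org/vocab#"  # placeholder namespace, not a real vocabulary

@dataclass
class VirtualObject:
    """Hypothetical shape of an object authored by a user in the city space."""
    obj_id: str
    author: str
    lat: float
    lon: float
    note: str

def to_triples(obj: VirtualObject) -> list[tuple[str, str, str]]:
    """Flatten a virtual object into (subject, predicate, object) triples."""
    s = f"{EX}object/{obj.obj_id}"
    return [
        (s, f"{EX}author", obj.author),
        (s, f"{EX}lat", str(obj.lat)),
        (s, f"{EX}lon", str(obj.lon)),
        (s, f"{EX}note", obj.note),
    ]

# Example: one annotation left by a user at a given position.
triples = to_triples(VirtualObject("42", "alice", 45.07, 7.68, "quiet square"))
for s, p, o in triples:
    print(s, p, o)
```

Re-aggregation "on the fly" would then amount to querying such triples, e.g. selecting all objects whose `lat`/`lon` fall near the user's current position.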

To structure further:
1. collaboration model
2. system architecture
3. RDF vs. XML? Which schema?
