Creating a new web stack for collaboration and dynamic content.

People are finding that the current web stack (HTML, JavaScript, CSS) isn’t getting the job done. The old paradigm of thick servers and thin clients is becoming obsolete as people want more reactive, app-like web pages.

In the next few years we’re probably going to be dealing with virtual/augmented reality (see the castAR and the Oculus Rift) and 3D printing. Inevitably we’re going to try to adapt our existing web infrastructure to deal with problems in those fields. They’re actually pretty similar problems: collaborative CAD and shared virtual spaces.

We’ll hack away at the existing web stack, develop new standards, and eventually get something that more or less works. I’ve seen all kinds of very clever solutions to these kinds of problems.

But the web isn’t exactly clean or easy to develop for. It’s riddled with strange design patterns and weird legacy behavior. I’d like to use the momentum from those two fields to develop something a bit more reasonable, with design patterns that naturally complement 3D and collaboration, instead of being a hacked-in afterthought. The web is a pretty well tested set of design patterns, and we don’t want to stray too far, but we could definitely make this easier.

I think we can get a pretty awesome environment by combining a few off the shelf systems.

A shared/editable HTML-inspired “scene” data structure

Basically, HTML where code can change the values, and where multiple users can interact with the same HTML “scene”. Similar to how JavaScript and the DOM work now, but with support for multiple users editing the same instance simultaneously.
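As a minimal sketch of what that might look like (all names here are hypothetical, not an existing API):

```python
# Sketch of a shared, observable scene tree (all names hypothetical).
# Like the DOM, it's a tree of tagged nodes with attributes; unlike the DOM,
# every mutation is broadcast so multiple users can edit the same instance.

class SceneNode:
    def __init__(self, tag, **attrs):
        self.tag = tag
        self.attrs = dict(attrs)
        self.children = []
        self.observers = []          # callbacks fired on every change

    def set(self, key, value):
        self.attrs[key] = value
        self._notify("set", key, value)

    def append(self, child):
        self.children.append(child)
        self._notify("append", child.tag, None)

    def _notify(self, op, key, value):
        for callback in self.observers:
            callback(self, op, key, value)

# Two "users" watching the same node:
scene = SceneNode("scene")
avatar = SceneNode("avatar", x=0, y=0)
scene.append(avatar)
avatar.observers.append(lambda n, op, k, v: print(f"user A sees {op} {k}={v}"))
avatar.observers.append(lambda n, op, k, v: print(f"user B sees {op} {k}={v}"))
avatar.set("x", 5)   # both observers are notified
```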

“Verse 2.0 is a network protocol for real-time sharing of 3D data. It is intended mostly for graphical applications of collaborative virtual reality. It could be used for sharing data between applications like Blender.”

— the Verse protocol

Integration with Blender and other design tools is an obvious asset. Editing your 3D models or your textures in real time on your development server is a very natural and easy workflow.

It uses a tree data structure, just like XML. If you want to do very fast simulation work, you can talk to it from C. If you want to use it in your web app, there are JavaScript bindings. We’re mostly going to be paying attention to the Python bindings, though.
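As a rough sketch of the workflow that implies, using a stand-in class rather than the real Verse bindings (the session and method names below are mine, for illustration only):

```python
# Hypothetical Verse-style client session (NOT the real Verse API).
# The point is the shape of the workflow: connect, subscribe to a subtree,
# and react to change notifications pushed by the server.

class FakeVerseSession:
    """Stand-in for a Verse connection, for illustration only."""
    def __init__(self, host):
        self.host = host
        self.subscriptions = {}

    def subscribe(self, node_path, callback):
        self.subscriptions[node_path] = callback

    def simulate_remote_edit(self, node_path, key, value):
        # In a real session this notification would arrive over the network.
        self.subscriptions[node_path](key, value)

session = FakeVerseSession("dev-server.local")
session.subscribe("/scene/mesh/monkey",
                  lambda k, v: print(f"monkey changed: {k} -> {v}"))
# Someone edits the mesh in Blender on the shared server:
session.simulate_remote_edit("/scene/mesh/monkey", "vertices", 507)
```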

Fast, sandboxed Python code

We need a scripting language. Python has popular support, and it’s pretty easy. We can do sandboxing via PyPy. Why we need sandboxing should be pretty obvious: we don’t want code served by some web server to be able to run arbitrary commands on our machine or install viruses or anything.
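The outer-process pattern PyPy’s sandboxing is built around can be sketched roughly like this; the binary path is a placeholder, and a real sandbox controller mediates every system call rather than just capturing output:

```python
# Sketch of the outer-process pattern: the untrusted script runs in a
# separate, sandbox-built interpreter, and a trusted parent process mediates
# everything it tries to do. "./pypy-c-sandbox" is a placeholder path.

import subprocess

UNTRUSTED_SOURCE = "print(sum(i * i for i in range(10)))"

def run_sandboxed(source, timeout=5):
    # The parent enforces a wall-clock limit and captures output. A real
    # PyPy sandbox controller would also intercept every system call the
    # child attempts and allow or deny it individually.
    proc = subprocess.run(
        ["./pypy-c-sandbox", "-c", source],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout

# run_sandboxed(UNTRUSTED_SOURCE) would return "285\n" if the placeholder
# interpreter existed; the untrusted code never touches the real filesystem.
```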

Of course, eventually we want to support other languages, perhaps via something like LLVM bytecode. But we need a standard base to work from, and splitting effort this early would be counterproductive. We’re going to need a lot of higher-level abstractions in order to make this really easy to program for.

Services design pattern

We still need a way for our sandboxed code to talk to libraries outside the sandbox. It would also be useful to be able to talk to the server in a Pythonic way, instead of just watching for changes in the Verse tree. What we lose in network performance we make up for in ease of programming.

RPyC lets us create remote “services” that behave like normal Python functions. Implement a server-side function that simply returns the user’s hit points, or implement a non-sandboxed service that lets you access very fast simulation tools written in C.
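As a concrete sketch of that first case (the service and method names are mine, not from any existing project), a minimal RPyC service might look like:

```python
# A minimal RPyC service. Methods prefixed with exposed_ are the only ones
# a connected client can call, which is what keeps the boundary secure.

import rpyc
from rpyc.utils.server import ThreadedServer

class GameService(rpyc.Service):
    # Placeholder state; a real server would read live game data here.
    _hit_points = {"alice": 74, "bob": 100}

    def exposed_get_hit_points(self, player_id):
        return self._hit_points.get(player_id, 0)

if __name__ == "__main__":
    # Serve on port 18861 (an arbitrary choice), one thread per client.
    ThreadedServer(GameService, port=18861).start()
```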

For example, you could make your Verse scene read-only and require the client to ask the server to move its avatar for it. Simplified programming, but more network overhead.
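Calling that from the sandboxed client side looks like an ordinary Python call; the move_avatar method here is hypothetical, standing in for the server-mediated move just described:

```python
import rpyc

# Connect to the service sketched above.
conn = rpyc.connect("localhost", 18861)

# conn.root proxies the service; this invokes exposed_get_hit_points remotely.
print(conn.root.get_hit_points("alice"))   # -> 74

# With a read-only Verse scene, movement would go through a similar call,
# e.g. a hypothetical exposed_move_avatar that validates the move server-side:
# conn.root.move_avatar("alice", dx=1, dy=0)
```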

It makes implementing a secure plugin as simple as writing some Python code. I’d like to see this evolve into an Android-style, capability-based security permissions model.

Rendering with scene graphs

[Image: a road scene with a truck carrying shared crates, from Leandro Barros’ intro to OpenSceneGraph, which nicely explains exactly what a scene graph is.]

Scene graphs are a popular way of rendering a 3D scene. There are scene graph implementations in JavaScript, and OpenSceneGraph runs on everything from desktop computers to Android tablets. A scene graph lets us separate our presentation from our logic. That means that simply by recording changes in the scene graph we can create 3D screenshots or videos, changing render settings or camera position when we view them. We could do all the heavy lifting on a desktop or cloud computer and forward the scene graph to a mobile device or a specialized rendering cluster.
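To make the idea concrete, here’s a toy scene graph in Python (illustrative only, not OpenSceneGraph’s API), echoing the road/truck/crates example from the image above:

```python
# Toy scene graph: a tree of nodes where transforms compose down the
# hierarchy, so the same graph can be handed to different renderers,
# recorded, and replayed later.

class Node:
    def __init__(self, name, transform=(0.0, 0.0, 0.0)):
        self.name = name
        self.transform = transform     # translation only, for brevity
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def traverse(node, parent_pos=(0.0, 0.0, 0.0)):
    """Walk the graph, composing transforms; a renderer would draw here."""
    pos = tuple(p + t for p, t in zip(parent_pos, node.transform))
    print(f"{node.name} at {pos}")
    for child in node.children:
        traverse(child, pos)

root = Node("road")
truck = root.add(Node("truck", transform=(10.0, 0.0, 0.0)))
truck.add(Node("crate", transform=(0.0, 1.0, 0.0)))  # crates ride along
traverse(root)  # moving "truck" moves every crate attached beneath it
```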

Of course we’d provide access to our scene graph rendering via a service, and you could easily replace it with proper OpenGL bindings.


Right now I don’t have the time to really work on all of that, though. I’ve got some code working, but there’s a lot to do, and I’m spending most of my time on 3D printing related stuff. Any devs feel like talking about it? I’m still refining the system architecture. Drop me a line or just comment.
