Illustration from Meta SpaceGlasses
OSVR includes two independent parts: an open-source HMD (the "Hacker Development Kit") and the OSVR framework, a free and open-source software platform that provides an easy, standard way to discover, configure, and operate a wide range of virtual and augmented reality peripherals.
Examples of such peripherals include head trackers, hand and finger sensors (like Leap Motion and SoftKinetic), gesture-control devices (such as the Myo armband and the Nod ring), cameras, eye trackers and many others. Most of these devices are used not only in virtual reality applications but also in augmented reality applications, so the services that the OSVR framework provides are just as useful for AR as they are for VR.
What services does OSVR provide? There are many, but some of the key ones are:
- Discovery. Determine which peripherals and devices are connected to the system.
- Configuration and auto-configuration. Configure each peripheral and the interconnections between them.
- Operation. Extract data and events from the peripherals and pass them to the application in a standardized way. Support both a state (synchronous) and an event (asynchronous) programming model, or any combination of the two; see the sketch after this list.
- Store and restore configuration to/from a file or the cloud.
- Provide user parameters such as eye separation.
- Provide optimized connectors to popular engines such as Unity and Unreal.
- Support multiple operating systems and hardware platforms.
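To make the point about the two programming models concrete, here is a minimal sketch using the OSVR ClientKit C++ API. The application identifier and the loop structure are illustrative; /me/head is one of OSVR's standard semantic paths.

```cpp
#include <osvr/ClientKit/ClientKit.h>
#include <osvr/ClientKit/InterfaceStateC.h>
#include <iostream>

// Event (asynchronous) model: this callback fires whenever the server
// delivers a new pose report for the interface.
void headPoseCallback(void * /*userdata*/, const OSVR_TimeValue * /*timestamp*/,
                      const OSVR_PoseReport *report) {
    std::cout << "Head at ("
              << osvrVec3GetX(&report->pose.translation) << ", "
              << osvrVec3GetY(&report->pose.translation) << ", "
              << osvrVec3GetZ(&report->pose.translation) << ")\n";
}

int main() {
    // The context is this application's connection to the OSVR server.
    osvr::clientkit::ClientContext context("com.example.MyARApp");

    // Interfaces are requested by semantic path, not by device model.
    osvr::clientkit::Interface head = context.getInterface("/me/head");
    head.registerCallback(&headPoseCallback, nullptr);

    for (int i = 0; i < 1000; ++i) {
        context.update(); // pump messages; registered callbacks fire here

        // State (synchronous) model: poll the most recent value on demand.
        OSVR_TimeValue timestamp;
        OSVR_PoseState pose;
        if (osvrClientGetPoseState(head.get(), &timestamp, &pose)
            == OSVR_RETURN_SUCCESS) {
            // pose.translation and pose.rotation hold the current state.
        }
    }
    return 0;
}
```

The same interface can be consumed either way, and an application can freely mix the two models.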
For instance, suppose you are developing an application that uses the Leap Motion sensor, but you also want users to be able to operate it with the SoftKinetic camera. You have two options:
Option one: do it yourself. Learn the individual API of each of these cameras and write multiple pieces of code to support them. Then, as these APIs evolve, continuously update your application. After a while, you want to support a third type of camera, say the Intel RealSense, so you add yet another branch of code. Of course, each of these devices may report data differently: they may use different coordinate systems or different units. Your application needs to handle all of these configurations.
Option two: use OSVR. You learn just one API, which abstracts the various types of cameras. As new cameras appear, the hardware vendor (or the OSVR community) is likely to add a plugin for them quickly, meaning your application keeps working after nothing more than a download of a new OSVR plugin. If the community does not move fast enough, OSVR is open-source and well-documented, so you can write the plugin yourself. Coordinate systems and units stay consistent, and you are not tightly dependent on any one piece of hardware.
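What this buys you in code: the application asks for a semantic path (/me/hands/left is one of OSVR's standard paths), and the server configuration, not the application, decides which vendor's plugin supplies the data. A sketch, reusing the callback style from the example above:

```cpp
// Identical application code whether the server is configured to use a
// Leap Motion, SoftKinetic, or RealSense plugin behind this path.
osvr::clientkit::ClientContext context("com.example.MyARApp");
osvr::clientkit::Interface leftHand = context.getInterface("/me/hands/left");

// handPoseCallback has the same OSVR_PoseCallback signature as before.
leftHand.registerCallback(&handPoseCallback, nullptr);
```

Swapping cameras then becomes a server-side configuration change plus a plugin download; the application binary is untouched.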
OSVR also provides a growing list of analysis plugins that latch onto the output of device plugins and provide high-value processing such as a gesture engine (converting motion into gestures), data smoothing, target recognition and more.
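From the application's perspective, an analysis plugin's output is just another node in the OSVR path tree. A sketch, assuming the server configuration aliases a smoothing plugin's output to a path of your choosing (the alias below is hypothetical):

```cpp
// Hypothetical alias: the server routes a smoothing analysis plugin's
// output here, and the app consumes it exactly like a raw device interface.
osvr::clientkit::Interface smoothedHead =
    context.getInterface("/analysis/smoothed/head"); // illustrative path
smoothedHead.registerCallback(&headPoseCallback, nullptr);
```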
Next time you think about your AR software architecture, think OSVR.