The FPGA and the DRAM chips next to it can perform real-time video processing, giving hackers the ability to experiment with hardware-level transformations that do not require cooperation from the video-generating device or application.
Some of the options that are currently implemented:
1. Pass-through. This is the simplest mode and it does not involve the DRAMs. 1080x1920 video from the HDMI receiver is essentially passed through to the MIPI interface that drives the OLED screen.
2. Image rotation. This mode performs a real-time 90-degree rotation, so that standard landscape-mode 1920x1080 video can be presented on the 1080x1920 display. To do this, a full video frame is stored in the DRAM chips while the previous frame is sent to the display. This 90-degree rotation does cost you one frame of latency (a software sketch of the double-buffering follows this list), but can be very useful in scenarios such as:
- Video coming from a 1920x1080 source such as a DVD player, or a PC in "replicated desktop" mode.
- Video coming over a low-latency wireless link. These links primarily support 1920x1080 today and not the native 1080x1920 mode.
3. Conversion of full screen to side-by-side mode. Ever seen a desktop in the HMD and found yourself squinting to view one half at a time? When this conversion mode is enabled, the video signal is converted into two identical side-by-side copies, so the same image can be shown to both eyes at the same time. This mode is controlled with a command over the HID feature interface or with a simple utility (see the sketch after this list).
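To make the double-buffering behind the rotation mode concrete, here is a minimal software model, not the FPGA implementation; the buffer arrangement and names are illustrative. The incoming frame is written to one buffer while the previously captured frame is read out of the other, rotated, which is exactly where the one frame of latency comes from.

```python
import numpy as np

# Software model of the rotation mode: two buffers stand in for the DRAM
# chips.  A 1920x1080 landscape frame is written into one buffer while the
# previously captured frame is read out of the other, rotated 90 degrees.
IN_H, IN_W = 1080, 1920  # landscape input: 1080 rows of 1920 pixels

def rotation_pipeline(frames):
    """Yield 90-degree-rotated frames, each one frame behind the input."""
    buffers = [np.zeros((IN_H, IN_W), dtype=np.uint8) for _ in range(2)]
    write = 0
    for n, frame in enumerate(frames):
        buffers[write][:] = frame                     # store incoming frame N
        if n > 0:                                     # frame N-1 is complete
            yield np.rot90(buffers[1 - write], k=-1)  # emit it in portrait orientation
        write = 1 - write                             # swap buffers

# Feed three dummy frames through the model.
source = (np.full((IN_H, IN_W), i, dtype=np.uint8) for i in range(3))
for out in rotation_pipeline(source):
    print(out.shape)  # (1920, 1080) rows x cols, i.e. the 1080x1920 portrait panel
```

The "simple utility" for toggling side-by-side mode could be as small as the sketch below, using the hidapi Python bindings. The vendor/product IDs, report ID, and payload bytes are placeholders for illustration only, not the HDK's actual protocol; consult the firmware documentation for the real values.

```python
import hid  # Python bindings for hidapi (pip install hidapi)

# Placeholder identifiers -- NOT the HDK's real values.
VENDOR_ID = 0x1234
PRODUCT_ID = 0x5678
REPORT_ID = 0x01          # hypothetical feature-report ID
SIDE_BY_SIDE_ON = 0x01    # hypothetical payload byte
SIDE_BY_SIDE_OFF = 0x00

def set_side_by_side(enable: bool) -> None:
    """Toggle the FPGA's side-by-side conversion over the HID feature interface."""
    dev = hid.device()
    dev.open(VENDOR_ID, PRODUCT_ID)
    try:
        payload = SIDE_BY_SIDE_ON if enable else SIDE_BY_SIDE_OFF
        dev.send_feature_report([REPORT_ID, payload])
    finally:
        dev.close()

if __name__ == "__main__":
    set_side_by_side(True)
```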
Additional modes that are not yet implemented but can be implemented by the community:
- Real-time distortion correction. If you have a non-cooperative video source, or simply prefer to use your GPU for something else, performing the distortion correction in the FPGA can be useful.
- Resolution up-scaling: converting lower-resolution video to the full native resolution of the HDK.
- Color enhancements (e.g. gamma, HSI improvements)
- Rearranging the video signal. One cool application that we saw from one of our partners is rearranging the 1080x1920 output of a GPU into a non-legible 1920x1080 image, sending that over the low-latency wireless video link, and then using the FPGA to unscramble the image. This allows wireless video transmission without "paying" the 1-frame latency penalty; one possible repacking is sketched below.
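As an illustration of that last bullet, here is one possible repacking; the partner's actual scheme may differ, and this is only a software sketch. Because a 1080x1920 portrait frame and a 1920x1080 landscape frame contain the same number of pixels, the GPU can simply re-break the portrait raster into 1920-pixel lines. The pixels still arrive in their original raster order, so the FPGA only needs a small line buffer to restore the portrait scanlines, rather than a full frame of DRAM, which is why the 1-frame latency penalty disappears.

```python
import numpy as np

# Portrait frame as rendered by the GPU: 1920 rows of 1080 pixels.
portrait = np.arange(1920 * 1080, dtype=np.uint32).reshape(1920, 1080)

# GPU side: re-break the same raster-ordered pixel stream into a
# 1080x1920 landscape frame that the wireless link will accept.
scrambled = portrait.reshape(1080, 1920)

# FPGA side: undo the repacking.  Because the pixel order is unchanged,
# this only needs a small line buffer in hardware, not a full frame.
restored = scrambled.reshape(1920, 1080)

assert np.array_equal(restored, portrait)  # lossless round trip
```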
If a manufacturer were very cost-conscious, they probably would not include the FPGA in the design, but as a hacker developer kit, we think it's an excellent exploratory option.
What could you do with it?