OpenHDCapture

Description

This project will capture high-definition video at 1280x720 and 30 fps, and hopefully be capable of 60 fps and maybe even 1080p. I intend to use a cheap FPGA, a Hi-Speed USB PHY, and an Analog Devices video A/D chip; I will nail down exact part numbers later. The driver and example code will target GNU/Linux. This project is more a proof of concept, done for fun and exercise, than something meant to be practical, but I will take any advice and help I can get, and who knows what we might create here. The entire BOM should come in under $40, but we'll see.

Yes, I know about the Hauppauge HD-PVR (or whatever it's called) that captures component video at 720p. The problem is that it uses H.264 encoding, which is far too expensive to decode: it pegs a dual-core 1 GHz machine at 99%. It is also not very open to hacking.

FAQ (Frequently Asked Questions)

1. Why analog? Digital is the future, man! Haven't you heard?

This project's intended application, the professional A/V embedded market, has always had and will continue to have, for the foreseeable future, a need for analog. Adding digital I/O (technically DVI-D) later just means adding another fairly simple chip and bus. Analog is here to stay in the pro world for at least the next decade. I know what I am talking about here ;)

If you don't work with commercial video installations and large video systems on a regular basis, and instead just play with a home plasma and Blu-ray home theater setup, then starting with analog will seem silly. I understand; that is the extent of most people's background in video.

Also, the digital HDMI chips (technically DVI-D for this application) are pretty simple to add in later. Right now HD-SDI would be more useful for the pro world, as it can travel farther over coax and the cable is cheap per foot compared to HDMI or DVI-D cable.

Finally, I cannot actually see much difference, if any, between digital and analog. At my company we did the "Pepsi Challenge" between a 6' HDMI cable and 100' of component cable, and we could not tell a difference. Not very scientific, but good enough for me; I don't need a peer review to conclude the difference is negligible. Analog is still susceptible to ground loops from multiple ground points and to interference, but in practice we saw no difference.

2. Have you heard of http://www3.elphel.com/ ? It is an HD camera with a Xilinx FPGA onboard.

No, but that's a really cool project. I am interested in how they handle USB and might be able to grab some ideas for this project. They use Theora, which is great. This project will keep compression to the minimum needed to fit HD video on the 480 Mbit/s Hi-Speed USB bus.

3. Why is this project not using Theora or Dirac? What are you using for a video compressor?

This project is not a video compressor; it is a video capture device. Very little compression is needed to get 720p60 onto Hi-Speed USB, and what little compression is done will be image compression on individual frames. Once the data is in the CPU, the user may compress it with Theora or Dirac, store it raw, or process it however they like.

I want to modify the video stream as little as possible. Video compression implies decompression, and since many users will want to use the data live, let's not add an extra decompression step for the CPU and the user application. If they want to capture the stream, the user application can store it to the hard disk in whatever format the user likes (hopefully Theora or Dirac).

Finally, video compression is expensive in hardware: it adds SDRAM and more gates to handle the re-encoding that Theora and other video codecs require. There is a huge difference in the hardware required for video compression versus image compression.

How To Donate to This Project

You may donate to this project by clicking here: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=VREPAVYCLJ2J4

You can see how donations are used in the Donations.xls file in the git directory (Soon to be added).

How To Volunteer

If you are interested in helping, please mention @electronjunkie on Twitter (http://twitter.com/electronjunkie) or email me at lm317t_AT_gmail_DOT_com (replace _DOT_ and _AT_ with "." and "@" to make a valid address).

I need help with board layout in Eagle, USB interfacing and drivers, and organization.

Current Status / News

03/25/2010 Mailing List has been added, here is the mailman link to subscribe: https://lists.sourceforge.net/lists/listinfo/openhdcapture-discussion

03/24/2010 USB Chip is the USB3250. Working on schematic capture and part selection.

03/19/2010 The USB chip will likely be the USB3250; the USB3318 would be slightly more difficult to hook up to the UTMI core from OpenCores.

03/12/2010 Chris (http://twitter.com/chrisindallas) has offered to help with the USB portion of the project.

03/11/2010 Project now has git hosting on git://openhdcapture.git.sourceforge.net/gitroot/openhdcapture/openhdcapture

03/11/2010 Added "How To Volunteer" section. Found this core that should work with the USB chip here: http://www.opencores.org/project,usb

03/09/2010 Created the AD9883A part and package in Eagle. I began researching what it takes to get the USB3318 chip to work. I am also considering the TSB41AB1 1394/FireWire chip, since USB is a PITA to work with because of the protocol overhead. If we do FireWire, it will be in addition to USB.

I am in the planning/brainstorming stages, although I have been researching this for at least the last 6-8 months.

I have a working forward/reverse DCT algorithm in MATLAB (actually GNU Octave) that can compress an image 5:1 with little loss in quality. I have also done a lot of research on putting a DCT in hardware. Currently the plan is to break the DCT into two stages, with all multiplies/adds done in a highly parallel, bit-wide pipeline to keep clock speeds high.
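
Below is a minimal C reference of that split, assuming the two stages are the usual row pass and column pass of a separable 8x8 2D DCT. It is floating point only and exists just to pin down the math; the FPGA version would use fixed-point multiplies spread across a pipeline.

 /* Reference 8x8 forward DCT, split into two 1D passes (rows, then
  * columns) to mirror the planned two-stage pipeline.  Floating point
  * only; the hardware version would use fixed-point multiplies. */
 #include <math.h>

 #define N 8

 static void dct_1d(const double in[N], double out[N])
 {
     for (int u = 0; u < N; u++) {
         double scale = (u == 0) ? sqrt(1.0 / N) : sqrt(2.0 / N);
         double sum = 0.0;
         for (int n = 0; n < N; n++)
             sum += in[n] * cos((2 * n + 1) * u * M_PI / (2.0 * N));
         out[u] = scale * sum;
     }
 }

 /* Stage 1: 1D DCT on every row.  Stage 2: 1D DCT on every column. */
 void dct_8x8(const double block[N][N], double coeff[N][N])
 {
     double tmp[N][N], col_in[N], col_out[N];

     for (int r = 0; r < N; r++)
         dct_1d(block[r], tmp[r]);

     for (int c = 0; c < N; c++) {
         for (int r = 0; r < N; r++)
             col_in[r] = tmp[r][c];
         dct_1d(col_in, col_out);
         for (int r = 0; r < N; r++)
             coeff[r][c] = col_out[r];
     }
 }

The transform itself is lossless apart from rounding; quantizing and entropy coding the coefficients is where the actual size reduction comes from.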

Data Bandwidth Issues

YUV 4:2:2 720p data comes in at roughly 1280*720*30*16 ≈ 442 Mbit/s, and 720p60 is twice that. Hi-Speed USB is 480 Mbit/s, which, after protocol overhead ("taxes"), is probably not enough. Either way, using the DCT and some Huffman coding along with other simple compression techniques, we can squeeze the data down a little without hurting quality too badly. My goal is to get it down to 150 Mbit/s.
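
As a quick sanity check of those numbers (raw payload only, before USB protocol overhead; the 150 Mbit/s figure is the target stated above):

 /* Back-of-the-envelope bandwidth check for 4:2:2 YUV 720p. */
 #include <stdio.h>

 int main(void)
 {
     const double w = 1280, h = 720, bpp = 16;   /* 4:2:2 = 16 bits/pixel */
     const double target_mbps = 150.0;           /* project goal          */

     for (int fps = 30; fps <= 60; fps += 30) {
         double mbps = w * h * bpp * fps / 1e6;
         printf("720p%d raw: %.0f Mbit/s, needs ~%.1f:1 to reach %.0f Mbit/s\n",
                fps, mbps, mbps / target_mbps, target_mbps);
     }
     return 0;
 }

This prints roughly 442 Mbit/s (about 2.9:1 needed) for 720p30 and 885 Mbit/s (about 5.9:1 needed) for 720p60, which is why the ~5:1 DCT result above matters.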

Improvements

Yes, video data is highly redundant, but the only way to exploit that redundancy is with much more complexity and hardware/memory. For example, to encode just the difference between two frames you must know what the last compressed frame was; in hardware that means storing the previous compressed frame, decompressing that block, and then comparing the data. This roughly doubles the complexity and hardware, and pushes the memory requirements well beyond what cheap FPGAs provide (a single 4:2:2 720p frame alone is about 1.8 MB, far more than a low-cost FPGA's block RAM). Any help here would be appreciated, though.

License

All files for this project are licensed under the GNU GPL v3.


Prototyping Hardware (Under constant review)

Software

An example program will be written using the SDL library. The GNU/Linux driver will likely use libusb unless I have to go lower level. Hopefully the driver will be V4L2 compliant; if anyone knows how to help with this, please contact me.
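
As a rough sketch of what the SDL side might look like, here is a minimal SDL 1.2 loop that pushes packed 4:2:2 (YUY2) frames through a YUV overlay. The get_frame() hook is a hypothetical stand-in for whatever the capture driver eventually provides; here it just fills a grey test frame.

 /* Minimal SDL 1.2 display loop for packed YUV 4:2:2 (YUY2) frames. */
 #include <string.h>
 #include <SDL/SDL.h>

 #define W 1280
 #define H 720

 /* Hypothetical capture hook: fills one YUY2 frame (2 bytes/pixel). */
 static void get_frame(Uint8 *dst)
 {
     for (int i = 0; i < W * H; i++) {
         dst[2 * i]     = 0x80;   /* Y: mid grey           */
         dst[2 * i + 1] = 0x80;   /* Cb/Cr: neutral chroma */
     }
 }

 int main(void)
 {
     static Uint8 frame[W * H * 2];
     SDL_Rect rect = { 0, 0, W, H };
     SDL_Event ev;
     int running = 1;

     if (SDL_Init(SDL_INIT_VIDEO) != 0)
         return 1;
     SDL_Surface *screen = SDL_SetVideoMode(W, H, 0, SDL_SWSURFACE);
     if (!screen)
         return 1;
     SDL_Overlay *ov = SDL_CreateYUVOverlay(W, H, SDL_YUY2_OVERLAY, screen);
     if (!ov)
         return 1;

     while (running) {
         while (SDL_PollEvent(&ev))
             if (ev.type == SDL_QUIT)
                 running = 0;

         get_frame(frame);
         SDL_LockYUVOverlay(ov);
         for (int y = 0; y < H; y++)   /* copy line by line to honor the overlay pitch */
             memcpy(ov->pixels[0] + y * ov->pitches[0], frame + y * W * 2, W * 2);
         SDL_UnlockYUVOverlay(ov);
         SDL_DisplayYUVOverlay(ov, &rect);
     }

     SDL_FreeYUVOverlay(ov);
     SDL_Quit();
     return 0;
 }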

Current Plan of Action

Phase 1

  • Design a PCB with the USB PHY and video A/D converter that can interface with my Spartan FPGA board
  • Write a simple USB state machine for the FPGA
  • Write a simple driver that can interface with the USB PHY and FPGA
  • Write a simple program that can flash LEDs on the FPGA and read data from the FPGA (see the sketch after this list)
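
A rough sketch of that test program using libusb-1.0 is below; the VID/PID, the bulk IN endpoint address, and the "set LED" vendor request code are all placeholders, since the real values depend on the descriptors the FPGA/USB3250 firmware ends up presenting.

 /* Toggle an LED via a vendor control request, then read a block of
  * test data from a bulk IN endpoint.  All IDs below are placeholders. */
 #include <stdio.h>
 #include <libusb-1.0/libusb.h>

 #define VID   0x1234   /* placeholder vendor ID        */
 #define PID   0x5678   /* placeholder product ID       */
 #define EP_IN 0x81     /* placeholder bulk IN endpoint */

 int main(void)
 {
     unsigned char buf[16384];
     int got = 0;

     if (libusb_init(NULL) != 0)
         return 1;

     libusb_device_handle *dev = libusb_open_device_with_vid_pid(NULL, VID, PID);
     if (!dev || libusb_claim_interface(dev, 0) != 0) {
         fprintf(stderr, "device not found or busy\n");
         libusb_exit(NULL);
         return 1;
     }

     /* Vendor request 0x01 = "set LED state" (made up for this sketch). */
     libusb_control_transfer(dev,
         LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE,
         0x01 /* bRequest */, 1 /* wValue: LED on */, 0, NULL, 0, 1000);

     /* Pull one buffer of data from the FPGA. */
     if (libusb_bulk_transfer(dev, EP_IN, buf, sizeof buf, &got, 1000) == 0)
         printf("read %d bytes\n", got);

     libusb_release_interface(dev, 0);
     libusb_close(dev);
     libusb_exit(NULL);
     return 0;
 }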

Phase 2

  •  ???

Phase 3

  • Profit!


File Hosting

This project's git hosting is at:

git clone git://openhdcapture.git.sourceforge.net/gitroot/openhdcapture/openhdcapture (read-only)