Hello, I am a broadcaster for an American high school. One of our biggest productions is American football, and we have recently been looking into ways to increase the quality and innovation of our broadcasts. A fundamental element of American football on television is the yellow down line, which simply shows how far the team has to go to get a first down. The technology behind that yellow line is advanced and, for the most part, used only in professional broadcasts.
There are two challenges in building this: 1) making the line appear on the field but under the players, and 2) keeping the line in the correct place as the camera moves. The first part is easy — we can put the line under the players with chroma keying in our systems. The problem we have is getting the line to stay in place.
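To illustrate the keying half, here is a minimal sketch of the idea (the function name, colors, and tolerance are my own assumptions, not any particular system's API): the line is composited only onto pixels that look like field green, so players automatically occlude it.

```python
import numpy as np

def composite_line(frame, line_mask, field_color=(40, 120, 40), tol=60):
    """Draw the yellow line only where the underlying pixel looks like
    field green; non-green pixels (players, refs) occlude the line.

    frame:      HxWx3 uint8 BGR/RGB image
    line_mask:  HxW bool array marking where the line would be drawn
    """
    # Per-pixel color distance from the assumed field green
    diff = np.abs(frame.astype(int) - np.array(field_color)).sum(axis=2)
    is_field = diff < tol
    out = frame.copy()
    # Paint yellow only where the line overlaps field-colored pixels
    out[line_mask & is_field] = (255, 255, 0)
    return out
```

In a real setup the field color and tolerance would be sampled from the actual turf before kickoff rather than hard-coded.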
The First & Ten system by Sportvision Inc. is the industry leader in this technology, and it uses state-of-the-art software to make the system work. It captures data from the camera such as pan, tilt, and zoom, then sends that data to a computer where a 3D model of the field, with the line, is drawn. (https://www.youtube.com/watch?v=-MvUTaukYmM) We have neither the hardware nor the software needed for this, so we decided to simplify it, which brings me here.
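For anyone unfamiliar with the pan/tilt/zoom approach, the core geometry can be sketched with a simple pinhole-camera model (this is my own simplification, not Sportvision's actual math): if you know the camera's pan angle and focal length, you can compute where a yard line of known bearing lands on screen.

```python
import math

def yardline_screen_x(pan_deg, focal_px, line_bearing_deg, cx=960.0):
    """Pinhole-camera sketch: horizontal pixel position of a yard line
    whose bearing from the camera is line_bearing_deg, when the camera
    is panned to pan_deg. focal_px is the focal length in pixels (zoom),
    cx is the image center for a 1920-wide frame (assumed values)."""
    return cx + focal_px * math.tan(math.radians(line_bearing_deg - pan_deg))
```

The real system also accounts for tilt, lens distortion, and the 3D surface of the field, but this shows why instrumented camera data alone is enough to place the line.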
We were thinking of using augmented reality to accomplish this — more specifically, glyphs. Markerless motion tracking is out of the question because it is not performant or reliable enough. We have a camera positioned at the fifty yard line in a press box. On the opposite side of the field, there are elevated bleachers with a railing facing us — the perfect place to put glyphs. Here is what we are thinking:
By placing a glyph on the stands in line with each 10-yard marker, we can keep at least one glyph in the camera's frame at all times, allowing the line to be drawn in the proper position. We just have to register each glyph's position with its corresponding field location in the model. So the flow is as such:
1. Input camera feed to computer through a DeckLink card
2. System identifies and tracks glyphs
3. Model is placed with the line
4. The composited line is output through a DeckLink card
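The positioning step (2–3) could start as something very simple. As a sketch, assuming the glyph detector reports the screen x of each recognized glyph and we know which yard line each glyph sits behind, the down line's screen position at a fixed zoom is roughly a linear interpolation between two detected glyphs (all names here are hypothetical, not GRATF's API):

```python
def line_screen_x(detected_glyphs, target_yard):
    """Estimate the screen x of the first-down line from detected glyphs.

    detected_glyphs: dict mapping a glyph's known yard position to the
                     screen x where the detector found it, e.g. {30: 100, 40: 300}
    target_yard:     yard line where the first-down line should be drawn

    Assumes the yard->pixel mapping is roughly linear across the frame
    at a fixed zoom level, which is a crude first approximation.
    """
    (y0, x0), (y1, x1) = sorted(detected_glyphs.items())[:2]
    pixels_per_yard = (x1 - x0) / (y1 - y0)
    return x0 + (target_yard - y0) * pixels_per_yard
```

A proper implementation would instead fit a homography between glyph positions and field coordinates, since perspective makes the mapping nonlinear, but even this crude version shows why two glyphs in frame is enough to anchor the line.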
We were thinking of using Blender for this. Does this sound like a feasible system? And if so, is GRATF the right solution for it?