Today's interview is with Greg Heuss, President of Eyealike (www.eyealike.com), a Bellevue-based startup that just unveiled a new product focused on detecting video copyright violations in Internet user-generated content. The firm is showing its product at the DEMO 2008 conference in Palm Springs, California this week. We spoke with Greg last week, ahead of the company's product launch, to get more on the firm.
Tell us what Eyealike does, and about your new product.
Greg Heuss: We are trying to revolutionize the way that people go about visual search. We have a product that we are launching at DEMO which addresses the video copyright side, and last month we launched Eyealike faces, which has the ability to take still images and identify facial structures. It's a tough technology to do, but we've gotten very good at it. After still images, the next iteration is video. Our product basically goes out to the web, across the sites you see out there, the content aggregators like YouTube, Metacafe, and Yahoo Video--and finds any videos that happen to use non-licensed or copyrighted materials. I'm sure you have heard about the lawsuits against Google and YouTube, and all of the fuss about that. Our technology goes out and finds video within those user-generated content sites, and identifies it out of a catalog of video out there. We store and train our system from a library of content from a Sony, or Warner, or Viacom--and then identify, flag, or expedite how to monetize that content. That might be to get them to pay a royalty, to get them to take it down, or whatever it might be. We've got the technology that lies behind that.
How was this technology developed, and how long has it been in development?
Greg Heuss: The company started back in 2003, and a lot of the technology--especially in visual recognition of faces and video--comes from universities in this country. As you know, the University of Washington is here, and it's one of the top five universities in the country, or even the world, in image search or image-based recognition. Linda Shapiro, a professor there, is on our board as an advisor. The technology has been developed through former Ph.D. students, Linda, and our working relationships with them, along with the great minds in research and development on our staff. That's how it all started. The video piece we have been working on all year, but the still images we've been working on for quite a while.
Why the move to take this technology from still images to video?
Greg Heuss: It's the next natural step, to take it from still images into video. If you look at the numbers, and the trends on the web, and how gigantic YouTube is getting, you can see the web is turning more and more towards video. Still images are a fine market, and there is lots of potential there, but we wanted to move the technology to the next level. Frankly, there is no one else out there doing both video and still images. We feel we're well positioned to cross into each of those. In the video world, there are no technology solutions as good as ours that solve the problem.
Is the product ready to launch, and when will people start using this?
Greg Heuss: Officially, we're launching next week. Our product has been built out as a beta, and more work needs to be done, but we're in talks with some of the major studios out there about the technology. We feel we're ready to deliver product very shortly. It should happen well within the year.
We've seen lots of other firms starting to look at the copyright issue. What's different about you, and why is your technology better?
Greg Heuss: This is not easy. I think the toughest thing is the scalability and accuracy required. The two go hand-in-hand. Others are using a couple of things, such as watermarking and tagging images, and going frame by frame through videos to determine the "DNA". That's kind of cheating. We're doing it the old-fashioned way. We're comparing things frame by frame, image by image, and tracking everything which is moving or still, side-by-side. Our accuracy level is at the 98 percent to 99 percent level, with very few false positives. Trying to do that with watermarking doesn't work -- anyone is able to upload images from MTV or wherever they're pulling them from, and you can scan the "DNA" or watermark out of them, so that approach is kind of helpless. By doing it the way we're doing it, frame by frame, our accuracy and scalability are better than our competitors'.
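To make the watermarking contrast concrete, here is a minimal sketch of the general idea behind content-based frame comparison, using a simple "average hash" fingerprint. This is purely illustrative -- Eyealike has not disclosed its algorithm -- but it shows why a fingerprint computed from the pixels themselves survives re-encoding, while an embedded watermark can simply be stripped out:

```python
# Illustrative sketch only: a simple "average hash" frame fingerprint.
# This is NOT Eyealike's actual algorithm; it demonstrates why comparing
# frame content directly is robust to edits that would remove a watermark.

def average_hash(pixels):
    """Fingerprint a grayscale frame (flat list of 0-255 values):
    each bit records whether a pixel is brighter than the frame mean."""
    mean = sum(pixels) / len(pixels)
    fp = 0
    for p in pixels:
        fp = (fp << 1) | (1 if p > mean else 0)
    return fp

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A re-encoded copy shifts pixel values slightly but keeps the overall
# bright/dark pattern, so its fingerprint stays close to the original's.
original = [10, 200, 15, 190, 20, 210, 12, 205]
reencoded = [12, 195, 18, 188, 25, 205, 14, 200]
unrelated = [200, 10, 190, 15, 210, 20, 205, 12]

print(hamming(average_hash(original), average_hash(reencoded)))  # → 0
print(hamming(average_hash(original), average_hash(unrelated)))  # → 8
```

Real systems fingerprint downscaled frames (e.g. 8x8 grids) and add robustness to cropping and color shifts, but the principle is the same: the signature comes from the content, so there is nothing for an uploader to scrub out.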
Can you describe more how your technology works?
Greg Heuss: We're processing hundreds of images and tracking movement. The way we're going about comparing the two is unique. Not only can we match a Madonna video owned by Sony to one that shows up on YouTube, we can detect someone using part of a video in their own user-generated content. If you've ever been up on YouTube, you see they don't put up "Madonna's latest video". What they do is embed it in their own user-generated content. Our technology looks at where the Madonna video starts and stops, and then we compare it against the base of the Sony or Warner or whatever library, and we find the exact match and the exact frame. Then, we can go through and flip frame by frame and see if the motion is the same and if it's the same video. That's the key to the technology. Some competitors of ours are using audio, but you find that lots of the content out there has different audio. We're scalable to potentially millions of videos.
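The matching step Heuss describes -- finding where a licensed clip starts and stops inside a longer user-generated video -- can be sketched as a sliding-window search over per-frame fingerprints. The function name, fingerprint values, and threshold below are all hypothetical, used only to illustrate the shape of the problem:

```python
# Hypothetical sketch of the matching step described above: given per-frame
# fingerprints (one integer per frame), locate where a copyrighted clip's
# fingerprint sequence appears inside a longer user-generated video.
# Names and the threshold are illustrative, not Eyealike's implementation.

def hamming(a, b):
    """Number of differing bits between two frame fingerprints."""
    return bin(a ^ b).count("1")

def find_clip(ugc_frames, clip_frames, max_bits_per_frame=4):
    """Slide the clip across the UGC video; return the start frame of the
    best near-exact match, or None if no window is close enough."""
    n, m = len(ugc_frames), len(clip_frames)
    best_start, best_cost = None, float("inf")
    for start in range(n - m + 1):
        cost = sum(hamming(u, c)
                   for u, c in zip(ugc_frames[start:start + m], clip_frames))
        if cost < best_cost:
            best_start, best_cost = start, cost
    if best_start is not None and best_cost <= max_bits_per_frame * m:
        return best_start
    return None

# A user's video with a licensed clip (fingerprints 50, 51, 52) embedded
# between their own footage:
ugc = [7, 9, 50, 51, 52, 3]
clip = [50, 51, 52]
print(find_clip(ugc, clip))  # → 2
```

At production scale a linear scan per clip would be far too slow; indexing fingerprints for lookup is what makes "potentially millions of videos" plausible, though how Eyealike does that is not disclosed here.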
Finally, let's talk a bit about the company's backing. How are you funded and backed?
Greg Heuss: We started in 2003, and spun out of Logicalis, a large, nearly $1 billion IT infrastructure and staffing company located here in Bellevue. Logicalis owns 50 percent of us, plus we have funding from three pretty generous angels. We've raised a total of $1.6M, which is where we are right now. We have a staff of seven, and going forward we will start raising funds from angels and VCs in the first quarter of this year.