Any difference to Prisma or Visionn or any of the others? Seems a lot of apps are doing the same thing
Is the addition of stickers the big differentiator?
@bentossell it's so sad that it has come to this. In what is supposed to be the best place in the world for innovation to flourish, this is what we end up with.
300 clones of Prisma no one ever asked for.
So many problems could be solved with computer vision, AI, and deep learning.
Especially when you dive into the complexities of thought vectors.
Wake up, Silicon Valley... wake up!
@bentossell Hi Ben. Sorry for the delay! I was traveling. Philm is the first app that does real-time neural style transfer. On my iPhone 6S it runs at 30 FPS, and our next version will be even faster, reaching 40 FPS. It runs a 100+ layer neural network in real time, directly on the phone's GPU.
Re: the difference. Visionn is great, but as far as I can tell it is not based on a neural network and does not do style transfer. (It's also not free :-) Prisma is not really real time: on my iPhone 6S, Prisma takes about a minute to process a 15-second video, so the effective FPS is more like 7. Another popular app, Artisto, is completely cloud-based, so it's not real-time either.
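For anyone curious where that "more like 7" figure comes from, here is the back-of-envelope arithmetic (my own calculation; the 30 FPS source frame rate is an assumption about a typical phone recording, not a published Prisma number):

```python
# Back-of-envelope effective-FPS calculation for offline video processing.
clip_seconds = 15        # length of the test clip
source_fps = 30          # assumed frame rate of a typical phone recording
processing_seconds = 60  # observed Prisma processing time on an iPhone 6S

total_frames = clip_seconds * source_fps           # 450 frames
effective_fps = total_frames / processing_seconds  # 7.5 frames per second
print(round(effective_fps, 1))  # 7.5
```

So even under a generous reading, the offline throughput is roughly a quarter of the 30 FPS needed for true real-time preview.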
We believe Philm is also faster than FB's not-yet-released Caffe2go (which appears to be CPU-based and, according to press reports, achieves only 20 FPS for style transfer on an iPhone 6S). Judging from the style-transfer demo video Mark at FB posted (presumably based on Caffe2go), I also suspect they use a relatively small neural network, because the style-transfer effect is somewhat weak.
@bentossell In terms of underlying technology, Philm's ability to run large neural networks on a mobile phone's GPU at high speed stems from its combination of deep learning with information theory and compressive sensing, areas I researched extensively as a CS professor at UT Austin. See my home page there: http://www.cs.utexas.edu/~yzhang/
@nicholassheriff @bentossell Dear Nicholas, unlike the 300 clones of Prisma, Philm represents a technology breakthrough in real-time mobile deep learning. Until now, the power of deep learning has been largely confined to the cloud. We believe new technology like Philm (and FB's Caffe2go) represents an important step toward fully unleashing the power of deep learning and bringing it to every mobile device. In particular, being able to run large neural networks truly in real time (i.e., 30 FPS) will likely enable a range of AI-driven, real-time mobile applications.
@bentossell re: stickers. The motion-tracking stickers are themselves not a big deal. However, the stickers are rendered in the same artistic style that you selected, and after adding a sticker you see the rendered effect right away, with no waiting. Such WYSIWYG video editing is only possible with real-time neural style transfer; it would be intolerable to wait tens of seconds after adding each sticker.
Besides, a lot of users find neural style + motion-tracking stickers a rather fun and expressive combination -- it gives a comic-book-like feeling. They particularly like the tracking capability: you only need to think about which sticker/emoji/text to add, instead of having to tweak its position frame by frame.
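To illustrate why the stickers come out in the chosen artistic style, here is a minimal per-frame pipeline sketch. This is my own illustration, not Philm's actual code: `track_sticker`, `composite`, and `stylize` are hypothetical stand-ins for the motion tracker, the overlay compositor, and the neural style network, operating on toy 2D lists instead of GPU textures.

```python
# Sketch of a WYSIWYG styled-sticker pipeline (illustration only).
# Key design point: composite the sticker BEFORE stylization, so the
# style network renders the sticker in the same artistic style.

def track_sticker(prev_pos, frame):
    """Hypothetical motion tracker: returns the sticker's position in
    this frame. Here it just drifts one pixel right per frame."""
    return (prev_pos[0], prev_pos[1] + 1)

def composite(frame, sticker_value, pos):
    """Overlay a single-pixel 'sticker' onto a copy of the frame."""
    out = [row[:] for row in frame]
    r, c = pos
    out[r][c] = sticker_value
    return out

def stylize(frame):
    """Stand-in for the real-time style network (here: invert values)."""
    return [[255 - px for px in row] for row in frame]

def render_clip(frames, sticker_value, start_pos):
    pos = start_pos
    styled = []
    for frame in frames:
        pos = track_sticker(pos, frame)                      # track per frame
        styled.append(stylize(composite(frame, sticker_value, pos)))
    return styled

frames = [[[0, 0, 0], [0, 0, 0]] for _ in range(2)]  # two tiny 2x3 "frames"
out = render_clip(frames, sticker_value=255, start_pos=(0, -1))
print(out[0][0])  # sticker pixel passed through the same "style" as the frame
```

Because the sticker is composited before `stylize` runs, it is transformed by the exact same network pass as the background, which is what makes the combined look coherent and instantly previewable.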
@frassmith I would suggest giving Philm a try. Philm is the first app that does real-time neural style transfer; it achieves 30 FPS on an iPhone 6S. If you've tried Prisma, you know how much delay it involves: on my iPhone 6S, Prisma takes about a minute to process a 15-second video.