Magic Pony’s neural network dreams up new imagery to expand an existing picture
A British startup is using the unique abilities of convolutional neural networks to do a sort of scaled-up version of Adobe’s content-aware fill — but instead of filling in the gaps in a picture, it’s imagining a whole new picture, larger and more detailed than the original. Kind of hard to believe without seeing it, right? That’s why they call their company Magic Pony.
Just emerging from semi-stealth mode (and even then, only barely), Magic Pony Technology’s researchers have trained their system by exposing it to high- and low-resolution versions of images and video, letting it learn the differences between the two. MIT Tech Review was first with the story.
Just as you could supply the probable details of a pixelated face because you are familiar with how faces look, the AI can extrapolate as well, having examined, pixel by pixel, what certain features look like at various levels of detail.
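Magic Pony hasn’t published its training pipeline, but the general recipe described above is straightforward to sketch: take high-resolution crops, downsample them to create paired low-resolution inputs, and train a network to reverse the degradation. The snippet below is a minimal, hypothetical illustration in PyTorch; the 4x scale factor and the simple per-pixel L1 loss are assumptions for illustration, not details from the company.

```python
# Minimal sketch of training on paired low-/high-resolution images.
# Not Magic Pony's actual pipeline; scale factor and loss are assumptions.
import torch
import torch.nn.functional as F

def make_pair(hr_batch: torch.Tensor, scale: int = 4):
    """Create (low-res, high-res) training pairs by downsampling HR crops.

    hr_batch: float tensor of shape (N, 3, H, W), values in [0, 1].
    """
    lr_batch = F.interpolate(hr_batch, scale_factor=1.0 / scale, mode="bicubic")
    return lr_batch.clamp(0, 1), hr_batch

def training_step(model, optimizer, hr_batch, scale: int = 4):
    """One optimization step: predict HR from LR and penalize pixel error."""
    lr_batch, hr_target = make_pair(hr_batch, scale)
    optimizer.zero_grad()
    sr_pred = model(lr_batch)             # model is expected to upscale by `scale`
    loss = F.l1_loss(sr_pred, hr_target)  # simple per-pixel loss (an assumption)
    loss.backward()
    optimizer.step()
    return loss.item()
```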
It can, for instance, upscale blurry images or video intelligently because it “knows” that certain patterns indicate letters, which can be hammered into shape no matter how artifacted they are, while other patterns indicate the hard edges of a face, which can be contoured and sharpened as the system sees fit to bring the image up to snuff.
One highly valuable application is in enhancing poor-quality streaming video on the client side — in real time and with a standard GPU. There are sophisticated filtering systems out there, of course, but this one may outstrip them with superior intelligence.
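For a sense of how client-side upscaling can stay cheap enough for real time on an ordinary GPU, one common trick is to run every convolution at the low input resolution and only rearrange channels into pixels at the final layer (a sub-pixel upscaling step). The toy network below illustrates that idea; it is an assumption for illustration, not Magic Pony’s actual architecture.

```python
# Toy sub-pixel upscaler: all convolutions run at low resolution, and
# nn.PixelShuffle rearranges channels into a higher-resolution image at the end.
# Illustrative only; layer sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 4, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # (N, 3*s*s, H, W) -> (N, 3, s*H, s*W)

    def forward(self, lr):
        return self.shuffle(self.body(lr))

# Usage: upscale a 480x270 frame to 1920x1080.
frame = torch.rand(1, 3, 270, 480)
print(TinyUpscaler(scale=4)(frame).shape)  # torch.Size([1, 3, 1080, 1920])
```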
In addition to enhancing images, the Magic Pony system can improvise new ones. By recognizing not just low-level features like edges but also high-level ones like structure and overall shape, the AI can invent statistically similar images or expand beyond the edges of the original.
Take that brick and mortar wall at top, for instance: It’s clear there are different regions and hard borders between them, with reliable variations in color and texture. The system discovers rules that govern everything from the finest details to larger patterns, and simply makes new imagery that fits within those rules.
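Magic Pony does this with a learned model whose details haven’t been disclosed, but the underlying idea of new imagery that obeys statistics measured from the original can be illustrated with a much cruder, classical patch-matching sketch: grow the image rightwards by repeatedly finding the interior strip that best matches the current right edge and copying the columns that follow it.

```python
# A deliberately crude, non-learned analogue of "making new imagery that fits
# the discovered rules": extend a texture beyond its right edge by copying
# whichever interior strip of the original best matches the current border.
# An illustrative stand-in, not how Magic Pony's network works.
import numpy as np

def extend_right(img: np.ndarray, grow_cols: int, strip: int = 8) -> np.ndarray:
    """Grow an (H, W, 3) image rightwards by `grow_cols` columns."""
    src = np.asarray(img, dtype=np.float64)
    out = src.copy()
    H, W, _ = src.shape
    while out.shape[1] < W + grow_cols:
        edge = out[:, -strip:, :]                 # current right border
        best_err, best_j = np.inf, 0
        for j in range(W - 2 * strip):            # search the original image only
            err = np.mean((src[:, j:j + strip, :] - edge) ** 2)
            if err < best_err:
                best_err, best_j = err, j
        # Append the columns that followed the best-matching strip.
        out = np.concatenate(
            [out, src[:, best_j + strip:best_j + 2 * strip, :]], axis=1
        )
    return out[:, :W + grow_cols, :]
```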
Imagine a game or CG movie where textures like that can be generated dynamically, different for every playthrough or character, beyond what is currently possible — a unique patina on every sword, each building with its own wear pattern, ivy creeping procedurally along buildings. Chances are that humans would still have to ground-truth these images to fine-tune the algorithms, but it’s a powerful way to implement a feature artists and engineers have been pursuing with varying success for years. (Paging John Carmack and Mark Johnson.)
Magic Pony has raised an undisclosed sum in seed funding from a number of angel investors: Chris Mairs, Tom Wright, Xen Mategan and others. It was also part of the Entrepreneur First program in 2015. Co-founder Rob Bishop confirmed that it has a number of early access partners, though he declined to name them. We’ll likely be hearing more from Magic Pony when the details of the neural network — and how it was put together — are presented at CVPR in June.