New algorithms developed by Google and MIT engineers make it possible for smartphones to process and retouch photos in real time, before you've even hit the shutter button, so they can fix your shoddy photography before it happens.

Machine learning networks were set to work on a database of 5,000 sample images, each retouched by five professional photographers, teaching the software how to tweak a picture to get it looking its best.

The real innovation here is making the resulting algorithms efficient and fast enough to apply the retouching while you're still framing your selfie, and according to the researchers, that speed of operation opens up a host of potential uses.

"This technology has the potential to be very useful for real-time image enhancement on mobile platforms," says Google's John Barron.

Machine learning has been used to teach computers to process images before, both by Google and others, but the large size of modern smartphone pictures, plus the limited computing power on board, makes getting edits done in real time very challenging.

Photo enhancements applied in real time. Credit: Google/MIT

To get around this, the engineers developed algorithms that could perform the image processing on a low-res version of the picture coming through the camera viewfinder, then scale up the results without the quality getting ruined along the way.

Whenever low-res images get converted into high-res versions, you either see a lot of blocky, pixelated shapes, or the software has to do a large amount of guesswork about how to fill in the detail.

In this case, the researchers got around the problem by outputting the low-res results not as actual images but as formulae that could then be applied to the high-res versions, expressing the changes through mathematics rather than as actual pixels.
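To make that concrete, here's a minimal sketch in Python using NumPy. In the team's published approach the formulae are simple affine colour adjustments, a gain and an offset per channel; the coefficient values below are hypothetical placeholders standing in for what the trained network would actually predict from the low-res frame.

```python
import numpy as np

# A "formula" rather than pixels: an affine colour adjustment,
# out = gain * in + offset. A trained network would predict coefficients
# like these from the low-res frame; the values below are hypothetical
# placeholders, just to show the mechanics.
gain = np.array([1.15, 1.10, 0.95])     # per-channel scale (assumed values)
offset = np.array([0.02, 0.00, -0.01])  # per-channel shift (assumed values)

def apply_formula(image, gain, offset):
    """Evaluate the colour formula on a full-resolution image in [0, 1]."""
    return np.clip(image * gain + offset, 0.0, 1.0)

full_res = np.random.rand(768, 1024, 3)  # stand-in for a camera frame
enhanced = apply_formula(full_res, gain, offset)
# The fine detail comes straight from the full-res pixels; only the
# colour mapping was worked out at low resolution.
```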

Finally, the retouched low-res output is split into a grid, so that every pixel in the final high-res image has four formulae, drawn from the surrounding grid cells, combining to tell it what colour it should be.
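Putting the pieces together, a sketch of that grid step might look like the following. The 16×16 grid of transforms is filled with random stand-ins for what the trained network would predict; every full-res pixel then bilinearly blends the four transforms at the surrounding grid corners, the "four formulae" described above. (As we understand it, the published system also slices the grid by pixel brightness, a detail omitted here for simplicity.)

```python
import numpy as np

def apply_transform_grid(grid_gain, grid_offset, image):
    """Retouch a full-res image with a coarse grid of affine transforms.

    grid_gain, grid_offset: (gy, gx, 3) arrays, one transform per cell,
    which a trained network would predict from the low-res frame.
    Each output pixel bilinearly blends the four nearest grid entries.
    """
    h, w, _ = image.shape
    gy, gx, _ = grid_gain.shape

    # Map every pixel to continuous grid coordinates.
    ys = np.linspace(0.0, gy - 1.0, h)
    xs = np.linspace(0.0, gx - 1.0, w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, gy - 1)
    x1 = np.minimum(x0 + 1, gx - 1)
    fy = (ys - y0)[:, None, None]   # vertical blend weight per row
    fx = (xs - x0)[None, :, None]   # horizontal blend weight per column

    def bilerp(grid):
        # Blend the four surrounding grid cells for every pixel.
        top = grid[y0][:, x0] * (1 - fx) + grid[y0][:, x1] * fx
        bottom = grid[y1][:, x0] * (1 - fx) + grid[y1][:, x1] * fx
        return top * (1 - fy) + bottom * fy

    gain = bilerp(grid_gain)        # per-pixel formula coefficients
    offset = bilerp(grid_offset)
    return np.clip(image * gain + offset, 0.0, 1.0)

# Hypothetical 16x16 grid of transforms (random stand-ins, not trained).
rng = np.random.default_rng(0)
grid_gain = 1.0 + 0.1 * rng.standard_normal((16, 16, 3))
grid_offset = 0.05 * rng.standard_normal((16, 16, 3))
frame = np.random.rand(768, 1024, 3)    # stand-in for a camera frame
result = apply_transform_grid(grid_gain, grid_offset, frame)
```

Because the transforms vary smoothly across the grid, neighbouring pixels get nearly identical formulae, which avoids the blocky seams a naive upscale would produce.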

When compared with a machine learning system that uses full-resolution versions of the photos throughout the process, the new approach uses just a hundredth of the memory.
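For a rough sense of why (our back-of-the-envelope figures, not the researchers'): holding a 12-megapixel RGB frame as 32-bit numbers takes about 12,000,000 × 3 × 4 bytes ≈ 144 MB for every intermediate copy a network produces, whereas a 16×16 grid of colour formulae like the one sketched above (a gain and an offset per channel, six numbers per cell) occupies roughly 16 × 16 × 6 × 4 bytes ≈ 6 KB.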

All of which means the picture you see on your camera screen as you frame your shot can be processed very quickly, even as you move the phone around. The engineers say they can tweak the system to create different styles of shot for different purposes beyond your next Facebook post.

As well as brightening dark spots and balancing contrast, for example, the algorithms could even mimic the style of a particular photographer.

While some phone apps can already put filters on top of pictures before you snap them, these new algorithms aren't just a broad set of rules slapped over what's coming in through the viewfinder – they adapt intelligently as you change where you're pointing your phone.

And it's just a taste of the superpowers our phones are set to get as apps get smarter at recognising what's in shot. With camera lenses in smartphones restricted by the slimness of today's handsets, focussing on image processing algorithms could be the best way to keep improving smartphone photos.

Barron says we can all look forward to "new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience".

The work is due to be presented at the SIGGRAPH digital graphics conference this week.

The researchers also produced a video explaining the new technique, which you can see below: