How to enlarge a screenshot without pixelating it?

 
Ranch Hand
Posts: 87
Hello and good day, people. That's the question. Thank you very much.
 
Saloon Keeper
Posts: 26892
192
Android Eclipse IDE Tomcat Server Redhat Java Linux
Technically, unless you have a vector display device, all images on it are pixelated. But obviously what you want is to enlarge an image without it degenerating into a blur of fat blobs.

Like in the movie "Die Hard", where they "faxed" fingerprints (fax resolution is something like 120 dpi). Or the Red Dwarf episode where they zoomed in on a shop-window reflection, then on a raindrop on an automobile, then on another vehicle's side mirror where it bounces off a facet of a diamond ring in a jewelry store... or something like that. They managed to keep going for about 8 levels. Or Captain Picard's "enhance" orders, or lots of crime shows.

Sorry, but there are limits in real life. When you enlarge an image, you're taking data that used to be in one pixel and making it fit 2 or more pixels, since the hardware pixel size on a digital display is fixed and CRTs are dead. This means that you need data that didn't exist before, and you have 2 basic choices: A) repeat the original pixel value, ending up with large colored blocks, or B) interpolate with surrounding pixels, resulting in blurry blocks.
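Here's a quick sketch of those two choices using nothing but the JDK's own `java.awt` classes; the 2x2 checkerboard source image and the class/method names are just for illustration:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class UpscaleDemo {

    // Enlarge src by an integer factor using the given interpolation hint.
    static BufferedImage scale(BufferedImage src, int factor, Object hint) {
        int w = src.getWidth() * factor;
        int h = src.getHeight() * factor;
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, hint);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        // A tiny 2x2 black-and-white checkerboard as the "low-res" source.
        BufferedImage src = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0xFFFFFF);
        src.setRGB(1, 1, 0xFFFFFF);

        // Choice A: repeat pixels -- hard-edged colored blocks.
        BufferedImage blocky = scale(src, 8,
                RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);

        // Choice B: interpolate -- smoother gradients, but blurry edges.
        BufferedImage blurry = scale(src, 8,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);

        System.out.println(blocky.getWidth() + "x" + blurry.getHeight());
    }
}
```

Write both results out with `ImageIO.write(...)` and compare: the nearest-neighbor version is sharp but chunky, the bilinear version is smooth but soft. That's the whole trade-off in two lines of hint.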

In short, there's no way to magically pull more resolution out of thin air.

On the other hand, if the underlying pixel data itself is higher resolution than the display it's being presented in, you can just up the scaling factor, as there is data available that had been compacted for the smaller display. Overall, scaling works best when it's an even multiple of the device pixels that the image is being displayed on.

Having said that, there are multiple interpolation algorithms designed to reduce scaling artefacts. Which one works best for you depends on what types of images you are working with. In some cases you can go even further and do stuff like edge detection to sharpen boundaries and so forth.
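As one example of that "go further" step, a basic post-scale sharpen is just a 3x3 convolution, and `java.awt.image.ConvolveOp` is already in the JDK. The kernel weights below are the standard textbook sharpen values, not anything tuned to your particular images:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class SharpenDemo {

    // Apply a basic 3x3 sharpening convolution. The weights sum to 1,
    // so overall brightness is preserved while edge contrast is boosted.
    static BufferedImage sharpen(BufferedImage src) {
        float[] kernel = {
             0f, -1f,  0f,
            -1f,  5f, -1f,
             0f, -1f,  0f
        };
        ConvolveOp op = new ConvolveOp(
                new Kernel(3, 3, kernel), ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        img.setRGB(1, 1, 0x808080); // one gray pixel to give the kernel an edge
        BufferedImage out = sharpen(img);
        System.out.println(out.getWidth() + "x" + out.getHeight());
    }
}
```

Note that sharpening amplifies noise and compression artifacts along with real edges, which circles right back to the "Garbage In, Gospel Out" warning below.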

But since all these techniques are just approximations attempting to synthesize data that doesn't actually exist, be careful that you don't fall into the "Garbage In, Gospel Out" mindset.
 