Technically, unless you have a vector display device, every image on your screen is pixellated. But obviously what you want is to enlarge an image without it degenerating into a blur of fat blobs.
Like in the movie "Die Hard", where they "faxed" fingerprints (fax resolution is something like 120 dpi). Or the Red Dwarf episode where they zoomed in on a shop-window reflection to find a raindrop on an automobile, then into another vehicle's side mirror, where it bounces off a facet of a diamond ring in a jewelry store... or something like that. They managed to keep going for about eight levels. Or Captain Picard's "enhance" orders, or lots of crime shows.
Sorry, but there are limits in real life. When you enhance an image, you're taking data that used to be in one pixel and making it fill two or more pixels, since the hardware pixel size on a digital display is fixed and CRTs are dead. That means you need data that didn't exist before, and you have two basic choices: A) repeat the original pixel value, ending up with large colored blocks, or B) interpolate with surrounding pixels, resulting in blurry blocks.
In short, there's no way to magically pull more resolution out of thin air.
On the other hand, if the underlying pixel data is higher resolution than the display it's being presented on, you can just up the scaling factor, since there is real data available that had been compacted down for the smaller display. Overall, scaling works best when it's an exact integer multiple of the device pixels the image is being displayed on.
Having said that, there are multiple interpolation algorithms designed to reduce scaling artefacts. Which one works best for you depends on what types of images you are working with. In some cases you can go even further and do stuff like edge detection to sharpen boundaries and so forth.
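As a flavor of that "go further" step, here's a toy 1-D unsharp-mask sketch in pure Python (illustrative names, clamped to 8-bit values): blur the signal, then add the difference back to exaggerate edges.

```python
def unsharp_1d(signal, amount=1.0):
    # Blur with a simple 3-tap box filter (edges clamped to the array bounds).
    blurred = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, len(signal) - 1)]) / 3
               for i in range(len(signal))]
    # Add the high-frequency difference (original minus blur) back in,
    # clamped to the 0-255 range of an 8-bit channel.
    return [max(0, min(255, round(s + amount * (s - b))))
            for s, b in zip(signal, blurred)]

soft_edge = [0, 0, 50, 100, 100]
print(unsharp_1d(soft_edge))  # [0, 0, 50, 117, 100] -- note the overshoot at the edge
```

The overshoot past 100 is the "sharpening": the boundary looks crisper, but the 117 is synthesized, not recovered, data.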
But since all these techniques are just approximations attempting to synthesize data that doesn't actually exist, be careful that you don't fall into the "Garbage In, Gospel Out" mindset.