ISO 8859-1 specifies a one-byte-per-character encoding.
If setting the encoding on the InputStreamReader to UTF-16 allowed you to read the data correctly, then the data is encoded as UTF-16, not as ISO-8859-1. As you said, UTF-16 uses two bytes per character. The numeric values of the characters common to both are the same, but the number of bits used for encoding is different.
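To make that concrete, here is a minimal sketch of reading a file whose bytes were written as UTF-16. The file name "data.txt" is just a placeholder; the point is that the charset name passed to InputStreamReader has to match how the bytes were actually written.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadUtf16 {
    public static void main(String[] args) throws IOException {
        // The charset given here must match the file's real encoding.
        // If the bytes are UTF-16 and you say "ISO-8859-1", every pair of
        // bytes is read as two separate characters instead of one.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("data.txt"), "UTF-16"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}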
Here is why setting the encoding to UTF-8 will return MalformedInputException if the data is not really UTF-8 data. UTF-8 is a variable-width encoding that will encode some (16-bit) Unicode values using 8 bits, some using 16 bits, and some using 24 bits. If you try to read data that is encoded as ISO-8859-1 (or UTF-16) as though it were UTF-8, you will typically get a MalformedInputException if the data includes non-ASCII data (that is, if the data includes bytes with the upper bit set), because any byte with a value of 128-255 will be interpreted as part of the multi-byte representation of a single character. If you look at how UTF-8 encodes characters, you will see why this would occur.
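A small sketch of that failure, using the java.nio.charset API (which has its own java.nio.charset.MalformedInputException, the counterpart of the sun.io class mentioned below). A fresh CharsetDecoder reports malformed input by default, so the decode call throws rather than silently substituting characters:

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class Utf8Mismatch {
    public static void main(String[] args) {
        // 'e' with an acute accent is the single byte 0xE9 in ISO-8859-1
        // (upper bit set), so these bytes are not valid UTF-8.
        byte[] latin1Bytes = "caf\u00E9".getBytes(StandardCharsets.ISO_8859_1);
        try {
            // In UTF-8, 0xE9 announces a multi-byte sequence, but the bytes
            // needed to complete it are not there, so decoding fails.
            StandardCharsets.UTF_8.newDecoder()
                    .decode(ByteBuffer.wrap(latin1Bytes));
        } catch (CharacterCodingException e) {
            System.out.println("Decoding failed: " + e); // MalformedInputException
        }
    }
}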
For a description of UTF-8, see the "What is UTF-8?" section of Markus Kuhn's UTF-8 and Unicode FAQ.
sun.io.MalformedInputException is a subclass of java.io.CharConversionException; it is one of a number of implementation-specific subclasses of java.io.IOException.
Note that it is easy to create UTF-16 files on Windows NT, 2000, and XP, which are Unicode oriented (but still support CP1252), while Windows 95, 98, and, I think, ME were CP1252 oriented. (Or some other language-dependent code page instead of CP1252.) UTF-8 is more frequently used for networking, since it can represent the same characters as UTF-16 but usually uses fewer bytes to do so. UTF-8 is less frequently used in files or RAM, since having a variable number of bytes per character makes random access to the data impractical.
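You can see the size difference directly by encoding the same string both ways. This sketch assumes Java 7+ for StandardCharsets; the byte counts in the comments are what String.getBytes produces for these particular strings.

import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    public static void main(String[] args) {
        String ascii = "plain ASCII text";
        String accented = "caf\u00E9";

        // ASCII-only text: one byte per character in UTF-8,
        // two bytes per character in UTF-16 (plus a possible BOM in files).
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);    // 16
        System.out.println(ascii.getBytes(StandardCharsets.UTF_16LE).length); // 32

        // Non-ASCII characters take two or more bytes in UTF-8,
        // which is why random access by character index is impractical.
        System.out.println(accented.getBytes(StandardCharsets.UTF_8).length);    // 5
        System.out.println(accented.getBytes(StandardCharsets.UTF_16LE).length); // 8
    }
}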