It is database dependent and (unless you were interviewing for a DBA job) quite a specialist thing to know. I'm guessing the interviewer was testing whether you knew there was such a thing and that it varies from database to database. Usually, if you are anywhere near row limits, it's time to review your data model!
Rd Dari wrote:Hi all Rancher,
What is the maximum size of a row in sql?
Not only is it database dependent, but the question is kind of vague too. Do they mean the maximum size of data that can be contained in a table row, or the maximum number of columns per table? Do they mean the maximum that can be physically stored in a table row, the maximum that can be handled within SQL (could vary I guess), or the maximum that JDBC can handle? Guess we'll never know!
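Just to illustrate how database (and even build) dependent these limits are, here's a toy sketch that empirically probes one of them, the maximum number of columns per table, by binary-searching over CREATE TABLE attempts. It uses the SQLite that ships with Python purely for convenience; the function name and bounds are made up, and the answer you get depends on how your SQLite was compiled:

```python
import sqlite3

def max_columns(con, hi=100_000):
    """Binary-search the largest column count CREATE TABLE will accept."""
    lo = 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        cols = ", ".join(f"c{i} INTEGER" for i in range(mid))
        try:
            con.execute(f"CREATE TABLE probe ({cols})")
            con.execute("DROP TABLE probe")
            lo = mid  # mid columns worked; try higher
        except sqlite3.OperationalError:
            hi = mid - 1  # "too many columns"; try lower
    return lo

con = sqlite3.connect(":memory:")
print(max_columns(con))  # 2000 on a default SQLite build
```

Run the same kind of probe against two different engines and you'll usually get two different answers, which is really the only safe thing to say in an interview.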
I feel like this is one of those issues where "if you have to ask the limit, you're doing something wrong". Granted, interview questions sometimes test the boundaries of what you know, not what you need to know.
I recall that in Oracle, knowing the table data limit is practical, in part because if you have 2-3 large columns (free text entry such as VARCHAR(2000)) you'll hit it fast. Oracle "encourages" CLOBs at that point, since they don't get stored in the table directly. I've never thought much about the row limit per query, though, since there would probably be bandwidth or timeout issues long before you hit it.
Jeanne Boyarsky wrote:I agree. Knowing the maximum size of a column is reasonable. I have no idea what the maximum number of columns per row is. And if I was ever near that limit, there's too many columns.
Well, I'd agree with you there, except that my current project requires us to deal with precisely that situation. Our users have datasets containing up to 1300 columns of data, and different records may not contain the same 1300 columns either. As you cannot fit 1300 columns into a single table (Oracle, for example, tops out at 1000 columns per table), one option is to split it across two or more tables, which is the approach we've taken for constructing views to output these datasets, for example. But to deal with the unpredictable variability (i.e. each record could have a different set of up to 1300 columns), we've taken a star schema approach, i.e. splitting each record into its constituent elements and storing these individually, then pivoting them back into a wide format as necessary. Lots of fun.
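The split-then-pivot idea above can be sketched in a few lines. This is only an illustration, with made-up table and column names, using SQLite via Python for brevity rather than the poster's actual schema: each record is stored as (record_id, attr, value) rows, and records need not share the same attributes:

```python
import sqlite3

# Tall storage: one row per (record, attribute) pair instead of one wide row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (record_id INTEGER, attr TEXT, value TEXT)")
con.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [(1, "height", "180"), (1, "weight", "75"),
     (2, "height", "165"), (2, "eye_colour", "blue")],
)

def pivot(con):
    """Pivot the tall rows back into wide records; missing cells become None."""
    attrs = [r[0] for r in con.execute(
        "SELECT DISTINCT attr FROM facts ORDER BY attr")]
    rows = {}
    for rid, attr, value in con.execute("SELECT * FROM facts"):
        rows.setdefault(rid, dict.fromkeys(attrs))[attr] = value
    return attrs, rows

attrs, rows = pivot(con)
print(attrs)    # ['eye_colour', 'height', 'weight']
print(rows[2])  # {'eye_colour': 'blue', 'height': '165', 'weight': None}
```

The nice property, as described above, is that the set of "columns" is data rather than schema, so a record with a previously unseen attribute needs no ALTER TABLE.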
Anyway, drifting off topic I guess, but it just goes to show that you can't predict when you'll need to know (or rather, know how to find out) some of this arcane stuff.