Sam Sylva wrote:edit: So basically what you recommend is that I initially write everything in as straightforward, object-oriented a manner as possible, and then go back later and fix whatever is killing my memory? For example, it would be better to start off encapsulating each record in the data set in an object and maintaining one array that references those objects, rather than creating a set of parallel arrays, one per field, where each index across the arrays corresponds to a record? The parallel arrays would definitely use less memory, since they avoid the overhead of millions of record objects.
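The trade-off Sam describes can be sketched as two layouts of the same data. All names here (Record, ParallelRecords, the fields) are illustrative, not from the thread:

```java
public class Layouts {
    // Object-per-record: clearer OO design, but every record is a separate
    // heap object with its own header overhead.
    static final class Record {
        final int id;
        final double value;
        Record(int id, double value) { this.id = id; this.value = value; }
    }

    // Parallel arrays: one array per field; index i across all arrays
    // represents record i. Only a handful of objects, regardless of row count.
    static final class ParallelRecords {
        final int[] ids;
        final double[] values;
        ParallelRecords(int n) { ids = new int[n]; values = new double[n]; }
    }

    public static void main(String[] args) {
        Record[] objects = new Record[3];
        objects[0] = new Record(1, 3.14);

        ParallelRecords parallel = new ParallelRecords(3);
        parallel.ids[0] = 1;
        parallel.values[0] = 3.14;

        System.out.println(objects[0].id == parallel.ids[0]); // same data, different layout
    }
}
```

The usual advice (write it the clear way first, then flatten to parallel arrays only if profiling shows the object overhead matters) is exactly the refactor Sam is asking about.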
Liutauras Vilda wrote:
Also, since you're worrying about tiny memory/speed efficiency improvements and are already using pre-increment in your "for" loop (not the usual idiom, though if I'm not mistaken I know why), it might be worth thinking about which case of the "if" statement is likely to be satisfied more often (you mentioned you're working with a large amount of data). Based on that assumption, change != to == and swap the return statements if that's the case (this isn't a usual efficiency improvement either, but...).
I have a hunch that someone will criticize me for that.
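The swap Liutauras suggests can be shown with two equivalent comparison methods; the names and the boolean-equality framing here are hypothetical, and in practice modern branch predictors usually make this rearrangement a wash:

```java
public class CompareDemo {
    // Original style: test the presumed-rare case (a mismatch) first.
    static boolean equalsNeq(char[] a, char[] b) {
        for (int i = 0; i < a.length; ++i) {
            if (a[i] != b[i]) return false;
        }
        return true;
    }

    // Suggested style: test the presumed-common case (a match) first,
    // i.e. != becomes == and the exit paths are swapped.
    static boolean equalsEq(char[] a, char[] b) {
        for (int i = 0; i < a.length; ++i) {
            if (a[i] == b[i]) continue;
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        char[] x = {1, 2, 3};
        char[] y = {1, 2, 4};
        // Both methods agree on every input; only the branch layout differs.
        System.out.println(equalsNeq(x, y) == equalsEq(x, y)); // prints "true"
    }
}
```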
Junilu Lacar wrote:Using pre-increment vs post-increment in the for-loop here makes no difference since the increment part is always evaluated after the loop body is executed.
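Junilu's point can be demonstrated directly: the update expression of a "for" loop is evaluated as a statement after each iteration, so its value is discarded and `i++` versus `++i` changes nothing:

```java
public class IncrementDemo {
    public static void main(String[] args) {
        int a = 0, b = 0;
        for (int i = 0; i < 5; i++) a += i;  // post-increment
        for (int i = 0; i < 5; ++i) b += i;  // pre-increment
        // Both loops run i = 0..4 and accumulate 0+1+2+3+4 = 10.
        System.out.println(a == b); // prints "true"
    }
}
```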
Sam Sylva wrote:I've been tasked with processing a large dataset as part of a class assignment. One of the fields is a 24-digit unsigned hex number. I realized that, rather than storing the field verbatim in a char array of length 24, I could store the actual value of the hex number in an array only 6 chars long (I chose char over int because chars are unsigned). To do this, I wrote the following simple class...
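Sam's actual class isn't shown in this excerpt. The arithmetic behind his idea checks out, though: 24 hex digits × 4 bits = 96 bits, and a Java char is an unsigned 16-bit type, so 6 chars hold exactly 96 bits, 4 hex digits per char. A minimal sketch of that packing (class and method names are my own, not his) might look like:

```java
public class PackedHex {
    // Pack a 24-digit hex string into 6 chars, 4 hex digits (16 bits) per char.
    static char[] pack(String hex24) {
        if (hex24.length() != 24) throw new IllegalArgumentException("need 24 hex digits");
        char[] packed = new char[6];
        for (int i = 0; i < 6; ++i) {
            // Each group of 4 hex digits (0x0000..0xffff) fits in one char.
            packed[i] = (char) Integer.parseInt(hex24.substring(i * 4, i * 4 + 4), 16);
        }
        return packed;
    }

    // Reverse the packing back into a lowercase 24-digit hex string.
    static String unpack(char[] packed) {
        StringBuilder sb = new StringBuilder(24);
        for (char c : packed) {
            sb.append(String.format("%04x", (int) c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String original = "00ff00ff00ff00ff00ff00ff";
        System.out.println(unpack(pack(original)).equals(original)); // prints "true"
    }
}
```

This stores the value in a quarter of the space of the verbatim 24-char representation (12 bytes of payload versus 48), at the cost of a conversion step on the way in and out.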