On reading a text file in Smalltalk
25 Sep 04
(Source: comp.lang.smalltalk, Lex Spoon) If you accept losing one notch of
performance, you can write much clearer code in Smalltalk. The
"file lines" idiom in this thread is very useful, because you can
then use collect:, select:, etc., on the resulting collection of lines.
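For concreteness, here is a minimal sketch of the idiom, assuming a
Squeak-style FileStream; the file name 'data.txt' is hypothetical, and the
exact file API varies by dialect.

    "Read the whole file into a collection of lines
     (sketch; FileStream readOnlyFileNamed: is the Squeak-style entry point)."
    | stream lines |
    stream := FileStream readOnlyFileNamed: 'data.txt'.
    lines := OrderedCollection new.
    [stream atEnd] whileFalse: [lines add: stream nextLine].
    stream close.

    "Ordinary collection protocol now applies."
    lines select: [:line | line notEmpty].
    lines collect: [:line | line asUppercase].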
And it is important to consider that once you commit to, say, iterating
over an entire file, the file must be reasonably small anyway to get
decent performance. The same issue exists with collections. Who cares if
collect: creates an extra collection, or if a WriteStream wastes space at
the end of a long underlying collection? If these concerns are really so
important, then this huge collection probably should not exist and/or you
should not be iterating over the entire thing anyway.
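To make the trade-off concrete, a hedged illustration using the lines
collection from the sketch above: each message below allocates a fresh
intermediate collection, which is harmless at the sizes where whole-file
iteration makes sense in the first place.

    "Clear, chained style; select: and collect: each answer a new collection."
    (lines select: [:line | line notEmpty])
        collect: [:line | line asLowercase]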
To put it very simply: you cannot expect a program to work on
large data structures just because you micro-optimized everywhere. If you
want to handle large data structures, then it takes planning, specialized
algorithms, and test cases. If you are not going to put in that effort, then
don't sweat the small stuff. It is very liberating to code with an
eye towards correctness and algorithmic performance, and not to
worry about driving down the constant factor. It seems to lead to lower
stress, faster code production, and fewer bugs.
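By way of contrast, here is a sketch of what that planning can look like
when a file really is too large to hold in memory: process one line at a
time and never build the lines collection at all. The file name 'big.log'
and the 'ERROR' prefix are hypothetical.

    "Streaming alternative for large files: one line in memory at a time."
    | stream |
    stream := FileStream readOnlyFileNamed: 'big.log'.
    [[stream atEnd] whileFalse:
        [| line |
         line := stream nextLine.
         (line beginsWith: 'ERROR')
             ifTrue: [Transcript show: line; cr]]]
        ensure: [stream close].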