Once more: you are comparing chalk and cheese. There is absolutely no reason to suppose a 20-year-old JavaScript interpreter will perform as well as a modern JavaScript interpreter that by now has had 20 years and probably a million-plus man-hours invested in making it go faster. In any case, you are really measuring the wrong thing. If you want meaningful information, you should compare how the same interpreter performs for different amounts of data. For example, Illustrator's 20-year-old JavaScript engine:

function test(samples) {
var res = ["RESULTS:"]
var n = 1000
while (samples > 0) {
var t1 = Date.now()
var s = ""
for (var i = 0; i < n; i++) {
s += "a"
}
var t2 = Date.now()
res.push(s.length+"a = "+(t2-t1)+"ms")
n *= 10
samples--
}
alert(res.join("\n"))
}
var samples = 4;
test(samples);
/*
RESULTS:
1000a = 0ms
10000a = 8ms
100000a = 682ms
1000000a = 109171ms
*/

(I only ran 4 samples there; I doubt a 5th would ever finish.)

Compare a modern Node.js:

#!/usr/bin/env node
// Shim alert() for Node, which has no alert() of its own.
function alert(s) { console.log(s); }

function test(samples) {
    ... // same as above
}
var samples = 5;
test(samples);
/*
RESULTS:
1000a = 0ms
10000a = 2ms
100000a = 17ms
1000000a = 173ms
10000000a = 1431ms
*/

(I was able to get a 5th sample under Node, though even it crashed with a memory error when I attempted a 6th.)

If you can't already see the key difference between those two sets of numbers, then go plot them both on paper. It is the shapes of those graphs that are significant. With the old JS engine, the time to complete the task increases quadratically (i.e. it rapidly flies through the roof). With a new JS engine, it increases remarkably linearly: no small achievement on such a pathological piece of code.

What's important is not measuring a script's raw speed, but measuring its algorithmic efficiency. Inefficient algorithms will make even the fastest of computers run like a slug when fed a non-trivial volume of data to process. The modern interpreter appears to recognize the inefficient algorithm being used by your JS code and optimizes under the hood the way in which it evaluates it, radically improving its performance, whereas the old interpreter is a simple, naive implementation that evaluates the inefficient JS code exactly as it's written. (As I noted previously, the modern optimizing interpreter is most likely reducing the number of low-level memory allocations and memory copies it needs to perform to hold these growing strings.)

If you really want to understand this stuff, go read up on Big-O notation and how different algorithms can have radically different performance profiles while performing the same job.

As an automator, though, all of this deep Computer Science stuff may be moot anyway, as your most important question should not be "How fast does my script complete the job?" but "Does my script complete the job significantly faster (and more accurately) than doing the same job by hand?" If your script takes 5 minutes to run but replaces an error-prone manual process that previously took half an hour, who cares if it's not running on the newest, most optimized JavaScript engine? Go make a cup of tea while you're waiting for your computer to finish if you're bored. Or, if it's performing a daily production task, the salary hours it saves you each year should easily pay for a second machine just to run it.

And if your script does take half an hour to run, then either it's doing a helluva lot of valuable production work or you've got a horribly inefficient algorithm running somewhere in it; in which case you need to performance-profile your code under different levels of load to pinpoint where its critical inefficiencies lie.

My first major script took over 10 minutes to run a standard job; the update that replaced it a year later took less than 1 minute. Not because the computer got any faster, but because I spent the year in between learning about speed vs efficiency, common CS algorithms, Big-O notation, and so on. (Funnily enough, its #1 performance problem was caused by the same naive string concatenation algorithm you're demonstrating here. I changed the way in which it assembled long strings as it ran, and it flew.)
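For what it's worth, here's a minimal sketch of that kind of fix (my own illustration, not the original script's code): instead of growing one string append by append, collect the pieces in an array and join them once at the end, so each piece gets copied a fixed number of times rather than recopied on every pass.

// Quadratic under a naive engine: each += may recopy everything built so far.
function buildByConcat(n) {
    var s = "";
    for (var i = 0; i < n; i++) {
        s += "a";
    }
    return s;
}

// Roughly linear: push the pieces, then copy each one exactly once in join().
function buildByJoin(n) {
    var parts = [];
    for (var i = 0; i < n; i++) {
        parts.push("a");
    }
    return parts.join("");
}

Drop the buildByJoin approach into the test() loop above in place of the += loop and even the old engine should stay close to linear, because the algorithm is fixed in the code itself rather than left for a clever interpreter to rescue.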
My advice: if you're teaching yourself, start by going through a high school Computer Science textbook. Once you've grasped the basics, get yourself a copy of Steve McConnell's Code Complete, which you can dip into, reading individual chapters as and when needed. Oh, and remember to factor in all the extra time you've spent learning CS when assessing your final cost-vs-benefit ROI. 'Cos it's a rabbit-hole. :)

Just like algorithms themselves, making code go fast tends to be slower, harder, and more complicated (and error-prone) than making code that goes slow but gets the job done. A good programmer doesn't write code that is fast for fast's sake, but code that is fast enough.