We've worked hard to make Construct performance shine, and it's nice to see that reflected in a microbenchmark. Modern JavaScript JITs are also extremely sophisticated, and it's cool to see them staying competitive even with compiled code.
However, I'd caution against taking any performance advice from tests like these. As often noted, you can get significantly different results by making obscure changes to events, but those changes usually only matter in microbenchmarks that hammer a single code path extremely hard. Real games generally don't do that, so porting microbenchmark-level changes into a real game will probably have no effect. So don't look at results like these and conclude "Wow, I should always use conditional expressions instead of conditions". Most likely you'll just needlessly obscure your events for no performance gain.

As ever, the only meaningful approach with a real game is to make measurements, target your optimisations at the things you've actually measured to be slow, and only keep the optimisations that measurably improve things.
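For example, here's a minimal sketch of what "make measurements" can look like if you're using Construct's JavaScript scripting. The function name updateEnemies() is just a hypothetical stand-in for whatever logic you suspect is slow; the point is to time the real code path in your real project before changing anything.

```js
// Minimal sketch of timing a suspected hot path before optimising it.
// Assumes Construct's JavaScript scripting; updateEnemies() is a
// hypothetical stand-in for whatever logic you think might be slow.
function updateEnemies()
{
	// ...the game logic under test...
}

// Time several runs and average, so one-off spikes don't mislead you.
const runs = 100;
const start = performance.now();
for (let i = 0; i < runs; i++)
	updateEnemies();
const avgMs = (performance.now() - start) / runs;
console.log(`updateEnemies: ${avgMs.toFixed(3)} ms per run on average`);
```

If a change doesn't move a number like that (or your browser profiler's), it isn't an optimisation worth keeping.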