I'd say tokenat() is the bottleneck here.
I replicated your example but used a 610k text file of the English dictionary (60,387 words). It was dog slow: I stopped it after 10 seconds and it had only read 1,400 words.
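For anyone wondering why it crawls: my guess is that each tokenat() call re-scans (or re-splits) the whole string from the start, so looping over 60k words touches the 610k file tens of thousands of times. Roughly, in TypeScript terms (hypothetical tokenAt helper, since C2 events can't be pasted as code):

```typescript
// Assumption: each tokenat() call re-splits the entire string.
function tokenAt(text: string, index: number, separator: string): string {
    // Re-splitting the full 610k string on every single call...
    return text.split(separator)[index] ?? "";
}

function loadWordsSlow(dictionary: string): string[] {
    const words: string[] = [];
    const count = dictionary.split("\n").length;
    for (let i = 0; i < count; i++) {
        // ...makes the loop quadratic: ~60k passes over ~610k characters.
        words.push(tokenAt(dictionary, i, "\n"));
    }
    return words;
}
```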
So I modified it to use mid() and find() to progressively extract the words (see the sketch below).
http://dl.dropbox.com/u/5426011/examples13/dict.capx
It read the entire thing in 0.979 seconds.
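The idea is to keep a cursor into the string, find() the next separator, and mid() out just that word, so each character is only visited once. A minimal TypeScript sketch of the approach (the exact events in the capx may differ):

```typescript
// Linear-time parse: one forward pass with a cursor instead of
// re-scanning from the start for every word.
function loadWordsFast(dictionary: string): string[] {
    const words: string[] = [];
    let cursor = 0;
    while (cursor < dictionary.length) {
        // Look for the next newline starting from the cursor.
        let next = dictionary.indexOf("\n", cursor);
        if (next === -1) next = dictionary.length;
        // Equivalent to mid(text, cursor, next - cursor) in C2 terms.
        words.push(dictionary.substring(cursor, next));
        cursor = next + 1; // step past the separator
    }
    return words;
}
```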
Then, for the fastest possible performance, I modified the example above to save the Array to a JSON file, and made another capx that loads it with the load-from-JSON action.
http://dl.dropbox.com/u/5426011/examples13/dict_json.capx
The data file is now 905k but loads in 0.039 seconds.
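In other words, do the expensive string parsing once as an offline step, save the result, and let the game do a single native JSON parse at runtime. A rough sketch of the two steps in TypeScript (placeholder filenames, and plain JSON rather than C2's exact Array JSON format):

```typescript
import { readFileSync, writeFileSync } from "fs";

// One-time tool step: parse the text file and save it as JSON.
// Speed doesn't matter here, so a simple split is fine.
function convertToJson(): void {
    const words = readFileSync("dictionary.txt", "utf8").split("\n");
    writeFileSync("dictionary.json", JSON.stringify(words));
}

// Runtime step: one native JSON parse, no per-word string work at all,
// which is why the bigger 905k file still loads in hundredths of a second.
function loadWordsFromJson(): string[] {
    return JSON.parse(readFileSync("dictionary.json", "utf8")) as string[];
}
```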