Procedural generation: number of arrays slowing down the system

  • Maybe I'm going about this the wrong way with Construct 2. What I'm trying to do is procedurally generate a world. Starting with an imported heightmap, I use the canvas plugin to pull the color data I need into an array the same size as the image. Using that, I then go through and generate a second array for each cell in the first array, and that's where the system starts to slow down. Testing this process, a 100x100 image gives a master array of 10,000 cells, which means I have to generate 10,000 smaller arrays. That works fine as long as those arrays are 25x25 cells or smaller; anything above that and the system eventually grinds to a halt. If I'm not mistaken, it's the total number of items that is slowing the system down.

    Does anybody have any tips or ideas on how to handle generating larger worlds with Construct 2 in this fashion? Am I going about this the wrong way? (There's a rough sketch of the layout I'm describing just below.)
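
    To make that concrete, here's a rough JavaScript sketch of the layout I'm describing; the names (masterSize, chunkSize, heightAt) are just placeholders rather than my actual project code, and the heightmap read is stubbed out:

        // Illustrative sketch of the nested-array setup: one master cell per
        // heightmap pixel, each expanded into its own chunk of tiles.
        var masterSize = 100; // heightmap is 100x100 pixels
        var chunkSize = 25;   // each master cell expands into chunkSize x chunkSize tiles

        // Stand-in for reading one pixel's height via the canvas plugin.
        function heightAt(x, y) {
            return 0;
        }

        var master = [];
        for (var y = 0; y < masterSize; y++) {
            master[y] = [];
            for (var x = 0; x < masterSize; x++) {
                var h = heightAt(x, y);
                // One sub-array per master cell -- 10,000 of them in total.
                var chunk = [];
                for (var cy = 0; cy < chunkSize; cy++) {
                    chunk.push(new Array(chunkSize).fill(h));
                }
                master[y][x] = chunk;
            }
        }

        // 100x100 cells x 25x25 tiles = 6,250,000 values held in memory at once;
        // at the target of 100x100 tiles per cell that grows to 100,000,000 values.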

  • Create a plugin to generate the data instead of using events. A plugin lets you run some JavaScript, which will be faster than doing it through events.

  • Wouldn't the plugin end up generating the same number of arrays, and so end up in the same situation, just faster? To be honest, plugin creation is not something I have really looked into before and my JS abilities are akin to writing with my left hand, so I may be interpreting this wrong.

  • The thing about loops is that they have to finish before the next event can happen.

    That is why it gets bogged down.

  • Why do you need a 25x25 array for each cell of the first array? That's 625 values for a single pixel of your heightmap.

    And do you really need to keep all this data in memory at once? Can't you process one pixel at a time and re-use the same 25x25 array?

  • Newt - As far as I can tell it isn't getting bogged down in the loop. I have it set up so it iterates in portions rather than all at once, so I can give visual feedback on the progress instead of waiting and wondering if it's working. That part is working fine. The issue is that it starts off quickly and then gets slower and slower with every loop as it proceeds.

    dop2000 - 25x25 is just where it starts to falter; I'm really aiming for 100x100. Think of each pixel of the heightmap as a chunk of the map. The idea is to take every chunk and expand it into a second array so that the chunk is in reality 100x100 tiles. I do realize this is an absurd amount of data. As for reusing the array, I could, but wouldn't that wind up losing the processed data? I suppose I could dump it to JSON after every single cell expansion of the main map. That would be a lot of files being created, however.

  • It doesn't sound like an absurd amount of data. Are you sure reading each pixel of the 100x100 image isn't the issue? That's a slow action.

    JavaScript can be much faster, but you lose the ease of interacting with the rest of C2, so I wouldn't recommend it. Using just JavaScript with some other game library and not using C2 at all would be simpler, I think.

    Anyway, if doing everything all at once locks up the browser, you could redo your logic to do the generation over multiple ticks instead of all at once.

    So instead of this:

        start of layout
        array: for each xy
        --- do something

    you could do something like this:

        global number y=0

        y < array.height
        --- for "x" from 0 to array.width-1
        ------ do something
        --- add 1 to y
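
    If it helps to see the same idea outside the event sheet, here's a minimal JavaScript sketch of spreading the work over ticks; processCell and the use of requestAnimationFrame as the "tick" are just stand-ins, not anything from your project:

        var width = 100;
        var height = 100;
        var y = 0; // persists between ticks, like the global "y" above

        function processCell(x, y) {
            // ...generate the data for one cell...
        }

        function tick() {
            if (y < height) {
                // One row per tick, then yield so the browser can redraw
                // (e.g. to update a progress bar).
                for (var x = 0; x < width; x++) {
                    processCell(x, y);
                }
                y += 1;
                requestAnimationFrame(tick);
            }
        }

        requestAnimationFrame(tick);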

  • Yup, that's what I've currently got it doing, so I can give visual feedback on where it is in the process. The problem is the accrued slowdown (I think due to the number of arrays being generated). It will generate the first three or four thousand (the first 30-40 lines of the image, running from y 0 towards the image height), but after that it gets progressively slower until it just stops around row 47.

  • Upon further poking around, it isn't the number of arrays but their size: it's eating up tons of RAM (which is to be expected with them all sitting around in memory). I suppose the only way to get around this would be to save each array to JSON after it's generated. EDIT: After experimenting with saving through the JSON format, there's the issue of it prompting on every file save. I don't think anyone wants to click 10,000 times.

  • Sounds like a logic issue. It shouldn't get progressively slower and use more and more RAM. It should stay constant.

    100x100x25x25 is nothing.

  • Here's an image of the event list (7 events): imgur.com/a/uNMB4. Disabling the locator line does not speed things up. The 'arraySubChunks' array is set to 100x100. The layout will finish fine with sizes ranging from 1x1 to around 30x30.

  • Um, reading image data takes a tick fwiw.

  • Yes, I understand that. The image data is not slowing anything down any more than I expected it to; I can populate the master array (the one the size of the image) with the data I need just fine. The problem occurs when creating an array for every pixel on the map. Without that step, even with reading the image data, it flies through everything and pumps it into the master array just fine.

    EDIT: Just tried creating an additional switch state that fires once the initial master array population is finished and creates the secondary arrays there, just to eliminate any possibility that the image reading was causing the hang-up. It still causes the same issue.

  • 100x100 x 30x30 x 8 bytes is 72 MB.

    100x100 x 100x100 x 8 bytes is 800 MB (see the quick check below).

    Saving it to disk as JSON will be quite a bit larger, since each number no longer takes only 8 bytes but usually more, depending on the number.

    But idk, you can always generate less than a row of an array at a time. Still, I only have a vague idea of what's going on. Good luck with the project.
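
    For a quick check of those figures, assuming roughly 8 bytes per stored number (a plain JavaScript double):

        // Rough memory estimate: master cells x tiles per cell x bytes per number.
        function estimateMB(masterW, masterH, chunkW, chunkH) {
            var bytesPerValue = 8; // assumes numbers stored as doubles
            return (masterW * masterH * chunkW * chunkH * bytesPerValue) / 1e6;
        }

        console.log(estimateMB(100, 100, 30, 30));   // 72  -> about 72 MB
        console.log(estimateMB(100, 100, 100, 100)); // 800 -> about 800 MB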

  • gskunk - Still, why do you need to hold all 10,000 chunks of the map in full detail in memory at all times?

    Take the Google Maps approach - it loads a portion of the map only when you zoom into it.

    You can load/generate a block of 3x3 chunks to allow players to move freely and load new chunks once they move to a new area.

    Also, how do you populate that second array? If it's randomly generated, you should use seeded random. This will allow you to store just one number in the first array (the seed), and you will not need the second array at all! You will be able to recreate the data in each chunk of the map at any time from the seed.

    And if your player makes some changes on the map (kills enemies, opens chests etc.) and you need to keep track of these changes, maybe consider creating another array just for that. It could be something as simple as [tile, newdata].

    This will save you hundreds of megabytes of memory.

    EDIT: You can go even further and get rid of the first array too!

    Compress your entire 100-million-tile map into a single number - MasterSeed.

    Use MasterSeed and a pixel from your heightmap image to generate MapChunkSeed, then use MapChunkSeed to generate a 100x100-tile portion of the map. You should probably be able to do this in real time with very little or no lag. (A rough sketch of this is below.)
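
    Here's a rough JavaScript sketch of that idea. mulberry32 is one common tiny seeded PRNG, and the particular way MapChunkSeed is mixed from MasterSeed, the chunk position, and the heightmap pixel is just an arbitrary choice for illustration:

        // Small seeded PRNG (mulberry32): same seed in, same sequence out.
        function mulberry32(a) {
            return function () {
                var t = (a += 0x6D2B79F5);
                t = Math.imul(t ^ (t >>> 15), t | 1);
                t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
                return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
            };
        }

        var MasterSeed = 123456789; // the single number the whole map is "compressed" into

        // Derive a deterministic per-chunk seed from the master seed, the chunk's
        // position and its heightmap pixel value (the constants are arbitrary mixers).
        function mapChunkSeed(chunkX, chunkY, pixelHeight) {
            return (MasterSeed ^ Math.imul(chunkX, 73856093) ^ Math.imul(chunkY, 19349663) ^ pixelHeight) >>> 0;
        }

        // Regenerate a 100x100 chunk of tiles on demand -- identical every time for
        // the same inputs, so nothing has to stay in memory once the player leaves.
        function generateChunk(chunkX, chunkY, pixelHeight) {
            var rand = mulberry32(mapChunkSeed(chunkX, chunkY, pixelHeight));
            var tiles = [];
            for (var ty = 0; ty < 100; ty++) {
                tiles[ty] = [];
                for (var tx = 0; tx < 100; tx++) {
                    tiles[ty][tx] = Math.floor(rand() * 4); // e.g. pick one of 4 tile types
                }
            }
            return tiles;
        }

        // Player-made changes (opened chests, killed enemies, ...) would live in a
        // small separate list, e.g. { chunkX, chunkY, tileX, tileY, newData }.

    The only things that would ever need saving are MasterSeed and that small list of player-made changes.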
