tulamide's Forum Posts

  • How exactly does it work? I couldn't find any file in the folder I chose at the start, but it works. I have 2 .cap files for both players. Just couldn't figure out how :P. And it saves the position every tick?

    The file that is written to is defined in "Start of layout" as "sety.dat". Yes, it writes to disk on every tick. I wouldn't recommend copying such a method.
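
    For comparison, a minimal sketch in plain Python (outside CC, reusing the "sety.dat" name from the cap) of the gentler approach: only write when the value actually changed, instead of rewriting the file every tick.

    last_saved = None

    def save_position(y, path="sety.dat"):
        global last_saved
        if y != last_saved:               # skip the disk write if nothing changed
            with open(path, "w") as f:
                f.write(str(y))
            last_saved = y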

  • Didn't know where exactly it would make sense to post this, lucid, so I decided to start a thread. Please don't discuss here, just post caps that are reduced to a possible bug related to switching layouts - no complete projects. Add a short description.

    Globals messed up when transitioning:

    Two layouts, switching back and forth, two global objects. Start the cap. It should run. Now toggle actions 1 and 2 of event 2 in both layouts. With a transition active, no globals will appear on layout 2 (a visual problem only) and the layouts only switch once (timer not reset?).

    LayoutBug.cap

  • Oh my, I totally forgot about that sliding. I'm terribly sorry.

    When I was posting, I hadn't yet seen the post you made just a minute earlier. Can you make it run normally by previewing with the debugger instead of a normal preview, or was it the combination of deleting parts of the layout plus debugging?

    And another thought: running v-synced causes the problem while only one preview is running. With two previews, both run v-synced and without problems. Does the driver report a correct refresh rate? Could it be reporting some interlaced rate (e.g. 1920x1080i instead of 1920x1080p)?

    And do exported executables have the same problem?

  • It doesn't necessarily need to be the graphics card that's having problems. If, for whatever reason, the processor is working at capacity, you will also experience lower frame rates and stutter.

    I was thinking (and this is based on nothing, just thinking) that for some reason the three cores might be the trouble here. I'm not sure if it is still the case, but in earlier days 3-core CPUs often were 4- to 6-core CPUs with faulty cores deactivated. Also, I remember the early 2-core CPUs from AMD having problems with apps that were designed for a single core. This shouldn't be a problem anymore, it was resolved a long time ago ... but who knows^^

    Did you look at the task manager and the debugger when testing? Maybe you can see a heavy load when only one core is used (= only one preview runs), but a homogeneous distribution as soon as more than one core is working (= 2 or more previews running)?

    If so, then the CPU might be the problem, although I couldn't provide a solution.

    Anyway, as I said, was just thinking...

  • oh right, I meant because I had done a test with a different png of the same size, and got a much higher framerate. not sure if there's some optimization done for unvaried textures on my card (hd6970), but in the faster test I was using a solid red square of 4096x4096. I even tried lowering the resolution of qwary's test, but it didn't come close to matching my original test

    Hmm, I couldn't reproduce it. I created a 4k .png, solid red, with paint.NET, then recreated the example from scratch: window 1280x720, layout 20000x20000, same events. As a result I got the same 15 fps (and btw it didn't make a difference whether the sine behavior was applied or not, so I guess your cpu is a lot faster. Well, my AMD CPU doesn't have a level 3 cache, so...)

  • I have no more than 4 fps. What am I doing wrong?

    You demand too much of your hardware. Even in "Rage" or "Battlefield" or "Skyrim" and the like, you will never ever see 1000 objects of 4096x4096 with a texture of the same size. That's just not how it works.

    interesting. I'm getting about 15fps with the sine behavior on all of them, and about 30 without it. I have no idea why one png would render slower than another, especially that much slower, but perhaps one of the more knowledgeable folk around here can enlighten us.

    I'm getting 15 fps, too. It's not a matter of png rendering (the textures are in the graphics card's VRAM in a simple RGBA format, no png/jpg/etc anymore), it's a matter of different test settings. A graphics card has much more work to do if the viewport is larger. If you just halve the width of the window from 1280 to 640, you will see the framerate instantly go up. Also, the hardware may differ. While my GTX460 Hawk is relatively powerful, a GT7600 is not. This will also result in different framerates.

    In any case, even 30 fps with a workload that insanely exaggerated proves the point that there's more than enough power if you take the time to learn how to use it. You should never design a game that way, with 1000 4096x4096 sprites on screen; this isn't just in Construct.

    Absolutely correct. And it is a point that has been explained some hundred times before now: if you try to use your hardware in a way it isn't designed for, you won't get what you want.

    As far as 5 shaders and their contribution to slowdown, I don't know much about them, but I suspect the games that look great use more complex shaders, not necessarily more shaders. CC is limited to PS 2.0, perhaps this isn't what you'd want, but I'd suspect most 2D titles, including AAA titles, don't have multiple shaders running at all times.

    That's correct as well. Most games (incl. AAA) will make use of 1 post-process effect that may run at all times. All other effects are placed carefully, and the design of the game will factor in the use of effects so that the amount is always as low as possible.

    Some general information:

    1) Pixel shader

    A pixel shader is an algorithm that runs on the GPU and calculates every pixel it gets passed. Modern cards have shader units that do whatever task is needed: vertex, pixel, even other tasks. The more units you use, the less GPU power is left for all the rest of the work that needs to be done.

    If you use a pixel shader on a 4k texture, then that shader calculates 4096x4096 = 16.7 million pixels per tick. That's 1 billion pixel calculations per second at 60fps. Now do this on 1000 4k textures and you have a trillion pixel calculations per second, just for one pixel shader!
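
    To check those numbers, here is the arithmetic as a few lines of plain Python (nothing CC-specific):

    pixels_per_tick = 4096 * 4096        # one 4k texture: 16,777,216 pixels
    per_second = pixels_per_tick * 60    # at 60 fps: about 1.0 billion
    all_textures = per_second * 1000     # 1000 such textures: about 1.0 trillion
    print(per_second, all_textures)      # 1006632960 1006632960000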

    Of course, that's not the way shaders are used. Instead, in 3D games, low-res copies of a hi-res texture are made. For example, the hero may have a 4k helmet texture. But currently on screen that helmet is seen from a distance, so that it's only a couple of pixels big. Instead of the 4k texture, a 128x128 copy is used and sent to the pixel shader. You don't see the difference as it is so small, but the graphics card now only needs to do roughly 1 million pixel calculations per second instead of 1 billion.

    In CC you should adopt that technique. For example, if you do a color correction on a 4k texture but your game window is only 1280x720, then you are wasting processing power. Instead, attach the shader to a layer. The shader will then calculate the visible 1280x720 pixels and not the full 4k texture.
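
    Both remedies are easy to quantify; a minimal sketch in plain Python, using the 128x128 copy and the 1280x720 window from above:

    full = 4096 * 4096                   # 16,777,216 pixels shaded per tick
    mip = 128 * 128                      # 16,384 pixels for the distant helmet
    layer = 1280 * 720                   # 921,600 pixels when bound to the layer
    print(full // mip, full // layer)    # 1024 18 -> 1024x resp. ~18x less work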

    Shaders should also be deactivated if their effect can't be seen. For example, if two sprites lie one over the other so that the lower one is completely covered, you don't need the pixel shader applied to it.
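
    The geometry test behind that idea, as a minimal sketch in plain Python (the rectangles and the deactivation step are hypothetical; in CC you would toggle the effect with an action):

    def fully_covers(top, bottom):
        # rectangles as (left, top, right, bottom) in layout coordinates
        return (top[0] <= bottom[0] and top[1] <= bottom[1] and
                top[2] >= bottom[2] and top[3] >= bottom[3])

    lower = (100, 100, 356, 356)         # the covered sprite
    upper = (64, 64, 400, 400)           # the opaque sprite drawn above it
    if fully_covers(upper, lower):
        print("lower sprite is hidden - deactivate its shader this tick")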

    I'm sure there are many other optimizations I'm currently not thinking of, but the simple message is: think smart when designing your game.

    2) Texture sizes

    There are many, many cards out there that are still optimized for lower-res textures, like 512 or 1024. They do support 4k textures, but have a hard time with them. Consider that when designing your game. Also, even backgrounds are not just one huge texture but composed of many smaller ones. Have a look at this image: Whispered World

    It is not one huge texture, but at least a dozen smaller ones, carefully layered. You have to adapt your idea to the technique you use. If you are working with 3D-optimized hardware, you need to split your creations into smaller textures and compose the game's graphics from them.
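
    If you want to automate the splitting, here is a minimal sketch, assuming Pillow is installed and "background.png" is your oversized artwork (both hypothetical): it cuts the image into 512x512 tiles that older cards handle comfortably.

    from PIL import Image

    TILE = 512
    img = Image.open("background.png")
    w, h = img.size
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = img.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
            tile.save("tile_%d_%d.png" % (x // TILE, y // TILE))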

  • Count me in. I say March 21st

    And if I hit the exact date I want a "Foreseeing genius"-badge

  • When I see 60 fps in big 3D games with large (high resolution) textures and DX11, but 10-15 fps with DX9 and 5-6 shader effects in 2D Construct applications, it makes me laugh.

    It makes me laugh, too. Why do you think that comparing the almost two times faster DX11 to DX9, and comparing the completely C++-written engines of "big 3D games" with the event system of a 2D game maker, could make any sense? If you need that 3D speed, then write your game in C++ using one of the many C++ engines out there. If you need comfort in 2D, stick to CC. You won't find a faster 2D game maker (Torque2D included).

    I've seen it all. It did not impress me. Too corny.

    What is corny about a realtime calculated mantis movement with smooth movement transitions? I wonder what you expect of a 2D game?

    I'm talking about simplifying the creation of more complex processes. Construct has already taken the first steps in this direction. I mean the idea of visual programming. There's so much that can still be done.

    But simplifying the creation of more complex processes is exactly what I meant. You won't get it unless you use a game maker that is specialized in one type of game (e.g. an RPG maker, or an adventure maker, etc.).

    Visual programming? Like placing your sprite on the layout, loading its animations and putting a movement behavior on it? I think it already is very visual. Or do you hope for some kind of visual scripting? That won't happen, because Ashley decided to go with ACE. But there are a few other engines that support visual scripting. Have a look at UDK or Unity. While they are 3D engines, you can also do 2D by just ignoring the 3rd axis.

    I downloaded both, but decided to stay with CC, just because of the ease of use. But then again, I also don't make games with 1024² textures or a dozen complex pixel shaders. I would really like to know what you use those huge textures for. Until today I haven't seen any 2D game make use of them, so maybe you've come up with a totally new idea that might be worth a 3D engine under C++ and DX11 instead?

  • The current functions are enough to create simple games, but not enough to create serious, structurally complex 2D games.

    I disagree with that. All the features you get with CC are particularly suited for serious, complex structured games: the powerful and fast ACE, the groups, the includable event sheets, the function object, arrays, hash tables, pixel shaders, the fabulous workhorse 's', and hardware-accelerated DirectX 9 graphics that beat any Flash game in terms of speed.

    I will soon post an example cap that features a music sequencer (well, the code for it and the functionality, no fancy graphics) - completely driven by 's' and the function object, and even running autonomously. Yes, it is not a game. But it is highly complex.

    Also, I'd call something like thumbwars (was that the correct title?) a complex structured game, as well as some of the games already posted on these forums. Just open your eyes, you'll find enough evidence to prove CC's capability to do complex structured games.

    Maybe you mean you can't do complex games right out of the box? Then you're right. But there's no tool in this world that can do this for you. You need time, you need to manage everything smartly, you need to fight against your weaker self from time to time - but CC is always with you.

  • Time to add a sixth digit

  • I already mentioned and linked it here, and it went uncommented, so I feared there was no interest in it.

    I'm glad it gets attention at least when posted as a new thread, as it is much too important to go unnoticed.

  • Here is a link about arrays and hash tables, incl. caps demonstrating them:

    http://69.24.73.172/scirra/forum/viewtopic.php?f=8&t=8400

    --------------------------------------------------------------

    Arima, I wonder why you do it in such a complicated way? Without any further info it seems to be unnecessary (you can just save any hash table to file with the same ease as saving an array).

  • If the events posted above are complete, then there's an event/condition/trigger missing that sets the global 'Boss002' to anything other than less than or equal to 0. Which means that as soon as the condition is met, it stays true on every tick.

    One solution could indeed be using 'trigger once' with this event:

    + System: Is global variable 'Boss002' Less or equal 0
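
    Combined, it would read something like this (same notation; I'm assuming the stock 'Trigger once' System condition):

    + System: Is global variable 'Boss002' Less or equal 0
    + System: Trigger once
    -> your actions run once, and again only after the condition has gone false in between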

  • I'd say time doesn't play a role in this scenario. Of the three, INI is the slowest, because it needs to convert strings, but that's a matter of milliseconds per line. As long as the amount of data stays that small, you're free to use whatever pleases you the most.

  • It does detect it and gives it a "picked" status. That is, it is put on the selected objects list ("SOL"), and you access picked objects using SOL.objectname:

    SOL.spr

    http://www.scirra.com/forum/python-refer-to-instance-from-event_topic44898_post281166.html

    Another way would be using "getattr", getattr(SOL, objectname):

    getattr(SOL, "spr").width = self.nW

    http://www.scirra.com/forum/function-created-with-python-needs-check_topic45101_post283709.html
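
    Putting both patterns side by side, a minimal sketch as it would appear in CC's script editor (SOL is provided by CC there; the object name "spr" and the width value are placeholders):

    # direct access, when the object's name is known while writing the script
    SOL.spr.width = 64

    # dynamic access, when the name is only available as a string at runtime
    name = "spr"
    getattr(SOL, name).width = 64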