Fimbul's Forum Posts

  • Except, XP is on almost 30% of all computers which use the internet daily. 30% is a massive factor, regardless of how old the system is. And I suspect it'll continue to be a massive factor until either Microsoft replaces its village idiot, or Chromebooks are upgraded to become truly PC (instead of PC lite).

    30% is the same share IE6 had of the browser market until March 2008. Steam stats say XP usage is around 10%, and Steam's demographic is the same as the one you're probably aiming for. XP usage stats are also massively inflated by China, so unless you're targeting Chinese casual gamers during their working hours (if so, that's dumb), XP is as good as gone.

    Coincidentally, Construct 2 was launched officially in June 2011, and back then IE6 and IE7 had a combined 9.5% share.

    And as far as your 'opinion' carrying 'weight', if you're so keen to belittle such a large demographic, then no, your opinion doesn't carry any weight when it comes to commerciality.

    Since when am I belittling anyone? I'm dissenting - two totally different things. Unlike some of the people in the forum, I couldn't care less about mobile. I get that people make money with mobile games, and I get that mobile is what's "hot" right now, so I don't mind, however:

    If you guys want to pressure Ashley to focus on mobile exporters, then I am within my rights as a buyer to pressure Ashley to NOT focus on mobile and keep his current strategy of a pure-HTML5 product. When this product started (and when I purchased my license) it was all about the desktop, so when business decisions start impacting the quality for me (and make no mistake, if Ashley were to focus on native exporters, the desktop side would suffer), I have to speak.

    Because Internet Explorer 6 doesn't have the features necessary to run HTML5 games. XP does, and until recently Chrome supported hardware acceleration on it by default. That's more users who can purchase your game. It's that simple. Besides, Ashley already said he's going to make node-webkit have hardware acceleration all the time.

    I agree, this discussion on XP is basically a moot point due to node-webkit. Still, I would expect HTML5 developers, by definition on the bleeding edge of tech, not to care about outdated/unsupported platforms.

    Again, I disagree. The state of MMF should not be used to assume what the state of C2 would be if they made native exporters. C2 already is vastly better than MMF, is updated way, way faster, works better and has most features that games need, and Scirra working on native exporters would not send C2 back to MMF levels of problems, nor would it entirely stop progress on the IDE if they hired someone to work on them concurrently.

    Why shouldn't the state of MMF be used as a baseline for comparison? You're well aware that Ashley, or as he was known back then, Tigerworks, made Construct due to dissatisfaction with Clickteam's solutions. This is in part due to Ashley being a better developer, but the vast majority of it is due to learning from Clickteam's mistakes! Let's explore some of those:

    • Hardware acceleration support from the get-go. While the infrastructure (read: WebGL) wasn't there yet, the scaffolding was present long before.
    • Open formats and technologies.
    • Event sheet organization is a priority - even official plugins should adhere to standards and not be deeply dependent on engine internals (this is an area where I think C2 has been failing a bit recently, with the new crop of official plugins requiring architectural changes).
    • Open discussion of competitors in the forums.
    • Short release cycle.

    And now, most recently, we can add "not wasting time developing native exporters" to that list. It's obvious that native exporters would not send C2 back to MMF levels of problems, but they would definitely slow things down. Ashley said it himself: the work involved in maintaining separate codebases grows exponentially!

    Hiring someone new doesn't change anything: that's one more person drawing money from Scirra, and that same person could be developing something else instead. It's as if you all think the editor is already perfect when it's far from it! We need more modularity, we need a way to code server-side in Construct (especially now with multiplayer), the tilemap needs way more features (where's isometric support? why can't tiles have attributes? why do we need a separate tilemap for a collision layer? how do we make multi-tile entities without resorting to third-party tilemap plugins?), we still don't have full layout-by-layout loading, the list goes on and on...

    You can't just call a platform with over 700 million devices sold overpriced hype. Even if not all of them are currently in use, a quick Google search implies more than half still are. Even if you personally don't like it, lots of other people do, and it has been a huge moneymaker for some.

    Actually I can, and I just did. My personal grievances with Apple aside, the point is that you're always reliant on third parties: I could go reductio ad absurdum (as I did with my IE6 example) and suggest that Scirra become a mobile device manufacturer and OS vendor as well to minimize reliance on external factors. That would obviously be outlandish - my argument is that making native exporters might not be totally inviable, but it's still unreasonable. This phenomenon even has a name: Not Invented Here. Why do you think Ashley's solutions would be better than existing solutions?

    but Apple doesn't want to for some reason that they apparently don't want to disclose (or at least in my search for answers I don't recall finding an official reason)

    And there's that "third parties are unreliable" issue again. Back when Clickteam was developing their Android exporter, they ran into a bug in the official Android SDK that prevented them from continuing. Chrome for Windows 8 on ARM still has many of its capabilities artificially limited. JIT compilation on iOS is inaccessible to third parties - Apple's own software can use it, though, so it's not an engineering or technological problem.

    This goes to show that even when making native exporters, the list of possible problems with third parties is huge! Sticking to HTML5 has its cons, but at least there you know the issues will eventually be fixed, since you have giant players throwing their weight behind HTML5's success.

    Which company was that? Regardless, one company's failure does not mean another company will not succeed. There are quite a few frameworks that export native just fine.

    ...? The company is Clickteam. But we can also talk GameSalad, GameMaker, and many others in the scene. The reason I chose C2 over them is that I think C2 is a better product, and that is due to not wasting time on exporters. Ashley keeps mentioning what a waste it would be to develop native exporters only for HTML5 to be good enough by then, but can you imagine the waste it would be if we had a native BlackBerry exporter? Symbian? Tizen? XNA? Ouya? Windows Mobile? Ubuntu Touch? Palm OS? Bada? Hindsight is always 20/20, but who could've predicted the huge success of the Wii? Or the huge failure of the Wii U? Or the sudden appearance of the iPhone? BlackBerry was once a huge company, so it's not like you're safe by sticking with what's established today.

    It's really not. It's the result of years of work and quite a lot of code.

    Keep in mind I'm talking about the IDE only, not the engine powering the games or the exporter. The most complicated parts are the event editor, the image editor, and the saving/loading of it all into XML. Sounds like, at most, a few months' work to convert, though only Ashley can say with any certainty.

  • Aphrodite: yes, I'm suggesting Scirra stick to their current business plan - which means sticking to HTML5. The entire tech industry has been moving toward the web for quite some time now, and with HTML5 this has only accelerated.

    Most people working with computers either use the browser as their only software or could do so if they desired - the exceptions are people like 3D modelers, VFX artists, graphic designers and other specialized roles like database admins and the sort. Even programmers are switching to JavaScript-based IDEs like Brackets. We already have quasi-HTML5-based OSes (Chrome OS), HTML5 smartphones (Firefox OS), and most of the software industry is investing almost exclusively in web tech for new apps.

    In my opinion, the only things Ashley needs to change are:

    • Develop a command-line exporter
    • Create a new editor using web tech - this is not a website or a "cloud editor", but normal software just like the one we use now, except written in JavaScript instead of C++. The current editor is quite simple, so it's not like it would be difficult to port. It would even make development faster, since you could share code between the runtime and the IDE.
    • Open it up for developers to create IDE plugins (say, an IDE SDK) - things like dialogue tree editors, AI editors and finite state machine creators would all suddenly become possible with ease. Currently those kinds of tools can exist, but they'd rely on modifying the XML, which could cause corruption. Examples of the kinds of things possible are the Spriter and Tilemap plugins, which only exist because Scirra supports them officially.
  • Ugh... I'm sorry guys, I was lurking around watching this discussion but I have to chime in - Ashley is a veteran of the industry, he was already a veteran back when I started, and I have 14 years of experience with this sort of dev tool, so I like to think my opinion carries some weight.

    Windows XP is too old to even be considered! Why don't you go ahead and ask Ashley to support Internet Explorer 6 as well?

    Windows XP is 13 years old! When XP was launched, not even Windows 3.1 was that old! That would be equivalent to booting up your XP machine for the first time and hearing someone extol the virtues of Windows 2.0! I'm sorry, Windows XP isn't even a factor anymore. And that's not even counting the woes of its 64-bit version, since the 32-bit one can only address around 3 GB of RAM.

    ... we don't need an entire browser engine - we just need what games need...

    A gutted version of node-webkit would be great - there are tons of things we don't care about that could be removed in order to reduce the overhead.

    As for the issue about third party plugins - as much as I appreciate all the work put into them, even with C2 as it is now, the vast majority of games could be made without them. If C2 was feature complete, then even more so.

    I think the fact that third-party plugins aren't necessary for the vast majority of projects says more about the lack of power in the SDK than it does about the power of Construct. Many of the advanced plugins, such as Spriter, Tilemap or the upcoming Multiplayer, have advanced hooks into the IDE that we normal developers just can't access, which means plugins requiring advanced or intensive configuration are right out, unless you're willing to edit XML files (thank Ashley for having the foresight to make C2 use an open format, or even that would be impossible).

    By hiring someone to work on native exporters concurrently, it would reduce the impact the extra work would have on C2 itself. Tomsstudio is showing that it's possible.

    Tomsstudio is showing nothing. I don't say this because I hate him or anything - if he manages to pull it off, power to him! However, making an exporter is a gigantic undertaking even WITH support from the developers and a proper exporter SDK - just ask the creator of the Anaconda exporter over at Clickteam (and that exporter is based on a much less feature-complete product).

    Working on native exporters is a trap - just look at the state of Multimedia Fusion 2.5. It's practically a joke: after all those years they release a meager update with either nothing new (support for skinning), features it should have had from the start (UTF-8) or, at best, features the competition has had for nearly a decade (zooming/rotating the layout).

    The point of asking for native exporters wasn't just about solving performance problems, though that was a big part of it - it was also about other things like the issue of no memory management on iOS and about you being able to directly control the quality of C2's own exports.

    iOS, like everything else Apple makes, is overpriced hype. They should be the ones fixing their memory management and enabling WebGL/JavaScript compilation for third parties - we moan at Ludei for not fixing CocoonJS, but who's knocking on Apple's door telling them to get their act together?

    What it really boils down to is that too much is based on hope. I'm just tired of relying on hope for the quality of exporters to improve, and worse, once we get them, having to hope that a quality exporter isn't later screwed up in some way.

    [...]

    I'm afraid I'm still on the native bandwagon.

    [...]

    I think it is more important for your long term benefit to make native exporters

    And what's the alternative? Making native exporters? Well, let me tell you a story about a similar company that went this route - an established game-making company with many employees:

    • First, they release a Java/J2ME runtime. It is broken, and many features are missing or inconsistent. They continue working on said exporter, but nearly no one uses it. The Java/J2ME forum section stands nearly empty. A few years later, smartphones arrive on the scene and J2ME quietly dies. The Java runtime is useless for Windows because the traditional exporter is way better, and it's also useless for Mac/Linux because it supports so few features and the JVM is so bloated that performance is dismal. No hardware acceleration either. The SDK for making third-party plugins arrives so late it's not even worth considering.
    • Then, they begin work on an XNA exporter. A few months after the launch, Microsoft announces XNA is being discontinued. Almost no games are made for the Xbox using this exporter, though some prototypes are attempted. The SDK for making third-party plugins for this exporter is never released.
    • So they work on a Flash exporter. Well, this one actually works, except Flash is dying, as we all know. It also has poor support, and all third-party plugins have to be remade.
    • iOS is here, so better make an iPhone exporter, right? Cue years going by, and the final exporter performs poorly and offers little support. SDK arrives late as usual.
    • What's this Android thing? You would think that since a Java exporter was already available, an Android exporter would be a piece of cake - and you would be wrong. The Android version arrives many, many years after the big Android boom. As usual, poor compatibility and no SDK.
    • Now they're working on an HTML5 exporter. Want to bet where that will take them?
  • This is probably something that is not implemented yet.

    Meanwhile, you have two options:

    Either modify the AJAX plugin and add options for modifying headers (shouldn't be that hard to do, but it requires you to learn a bit of the SDK),

    Or set up a server-side request proxy that takes your request, modifies the headers, and forwards it to the destination - you'll need to know a server-side language (such as PHP) for that.
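    If you go the proxy route, the idea is roughly the following - a minimal sketch in Node.js rather than PHP, with a made-up endpoint, port and header name:

    ```js
    // Minimal header-rewriting proxy (sketch, not production code).
    // The C2 AJAX object would request http://yourserver:8080/?url=<target>,
    // and this forwards the request with whatever extra headers you need.
    const http = require('http');
    const https = require('https');
    const { URL } = require('url');

    http.createServer((req, res) => {
      const target = new URL(req.url, 'http://localhost').searchParams.get('url');
      if (!target) { res.writeHead(400); return res.end('missing url'); }

      const lib = target.startsWith('https') ? https : http;
      lib.get(target, { headers: { 'X-My-Custom-Header': 'value' } }, upstream => {
        // Let the browser (and thus the C2 AJAX object) read the response
        res.writeHead(upstream.statusCode, { 'Access-Control-Allow-Origin': '*' });
        upstream.pipe(res);
      }).on('error', () => { res.writeHead(502); res.end(); });
    }).listen(8080);
    ```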

    I recommend the first option.

  • You might have more luck in the correct subforum:

    Help Wanted

  • What if I don't want to animate my particle, but still want the ability to load its single image from a URL?

    You wanted a use case:

    Say you have a platformer where you can have a floating companion which emits particles as it floats around. You could unlock new particles for that companion without having to regenerate the game's spritesheet - super useful for when you want a special "Christmas edition" or similar.

    Ashley - why don't all objects that display an image have a "Load image from URL" action? Sounds like a nice case for standardizing, as that same use case could apply to everything.

  • > LennardF1989

    >

    > Did you somehow miss Ashley's response? The SDK is perhaps your best bet.

    Her response is the reason I am bumping a topic from 2012: the SDK doesn't support it.

    Hence I am asking her to reconsider and expose edit-time drawing functions to Behaviours for debugging purposes, my example being one way to apply it.

    Ashley is a male, btw.

    Also, bump. I see no reason why behaviors shouldn't be able to draw - some possibilities have already been mentioned: a debug behavior, health bars, a speech bubble surrounding text (which has the benefit of being compatible with other text-modifying effects).

  • So,

    Fimbul From what you've said, am I correct in saying that I can design a peer that functions as a server using WebRTC?

    Yes! We'll see what Ashley comes up with - in principle, there's nothing in WebRTC preventing you from doing it all within Construct.

    Ashley would a Construct 2 app be able to successfully/feasibly function as a server using its upcoming p2p features with WebRTC?

    The only problem with that is whether you mean a normal server (one of the players is the host) or a dedicated server. Both are feasible with Construct 2, but for a dedicated server you'll either have to pair it up with a tracker (that is, a non-Construct server responsible for handling handshakes between peers), or Ashley will have to come up with a server-side exporter (maybe node.js?), which is a lot tougher. (But I would pay a hefty sum of money for a node.js exporter that could handle application design instead of purely game-making - might be a way to expand into the corporate market, Ashley? *Wink* *Nudge*)

    Matter of fact, I do have another question: can/will/does C2 have the ability to interact with MySQL? So say I create a peer server, could I then store/retrieve, say, user data using MySQL?

    Would this all (peer server & MySQL) even be worthwhile?

    Unless Ashley makes a node.js exporter (in which case you'd probably have plugin wrappers for the npm database drivers), the only way you'll be able to talk to MySQL is via the traditional method of calls to a webserver. If the "server" isn't publicly available and is just a Construct app that you leave open in Chrome (quite a flimsy architecture, if I may say so), you could design a pretty simple PHP proxy to forward AJAX calls to MySQL - it would probably need less than 20 lines of code total.
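    To give an idea of how small that bridge is, here's a sketch in Node.js instead of PHP (the table, columns and endpoint are made up; you'd call it from the C2 AJAX object):

    ```js
    // Tiny AJAX-to-MySQL bridge (sketch). The game would call, for example,
    // GET http://yourserver:3000/score/alice from the C2 AJAX object.
    const express = require('express'); // npm install express mysql
    const mysql = require('mysql');

    const db = mysql.createConnection({
      host: 'localhost', user: 'game', password: 'secret', database: 'mygame'
    });
    const app = express();

    app.get('/score/:user', (req, res) => {
      db.query('SELECT score FROM players WHERE name = ?', [req.params.user],
        (err, rows) => {
          if (err) return res.status(500).end();
          res.set('Access-Control-Allow-Origin', '*'); // so the browser accepts it
          res.json(rows[0] || { score: 0 });
        });
    });

    app.listen(3000);
    ```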

    It would definitely be worthwhile! For more complex games, say MMOs, you'd probably want to ditch Construct for the backend, though (unless there's that node.js exporter).

  • Ashley - but do you even have to have a switch point? Instead of having "use collision cells" be a project-wide setting, why not let it be a property of the turret/LoS behavior?

  • Wouldn't it be better to just optimize the turret and line-of-sight behaviors to work better in those edge cases? I.e., have the turret opt out of collision cells if it detects that it's doing extra work...?

  • Fimbul, I beg to differ. I'm not a game developer, I work with servers and datacenters for a living, so I actually know when I tell you that most people want servers for their games, and p2p has all types of trouble for data connections unless your requirement is just for 2 persons and packets are low on bandwidth.

    I also work with servers, datacenters and clusters, and I can tell you it's looking more decentralized every day, because monolithic servers simply cannot scale.

    Not only do I see people asking this all day but, like I explained before, even Microsoft switched Skype off from p2p. Most people using VoIP, which is going to be similar to game requirements, also use a middle server, usually a VoIP provider, and do not usually call from machine to machine.

    Microsoft moved Skype to the cloud. The p2p remains, but now the peers are Microsoft's own servers. Even if Microsoft ends up completely centralizing the whole Skype infrastructure, so what? As if Microsoft were the king of good decisions - see Internet Explorer.

    On the Google presentation they even explain that for WebRTC you will need a server to handle multiple connections, because that's just the nature of it once it scales.

    You need a server for it to work at all, not just for scaling it.

    This may work wonderfully for someone playing chess with another party, but not if you want to connect 100 players together playing the same game at the same time. The network will start to drag at the slowest link of all.

    What? If that were the case, BitTorrent would only download at the speed of the slowest peer, and that is clearly not the case.

    It's not correct that p2p is the shortest path either, because that is not how networks are deployed worldwide.

    ...snip...

    In this small example case, if your target is Latin America or you have players there, you would have a server in the US and it would be faster for everyone vs players connecting with each other.

    Player A in Brazil connects to Player B, also in Brazil, via link C in the US (p2p model)

    vs

    Player A in Brazil connects to server C in the US, which then transmits data to player B in Brazil (server model)

    How is that any faster? Worst case scenario, they are exactly the same. In all other cases, the p2p model is faster.

    Besides, like I said three times before, you can mix the models:

    Player A in Brazil connects to Player B also in Brazil

    Player A in Brazil connects to Server C in the US

    Player A sends data to both B and C

    Server C sends player A's data to player B

    Player B receives the data from player A and, much later, the same data again from server C

    From player B's perspective, data received directly from player A isn't trusted implicitly, but is used for interpolation. Data received from server C is authoritative and is used to determine serious state changes, such as kills, health, etc., and also serves as a backup should the direct link from A to B fail.
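    Roughly, player B's client would treat the two streams like this (a minimal sketch; the handler names and state shapes are made up, not anything Construct provides):

    ```js
    // Hybrid model from player B's point of view.
    const interpolationTargets = {}; // latest unverified positions, per player
    let authoritativeState = { players: {} }; // last state confirmed by server C

    // Fast, untrusted stream straight from player A: only used to smooth movement.
    function onPeerMessage(msg) {
      interpolationTargets[msg.playerId] = { x: msg.x, y: msg.y, t: msg.timestamp };
    }

    // Slower, authoritative stream relayed by server C: decides real outcomes.
    // If it disagrees with what the direct link showed, the server wins.
    function onServerMessage(msg) {
      authoritativeState = msg.state; // kills, health, score...
    }

    // Each frame: render toward the smooth interpolation targets, but read
    // anything that matters (health, kills, score) only from authoritativeState.
    ```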

    Even if they are local players (same city, state, etc) you will not achieve high bandwidth output with p2p; it will work for a couple of players tops, because each user needs to stream their connection up to the network, and even in countries with high Internet speeds this is usually very low. Example: last time I was in Germany on a residential ISP connection, a 25 Mbps connection only had 1 Mbps upload tops - now try connecting all your players via 1 Mbps. For a small game that only sends positions this will work fine, but it will cause problems as players increase. Now, most games only send a few details, like position, score, etc, but still, if latency sucks for one user it will drag the whole game down - this means all players.

    A 1 Mbps uplink is more than enough: you could send a 10-kilobit packet every 10 ms (that is, 100 updates per second) and only just saturate it. 10 kilobits is 1,250 bytes, or roughly 300 to 1,250 UTF-8 characters - an insanely large packet for position and score data - and in practice you could probably get away with something like 128 kbps.
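    The back-of-the-envelope math, using the numbers from the example above (assumed figures, not measurements):

    ```js
    // Uplink budget per update at 1 Mbps and 100 updates per second.
    const uplinkBitsPerSecond = 1000000; // ~1 Mbps residential upload
    const updatesPerSecond = 100;        // one packet every 10 ms

    const bitsPerUpdate = uplinkBitsPerSecond / updatesPerSecond; // 10,000 bits
    const bytesPerUpdate = bitsPerUpdate / 8;                     // 1,250 bytes
    console.log(bytesPerUpdate); // 1250 - huge for a position/score packet

    // Even at 128 kbps you'd still get 160 bytes per update at 100 Hz,
    // which is plenty for positions and scores.
    ```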

    In real live games this means suffering connection drops, slow games, etc.

    Is it all worth it?

    In particular because setting up a server is so cheap, and you will need to host scores, registrations, and probably a website for your game anyway.

    I'm not against P2P, but if P2P worked for most things we would not have datacenters and servers today. P2P, like its name says, is peer to peer, and is usually designed for person to person. I imagine someone developing a game will need more than just a couple of players - that is the idea of multiplayer. If you are creating a big multiplayer game there is no way P2P will ever work.

    If you only need to test it with a couple of people or are playing with friends, then this will work just fine.

    P2P is not exclusively person to person, though, it's peer to peer. Must I repeat myself again? A server is just a special type of peer!

    If you want to argue that you can't possibly program a server in Construct 2 to handle thousands of players, then yes, I agree, but no one said you had to do that. Fire up node.js - it can talk to C2 just fine.
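    For example, a bare-bones node.js relay that a C2 game could reach through its WebSocket plugin - a sketch using the ws package, with a made-up port and no game logic:

    ```js
    // npm install ws
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', socket => {
      socket.on('message', data => {
        // Relay every message to all other connected players.
        wss.clients.forEach(client => {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(data.toString());
          }
        });
      });
    });
    ```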

    But for open code, like HTML5 games usually are, a server side is the only way to protect the game, just like no one would allow users to view the ASP or PHP code that runs on the website. The server-side option is the only realistic option that can potentially avoid some of these cheating problems.

    Did you read what I posted? It makes no difference whether the game is made in HTML5 or C++, and it makes no difference if the game is hosted on a server or on the client's machine - the architecture is all that matters. If StarCraft 2 needed server-side security to prevent cheating, it wouldn't have become the e-sport it is today. If Bitcoin needed a server to provide security, it wouldn't be worth over $800 each.

    Besides, no one said you can't have a server.

  • And good luck with anti-cheat using JavaScript. There are client-heavy games that have succeeded in preventing cheating, but the only ones that come to mind are Flash games such as RotMG, which uses an insane AS3-specific obfuscation that JS will never have.

    No competent engineer will think a third-party anti-cheat solution is appropriate for an online game. You have things like GameGuard, PunkBuster, Valve's VAC and Blizzard's Warden, but all they do is create parallel markets for cheats. Cheating countermeasures must be integrated into your game's architecture.

    Even if client-side anti-cheating solutions were a good idea - and they are not - there are no technical reasons preventing you from deploying your own JS-based obfuscation that runs circles around AS3 solutions - heck, you'll probably make a ton of money if you can do it.

    Games like League of Legends have no anti-cheat program - you can run Cheat Engine right alongside it and it won't ever complain. Yet there are no cheats for LoL (except rudimentary tools that can't even be considered cheats). Meanwhile, Gunbound has had a decade of cheat-signature collection and has tried many anti-cheat providers, but it's still a festival of aimbots and hackers. Why do you think that is? One word: architecture.

    > ...snip... In a p2p system you often have a host and a client(s) - or at least from what I've learned in my classes - and if you figure out who is the host, and you are it, you can do pretty much anything unless you have a server to check it.

    Not at all. P2P simply means you allow clients (aka peers) to connect to each other. Like I said many times before in this thread, a server-client architecture can easily be achieved in a p2p solution, but not the contrary - hence Ashley saying server-based is a subset of p2p.

    Besides, who said that if you're the host you have free rein to cheat? Games like Warcraft 3 use that architecture and you cannot simply "cheat" the quantity of gold or wood you have: if you attempt to, the game desyncs and kicks you out (thus ending the game).
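    Conceptually the desync check is simple: every peer runs the same deterministic simulation, and each turn they compare checksums of their state. A sketch (the serialization and the disconnect callback are placeholders, not how Warcraft 3 actually implements it):

    ```js
    const crypto = require('crypto');

    // Honest peers produce identical state, so identical hashes.
    // Assumes a stable serialization of the game state.
    function hashState(gameState) {
      return crypto.createHash('sha256')
        .update(JSON.stringify(gameState))
        .digest('hex');
    }

    // Called once every peer's checksum for the turn has arrived.
    function verifyTurn(myState, remoteHashes, disconnect) {
      const mine = hashState(myState);
      for (const [peerId, theirs] of Object.entries(remoteHashes)) {
        if (theirs !== mine) {
          // Someone's simulation diverged - e.g. they edited their gold locally.
          disconnect(peerId, 'desync');
        }
      }
    }
    ```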

    I'd agree that you could properly design a game to prevent the majority of cheaters, but the degree you could achieve is limited by using JS or using C2 for web-browser games.

    What makes you think JS is inferior to anything else? Clients are self-contained: in order to cheat in a properly designed "pure" p2p game, you'd have to find a way to modify the code in the remote clients, which would require attacks of a magnitude far greater than simple game hacking. That is the same in all games, regardless of programming language. Making the game in C++ (for instance) would change nothing.

    This book explains everything you need to know about game security - I highly recommend it

    And then, only the host can mess around with things - and if you're playing with a cheating host, I guess you can just find someone else to play with.

    The clients can do their own verification to see if the other peers (including the host) might be cheating. In that case (excluding certain kinds of cheats, like bots), only the developer would be able to cheat.

  • Even if it has P2P capabilities, that is not the way you want to go.

    P2P does not work nicely in games and real-time protocols because you can't expect a certain quality in terms of lag, performance and speed. It may work great if your user A is in the same city as user B, but go to hell if user C is in another part of the world with a crappy Internet connection.

    Ugh, no. If you broadcast your data to peers, it will still be faster than sending it to the server only and letting it re-transmit, regardless of where you are in the world. A direct path from A to B is never longer than a path from A to C plus C to B.

    Finding or setting up your own server is just plain silly easy today and you will get so many more benefits - just use the same server where you are hosting your game.

    And you'll still be able to do that with a p2p architecture, but you also gain the benefit of not HAVING to have one.

    In P2P the server just acts as a hub to connect players together or initiate the handshake connection, but users are not connected via the server after that; data passes directly from one user to the other. This creates all types of problems.

    You are confusing the handshake server, a.k.a. the tracker, with the game server. You always need the tracker, but the game server is optional, and is just a special type of peer.

    Sure for a "AAA expert commercial product" this might not cut it.

    I don't see why it wouldn't cut it.

    I suppose the main issue I and others are concerned about is security and cheating.

    In a p2p connection, cheating is often as simple as loading up Cheat Engine and changing one or two variables. Obfuscation or no, this is incredibly simple and can take little to nearly no effort.

    You can implement a completely secure game on a client-only p2p architecture (that is, one that has no "hosts" or "servers"). Saying you can just "load up cheat engine and change some values" couldn't be further from the truth in a properly designed environment - that'd be the same as claiming you can edit the amount of bitcoins in a wallet or infect a torrent swarm with a malicious file.

    Serverless has another major downside:

    What about scores or leaderboards?

    User registration?

    This sort of functionality, which generally goes hand in hand with online multiplayer or gameplay, will typically not be available in a pure p2p gaming connection. (You can have scores and names exchanged between 2 p2p players, but not store them for later review or leaderboard purposes.)

    Ofc, you can have some alternate means, like updating through AJAX calls and whatnot... but then you would still need a server.

    You provided the answer yourself. AJAX calls are more than enough. Why do you think you are constrained to only one approach? When we say serverless, it means servers aren't required, not that they are banned.

    Having an active socket pumping data back and forth in your browser can cause lots of slowdowns.

    Especially if you're going to try more than two players.

    WebRTC is a new standard designed specifically to avoid said slowdowns and offer good performance.

  • Yep, I was just about to say that. P2P solutions can implement the authoritative server model, as long as you find a way to make clients always connect to the "server peer" and recognize it as an authority, while at the same time making it impossible for other peers to impersonate the server (this is all pretty easy to do).
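    The "impossible to impersonate" part boils down to the server peer signing its messages with a private key only the developer holds, while every client checks the signature against a public key shipped with the game. A sketch using Node's crypto module (in the browser you'd use the Web Crypto equivalent; the key below is a placeholder):

    ```js
    const crypto = require('crypto');

    // Shipped inside the game. Only the real server has the matching private
    // key, so an ordinary peer cannot forge a valid signature.
    const SERVER_PUBLIC_KEY =
      '-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----';

    function isFromRealServer(message, signatureBase64) {
      return crypto.createVerify('SHA256')
        .update(message)
        .verify(SERVER_PUBLIC_KEY, signatureBase64, 'base64');
    }

    // A peer claiming to be "the server" is only treated as an authority if
    // isFromRealServer(handshakeMessage, signature) returns true.
    ```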

  • Ashley - would you mind sharing what was wrong and what the fix was? It seems like it was a rounding error, and that could be useful to keep in mind as a JavaScript programmer.