After some thinking I've arrived at the idea that the hypothetical camera behavior we're discussing can (and possibly should) be fairly minimal. You see, there are a lot of contrived situations out there, and it's a good idea to restrict the user as little as possible. So here's what I suggest.
Add a 'Camera' behavior (and possibly ditch the 'Center view on me' attribute; I'm not sure there'll be a use for it once this Camera behavior exists) and a 'Camera boundary' attribute. Objects with the Camera behavior won't be able to pass through Camera boundary objects (those are effectively camera-specific solids). Give the Camera behavior a bunch of parameters (there's a rough sketch of how they might fit together after the list):
- Rotate camera - equivalent to an 'Always: set display angle to Self.Angle' event
- Scale camera - similar, only it links the zoom to the object's size
- Make 1:1 - on layout start, makes the view show only the area covered by the sprite. Probably redundant, but it eliminates the need to change both the camera object's size and the initial zoom factor when you decide to change the size of the viewport during development. Likewise, it eliminates the need to change the initial zoom factor when you decide to change the window's size.
- Create default boundaries - on object creation creates four rectangular objects with the 'Camera boundary' attribute framing the layout, so that the user doesn't have to do it manually.
- Invisible on start - on by default (I see no reason to leave this object visible, but someone might want to...). The Sprite object already has this property under 'Appearance', but it might be a good idea to duplicate it in the Camera behavior, so that all the relevant settings are grouped in one section. However, that could probably cause untold problems with resolving two identical properties. By the way, why don't, say, Tiled Background objects have this property? Also, why is there a way to edit the image associated with a Tiled Background object (Properties/Properties/Image), but no way to do the same with a Sprite object?
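To make that a bit more concrete, here's the rough sketch I mentioned of what the behavior's hooks might boil down to each tick. It's purely illustrative C++; every name in it (Object, Display, CameraParams and so on) is made up for the sketch and has nothing to do with Construct's actual plugin API:

```cpp
// Hypothetical per-tick logic for the proposed Camera behavior parameters.
struct Object  { float x, y, angle, width, height; };
struct Display { float scrollX, scrollY, angle, zoom, winW, winH; };

struct CameraParams {
    bool rotateCamera = false;   // link the display angle to the object's angle
    bool scaleCamera  = false;   // link the zoom to the object's size
    bool makeOneToOne = false;   // on layout start, fit the view to the sprite
};

void OnLayoutStart(const Object& cam, const CameraParams& p, Display& d) {
    if (p.makeOneToOne) {
        // 'Make 1:1': show only the area covered by the sprite by deriving the
        // initial zoom from the sprite's size and the window's size
        // (assuming matching aspect ratios).
        d.zoom = d.winW / cam.width;
    }
}

void OnTick(const Object& cam, const CameraParams& p, Display& d) {
    // Collisions with 'Camera boundary' objects would already have stopped the
    // object in the movement step; here the display just mirrors its state.
    d.scrollX = cam.x;
    d.scrollY = cam.y;
    if (p.rotateCamera) d.angle = cam.angle;           // 'Always: set display angle to Self.Angle'
    if (p.scaleCamera)  d.zoom  = d.winW / cam.width;  // zoom follows the object's width
}
```

The zoom arithmetic in OnLayoutStart is the 'Make 1:1' point from the list: derive the initial zoom from the sprite and the window, so neither has to be retyped when the other changes.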
This will hopefully take care of the basic things. Now about the 'Pan to object' and 'Follow object' ideas Deadeye mentioned. Those should probably be separate behaviors, or one behavior with a lot of parameters, so that the object has the 'Camera' behavior to actually control the viewport and another behavior to follow the character or whatever object is 'in focus' right now. After all, there seems to be no 'Chase another object' behavior right now, and having one would be very useful. There are a lot of useful features that could be crammed into such a behavior; I'm not ready to say anything specific about them right now.
Why bother splitting it into two behaviors? Well, a 'Chase' behavior is useful on its own, and besides, there are always times when a canned behavior doesn't cut it and the user will want a different way of moving the camera - using events and/or other behaviors.
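To give a feel for what I mean by a Chase behavior, here's a minimal per-tick sketch, again with made-up names (Object, ChaseParams) rather than anything that exists in Construct. The exponential-smoothing formula is just one possible choice; the point is that TimeDelta keeps the follow framerate-independent:

```cpp
// Hypothetical 'Chase another object' update: the camera object carries this
// behavior and homes in on its target; the Camera behavior above then mirrors
// the result to the display.
#include <cmath>
#include <cstdio>

struct Object { float x = 0, y = 0; };

struct ChaseParams {
    float stiffness = 5.0f;   // higher = snappier follow (1/seconds)
    float deadzone  = 8.0f;   // don't move while closer than this (pixels)
};

void ChaseTick(Object& chaser, const Object& target,
               const ChaseParams& p, float timeDelta) {
    float dx = target.x - chaser.x;
    float dy = target.y - chaser.y;
    if (std::sqrt(dx * dx + dy * dy) < p.deadzone) return;

    // Exponential smoothing; framerate-independent thanks to TimeDelta.
    float t = 1.0f - std::exp(-p.stiffness * timeDelta);
    chaser.x += dx * t;
    chaser.y += dy * t;
}

int main() {
    Object cam;                       // the camera object, starts at the origin
    Object player{300.0f, 200.0f};    // the object 'in focus'
    ChaseParams params;
    for (int step = 0; step < 5; ++step) {          // simulate 5 ticks at 60 fps
        ChaseTick(cam, player, params, 1.0f / 60.0f);
        std::printf("tick %d: camera at (%.1f, %.1f)\n", step, cam.x, cam.y);
    }
}
```

A deadzone, a maximum speed, separate horizontal/vertical stiffness and so on would all be natural parameters to pile onto something like this.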
Now about tilting, wobbling and shaking. Well, there already are some utility behaviors for that, but I have a different idea. What if someone wants to use the Sine behavior, only applied to the object's angle, not its position? Or a cycloid instead of a sine wave?
A while ago I tinkered a bit with Ogre3D (and in the end decided that it's way too early for me to dig into full-blown C++ programming and 3D), and stumbled upon a great feature: controllers.
The basic idea is that you attach a function to a variable, so that the variable changes on its own without the need to explicitly change its value each step (or, in Construct's case, without the need to clutter event sheets with technical calculations). So a controller has a value it changes, probably some values it takes as input (TimeDelta is the first that comes to mind), and a function that calculates the new value based on the input values (and possibly the old value). The function can be programmed to be anything, of course.
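Here's roughly how I picture it, sketched in C++ with invented names (Controller, ControllerManager); this isn't anything Ogre3D or Construct actually expose, just the shape of the idea. The usage part is the 'Sine applied to the angle' example from above; a cycloid, noise or a decaying shake would just be a different function:

```cpp
// A bare-bones take on the controller idea: a function attached to one
// variable, re-evaluated every tick with TimeDelta as input.
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

struct Controller {
    float* value;                              // the variable it drives
    std::function<float(float, float)> func;   // (old value, TimeDelta) -> new value
};

struct ControllerManager {
    std::vector<Controller> controllers;

    void Attach(float& var, std::function<float(float, float)> f) {
        controllers.push_back({&var, std::move(f)});
    }
    void Tick(float timeDelta) {
        for (auto& c : controllers)
            *c.value = c.func(*c.value, timeDelta);
    }
};

int main() {
    float spriteAngle = 0.0f;
    float elapsed = 0.0f;

    ControllerManager mgr;
    // 'Sine applied to the angle': wobble +/-10 degrees at 2 rad/s,
    // without a single event touching spriteAngle directly.
    mgr.Attach(spriteAngle, [&elapsed](float, float dt) {
        elapsed += dt;
        return 10.0f * std::sin(2.0f * elapsed);
    });

    for (int i = 0; i < 3; ++i) {
        mgr.Tick(1.0f / 60.0f);
        std::printf("angle = %.2f\n", spriteAngle);
    }
}
```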
I have no idea if it's at all possible to implement this in Construct, but it would be a very cool feature. Anyway, it can be emulated with events, but not all that easily. The way I see controllers, they attach to specific variables of specific objects, so when you try to simulate them with events you need to pick the exact object you need.
So all in all there'll be up to three different pieces needed to implement a good camera: the Camera behavior to attach the camera to an object, a Chase behavior to make that object follow the player, and some controllers to shake that object (and the camera with it) if needed.
There are two things I haven't thought about: multiple cameras (imagine that we want different boundaries for each camera in the layout; a single 'Camera boundary' attribute is insufficient for that, though it would work if it stored an object ID...) and camera-specific effects. That is, effects that 'live' in the local coordinate space of the camera, not in the global coordinate space of the layout. Say, horizontal waves on the screen representing dizziness: those should stay horizontal (relative to the camera) no matter how the camera is rotated. UI overlays also fall into the same category. So there's a need to rotate some of the layers together with the camera object. Now that I think of it, controllers could help with that, linking the required layers' angles to the camera object's angle via the trivial 'output = input' function. That might be a bit counterintuitive, however...
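For what it's worth, that 'output = input' link would just be one more controller. This fragment assumes the ControllerManager from the sketch above and, again, purely hypothetical names for the camera object and the layer:

```cpp
// Keep a layer turned together with the camera object by copying its angle
// every tick via a trivial controller.
struct HasAngle { float angle = 0.0f; };

void LinkLayerToCamera(HasAngle& uiLayer, const HasAngle& camera,
                       ControllerManager& mgr) {
    // The 'output = input' function: the layer's angle simply follows
    // the camera object's angle.
    mgr.Attach(uiLayer.angle, [&camera](float, float) { return camera.angle; });
}
```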