Scaling, up or down, will always introduce some form of visual degradation. There's no easy way around it: it follows directly from signal-processing theory (aliasing, interpolation, the convolution kernels used for filtering, and so on). It also looks like you're using non-uniform scaling, i.e. stretching, which makes the quality loss even more noticeable perceptually.
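A minimal sketch of that trade-off, assuming Pillow is installed (file names are placeholders): downscaling the same image with two different resampling kernels shows why the filter matters.

```python
from PIL import Image

src = Image.open("character_256.png")  # hypothetical 256x256 source

# Naive point sampling: fast, but high-frequency detail aliases badly
# because samples are dropped with no filtering at all.
aliased = src.resize((32, 32), resample=Image.Resampling.NEAREST)

# Windowed-sinc (Lanczos) kernel: low-pass filters the frequencies the
# 32x32 grid cannot represent before sampling, at a higher CPU cost.
filtered = src.resize((32, 32), resample=Image.Resampling.LANCZOS)

aliased.save("character_32_nearest.png")
filtered.save("character_32_lanczos.png")
```

Compare the two outputs side by side and the aliasing in the nearest-neighbour version is usually obvious, especially on fine detail like hair or text.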
The best working practice is to author resources at the scale they will be displayed at runtime, or close to it; this is true in both 2D and 3D. There's no reason to have a 256x256 resource to display a 32x32 sprite (which will probably lose its distinctive features anyway and would be better served by pixel art), just as there's no reason to put a 64x64 texture on a full-screen 3D character model that would call for a set of 1024+ textures. Filtering will lose important details and introduce artefacts either way.
In cases where an asset needs to support wildly different resolutions, the usual recommendation is to author multiple sets of resources manually (or with a process that involves human interaction, to control/tweak/maximise quality). In professional 3D games, various mip levels of important textures are authored by an artist rather than simply computed.
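As an illustration, here's a sketch of a build step that computes a mip chain but lets an artist override selected levels with hand-authored files. The override naming convention (`texture_mipN.png`), the paths, and the square power-of-two assumption are all made up for the example:

```python
import os
from PIL import Image

def build_mip_chain(path, override_dir):
    """Return the mip chain for a square power-of-two texture,
    preferring hand-authored levels when they exist on disk."""
    base = Image.open(path)
    name, _ = os.path.splitext(os.path.basename(path))
    mips, size, level = [base], base.width, 1
    while size > 1:
        size //= 2
        override = os.path.join(override_dir, f"{name}_mip{level}.png")
        if os.path.exists(override):
            # An artist authored this level by hand; trust it over the filter.
            mip = Image.open(override)
            assert mip.size == (size, size), "override has wrong dimensions"
        else:
            # Fall back to a computed level for the sizes nobody touched up.
            mip = base.resize((size, size), resample=Image.Resampling.LANCZOS)
        mips.append(mip)
        level += 1
    return mips
```

In practice only the one or two levels where a logo or face becomes unreadable need hand-authoring; the rest of the chain can stay computed.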
It is always possible to scale resources with more sophisticated algorithms (e.g. larger spline-based kernels), but these quickly become too expensive for runtime use and are best kept as an offline process.
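For instance, a minimal offline pre-scaling pass (target sizes and directory layout are assumptions, again using Pillow): each sprite is resampled once at build time with the expensive kernel, so the runtime only ever blits near its native size.

```python
from pathlib import Path
from PIL import Image

TARGET_SIZES = [256, 128, 64, 32]  # hypothetical per-resolution asset sets

for src_path in Path("assets/src").glob("*.png"):
    src = Image.open(src_path)
    for size in TARGET_SIZES:
        out_dir = Path(f"assets/{size}")
        out_dir.mkdir(parents=True, exist_ok=True)
        # Pay the cost of the high-quality kernel here, once, at build time.
        scaled = src.resize((size, size), resample=Image.Resampling.LANCZOS)
        scaled.save(out_dir / src_path.name)
```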