I haven't done any design work on power-supply circuits in a few years now, so my info may be out of date, but the way things used to work was like this:
True sine wave units were basically an audio amplifier circuit fed a sine wave as the input signal. The power output was really clean, but the output transistors ran very hot & this caused a fair amount of power loss.
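Just to put rough numbers on why a linear output stage runs hot, here's a minimal sketch of per-device dissipation in a class-B push-pull stage. The rail voltage, output amplitude, and load below are made-up illustration values, not anything from a specific unit:

```python
import math

# Hypothetical numbers: +/-170 V rails (enough headroom for ~120 V RMS
# output) driving a 10 ohm load. Purely illustrative.
VS = 170.0   # supply rail, volts
VP = 165.0   # output sine amplitude, volts
R  = 10.0    # load resistance, ohms

# Average the dissipation in ONE device of a class-B push-pull stage.
# During its conducting half-cycle the device carries the load current
# i = (VP/R)*sin(theta) while dropping the leftover voltage VS - VP*sin(theta).
N = 100000
diss = 0.0
for k in range(N):
    theta = math.pi * k / N          # 0..pi, this device's conducting half
    i = (VP / R) * math.sin(theta)   # load current through the device
    v = VS - VP * math.sin(theta)    # voltage left across the device
    diss += v * i
p_device = diss / (2 * N)            # average over the FULL cycle

p_out = VP**2 / (2 * R)                  # power delivered to the load
p_supply = 2 * VS * VP / (math.pi * R)   # total drawn from both rails
print(f"load power:      {p_out:7.1f} W")
print(f"per-device loss: {p_device:7.1f} W")
print(f"efficiency:      {100 * p_out / p_supply:5.1f} %")
```

With those numbers each output device burns off a couple hundred watts even near full swing, which is why those stages needed serious heatsinking.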
The approximated sine wave units were usually based on pulse-width-modulated square waves that were then filtered to come pretty close to an actual sine wave. If the quality of the filtering was good & the circuit was not overloaded, the output was usually fairly clean. One of the big upsides was that the output devices were almost always either full on or full off, due to the square waves going through them. This made them run MUCH cooler, with a lot less loss.

If you ran your carrier frequency up high enough, you could use a very small iron core in your inductors & still not run into a problem with flux saturation. Your "transformer" could normally be reduced in size & weight by a factor of 10 or more.
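Here's a minimal sketch of that PWM idea, using the classic sine-triangle comparison: the comparator output is the square wave that drives the switches, and a low-pass filter recovers the sine. The frequencies, modulation index, and the one-pole filter standing in for a real LC output filter are all my own assumptions for illustration:

```python
import math

# Toy sine-triangle PWM: compare a 60 Hz sine reference against a
# high-frequency triangle carrier. All values below are made up.
F_REF     = 60.0      # desired output frequency, Hz
F_CARRIER = 20000.0   # carrier frequency, Hz (the "run it up high" knob)
FS        = 1.0e6     # simulation sample rate, Hz

def triangle(t, f):
    """Triangle wave in [-1, 1] at frequency f."""
    x = (t * f) % 1.0
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

n = int(FS / F_REF)               # one reference cycle of samples
pwm = []
for k in range(n):
    t = k / FS
    ref = 0.9 * math.sin(2 * math.pi * F_REF * t)   # modulation index 0.9
    pwm.append(1.0 if ref > triangle(t, F_CARRIER) else -1.0)

# Crude one-pole low-pass standing in for the output LC filter: after
# filtering, the local average of the square wave tracks the sine reference.
alpha = 0.01
y, filtered = 0.0, []
for s in pwm:
    y += alpha * (s - y)
    filtered.append(y)

peak = max(abs(v) for v in filtered[n // 4:])   # let the filter settle first
print(f"filtered peak ~= {peak:.2f} (reference peak was 0.90)")
```

The square drive is what keeps the switches cool: they only dissipate during the brief transitions, since "full on" means low voltage across them and "full off" means no current through them.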
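And just to show where that "factor of 10" on core size comes from: the standard transformer EMF equation, V_rms = 4.44 * f * N * A_core * B_max, says that at a fixed voltage, turns count, and peak flux density, the required core cross-section scales as 1/f. The voltage, turns, and B_max below are placeholder values, not from any particular design:

```python
# Core-area scaling versus frequency, from V_rms = 4.44 * f * N * A * B_max.
V_RMS = 120.0   # winding voltage, volts RMS (placeholder)
TURNS = 50      # turns on that winding (placeholder)
B_MAX = 1.2     # peak flux density, tesla (ballpark for iron)

def core_area_cm2(freq_hz):
    """Minimum core cross-section (cm^2) to stay out of flux saturation."""
    a_m2 = V_RMS / (4.44 * freq_hz * TURNS * B_MAX)
    return a_m2 * 1e4

for f in (60.0, 600.0, 6000.0):
    print(f"{f:7.0f} Hz -> {core_area_cm2(f):8.2f} cm^2")
```

Every 10x increase in frequency cuts the required core area by 10x, which is the whole reason high-frequency switchers get away with such tiny magnetics.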
As for the output devices, I didn't see that many MOSFETs in the stuff I fooled with. Mostly, I saw IGBTs or SCRs in the switchers. Often, I would see discrete silicon transistors in a cascaded push-pull arrangement in the true sine units. Usually the second stage would be something like a TIP32 & the final stage would be something really chunky, usually in a TO-3 case. Most of the FETs I ran into were in true audio circuits. They give a warm, tube-like sound, which still seems to be preferred by musicians.
If things are different now, please feel free to fill me in.