In a more-or-less recent thread up yonder (the one about LED lighting that evolved into a discussion/argument about CFLs vs incandescents and power factor, among other things), a technical term and concept (power factor) was argued at length. I wonder how many folks actually were able to follow those arguments.
Myself, I really didn't know just what this mysterious "power factor" was. I did know that values lower than 1 were bad and caused power distribution inefficiencies that resulted in real losses of energy and money.
I now know what power factor is--sort of. The best explanation I ran across on the web was this really simple one. Instead of taking the mealy-mouthed Wikipedia approach of jumping right in with cosines and formulae and phase angles and other fancy stuff and *then* explaining just what the hell it *is*, this explanation is for the layperson:
Power factor in electricity is like efficiency. The best power factor is 100%.
Consider a child on a swing. If you push them when they are going backwards you will actually slow them down. In order to push with maximum efficiency, the motion of the swing and your push must be "in phase".
Similarly in electricity, voltage and current must be in phase for optimum performance. Equipment such as motors, ballasts and variable speed drives tends to move voltage and current out of phase with each other.
So it turns out that PF is actually computed as the absolute cosine of the phase angle, which also makes sense if one thinks about it. But I still don't really have a handle on the meaning of this number. How low does PF have to get before it's considered really bad? 0.8? 0.5? Don't have much of a handle on that yet. (That's the problem with them dimensionless numbers.)
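Just to get a feel for the numbers, here's a quick Python sketch of that "absolute cosine of the phase angle" definition. The sample angles are just ones I picked to see how fast PF falls off:

```python
import math

def power_factor(phase_deg):
    """Power factor as the absolute cosine of the phase angle (in degrees)."""
    return abs(math.cos(math.radians(phase_deg)))

# A few sample phase angles between voltage and current:
for deg in (0, 25, 45, 60):
    print(f"{deg:3d} deg -> PF = {power_factor(deg):.2f}")
```

Interesting takeaway: the current has to drift a full 60 degrees out of phase with the voltage before PF drops to 0.5, but even a modest 25-degree shift already knocks it down to about 0.91.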
I still don't know exactly how PF losses work in the real world, though I can take an educated guess that they result mostly in heating in transformers, transmission lines, etc.
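If that guess about resistive heating is right, a back-of-envelope sketch shows why low PF hurts: delivering the same real power at a lower PF means drawing proportionally more current, and I²R heating in the wiring grows as the square of current, i.e. as 1/PF². The load, voltage, and wiring resistance below are made-up numbers purely for illustration:

```python
# Back-of-envelope: same real power delivered at different power factors.
# All numbers here are hypothetical, just to show the scaling.
P = 1000.0   # real power delivered to the load, watts
V = 120.0    # RMS line voltage, volts
R = 0.5      # resistance of the supply wiring, ohms

for pf in (1.0, 0.8, 0.5):
    i = P / (V * pf)      # RMS current needed at this power factor
    loss = i**2 * R       # I^2 R heating wasted in the wiring
    print(f"PF={pf:.1f}: I={i:.2f} A, line loss={loss:.1f} W")
```

By this reckoning, dropping from PF 1.0 to 0.5 doubles the current and quadruples the heat dumped into the wires and transformers, which would explain why the utilities care so much.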