Bruce Schneier points out that we software developers bear more responsibility to protect users than users bear to follow all of our instructions:
The problem isn't the users: it's that we've designed our computer systems' security so badly that we demand the user do all of these counterintuitive things. Why can't users choose easy-to-remember passwords? Why can't they click on links in emails with wild abandon? Why can't they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?
Traditionally, we've thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This "either/or" thinking results in systems that are neither usable nor secure.
We must stop trying to fix the user to achieve security. We'll never get there, and research toward those goals just obscures the real problems. Usable security does not mean "getting people to do what we want." It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users' security goals without -- as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it -- "stress of mind, or knowledge of a long series of rules."
I'm sometimes guilty of this, too. Still, I also feel that users can do genuinely foolish things that ought not to be our responsibility. After hearing countless stories about fraud, why, for example, do some users still give their credit card numbers to complete strangers?