I fully agree that you should always try to contribute the changes you make to a Free Software project. This not only has the advantage of less maintenance work in the long run; it also means that your changes will be reviewed - by people who know the code you're modifying very well. So contributing also gets you good quality assurance.
However, it still makes a lot of sense to keep a local fork in addition to contributing. And here the author argues too one-sidedly when he recommends doing that only for security fixes. There are many other scenarios in which this makes sense:
1) The change may be important for you (e.g. to make the code compile on some strange OS), but not be accepted by upstream (e.g. they don't want to support that strange OS in the long run).
2) The review process, as well as the next release, may take some time. And you certainly don't want to make your own release schedule totally dependent on other projects' release schedules.
In the first case, you have no choice but to keep a local fork as long as the project maintainers don't change their mind or provide a better solution.
But even in the second case you'll have a long-term fork, at least if you are contributing regularly. That's not a bad thing, though, because every time the upstream project releases a new version, you can remove some of your (contributed) local changes from your fork. So yes, you'll have a long-lived fork, but it will only differ from upstream by the last few patches not yet accepted by them.
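As a minimal sketch of this workflow with git (all repository names, file contents, and tags here are made up for illustration): once upstream accepts an equivalent of your patch and cuts a release, rebasing your fork onto that release automatically drops the now-redundant commit, leaving only the patches upstream hasn't taken yet.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# 1. A hypothetical upstream project with a release tag.
git init -q upstream
(cd upstream && echo 'core code' > main.c \
  && git add . && git commit -qm 'initial import' && git tag v1.0)

# 2. Your fork carries one local patch on top of that release.
git clone -q upstream fork
(cd fork && echo 'solaris fix' >> main.c \
  && git commit -qam 'fix build on Solaris')

# 3. Upstream accepts an equivalent patch and releases v1.1.
(cd upstream && echo 'solaris fix' >> main.c \
  && git commit -qam 'fix build on Solaris (contributed)' && git tag v1.1)

# 4. Rebasing onto v1.1 skips the commit that is already upstream
#    (git compares patch-ids and drops commits it finds in the new base).
(cd fork && git fetch -q origin --tags && git rebase -q v1.1)

# Count the patches the fork still carries on top of the release.
remaining=$(cd fork && git rev-list --count v1.1..HEAD)
echo "patches still carried after rebase: $remaining"
```

After the rebase the fork is byte-identical to v1.1, so the remaining patch count is zero; any patches upstream had not yet merged would survive the rebase and stay on top.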
The pain comes in when the upstream project decides to refactor the code in a way which forces you to completely rewrite your patches.
Note that this does not apply to the situation where you intend to fork and maintain the project yourself - in that case, by all means make your changes. But if you intend to keep pulling in upstream updates, this pattern will help you avoid the pain.
"The freedom to run the program means the freedom for any kind of person or organization to use it on any kind of computer system, for any kind of overall job and purpose, without being required to communicate about it with the developer or any other specific entity. In this freedom, it is the user's purpose that matters, not the developer's purpose; you as a user are free to run the program for your purposes, and if you distribute it to someone else, she is then free to run it for her purposes, but you are not entitled to impose your purposes on her."
The Free Software Definition http://www.gnu.org/philosophy/free-sw.html
But with that in mind, there are plenty of cases where you do need to maintain a proprietary fork: testing new ideas, integrating with internal infrastructure, and so on. This is certainly more difficult than letting someone else maintain the project, but less difficult than being the maintainer yourself. You basically miss out on big refactorings, but it's no different than being a regular user of a library that makes incompatible API changes.
The other project explicitly allows for code modifications and accommodates them in a separate directory you can keep in revision control. Again, I have no memory of major conflicts in the things we did correctly - i.e. where we used the provided overlays or callbacks.
So, when is it a bad idea? When you step outside certain constraints (like our callbacks example above) and override or replace or modify directly the project's own code in some significant fashion.
So, modification of open source for your own needs is a major plus if the project accommodates it sanely or the scale and nature of the changes are controllable.
If they want to give up the intangible benefits they would get from sharing those changes (at least the non-business-strategic ones), that's disheartening, but I wouldn't lose any sleep over it.
So yes: avoid forking when possible, but don't be afraid of maintaining a fork if the benefits are worth the effort.
My usual reason for having a fork is Solaris support, or removing code which I don't need for performance reasons, or replacing the CMake build system with something sane. These aren't the kinds of changes which many maintainers are willing to accept.
Another frustratingly common case is that the original maintainer has gone AWOL and I need to fix a few bugs and maybe add a couple of features, but I do not want to become the de facto maintainer of a public fork.
One solution might be for applications to pull in all of their dependencies as subrepositories in their version control systems. But then where would we stop? At the implementation of the application's main programming language or managed runtime? At the C library? At the operating system itself (assuming the OS is open source)? This would also seem to encourage a single dominant version control system.
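The subrepository idea above can be sketched with git submodules - here is a minimal, self-contained illustration (every name, path, and tag is made up; note that recent git versions disallow local file-path submodules unless `protocol.file.allow` is set, which a real setup using remote URLs would not need):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# A hypothetical dependency with a tagged release.
git init -q libfoo
(cd libfoo && echo 'int foo(void);' > foo.h \
  && git add . && git commit -qm 'libfoo v2.3' && git tag v2.3)

# The application vendors it as a submodule pinned to a specific commit.
git init -q app
cd app
git commit -q --allow-empty -m 'initial'
# protocol.file.allow=always is only needed because this demo clones
# from a local path; remote URLs work without it.
git -c protocol.file.allow=always submodule add "$tmp/libfoo" third_party/libfoo
git commit -qm 'Vendor libfoo as a pinned submodule'
git submodule status third_party/libfoo
```

The superproject records only a commit hash in `.gitmodules` plus the tree, so the dependency is pinned exactly - which also shows the downside the paragraph hints at: every consumer must now use git, and someone has to keep bumping the pin.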
So I have no definite answers.
By contrast, even with the biggest codebase you use in a language like Python, you can dive into the code directly. Well, unless it's Software as a Service.