Cygwin commands that run slower than their Windows counterparts are typically the syscall-heavy ones, where those syscalls differ significantly from what Windows provides and need a lot of emulation work. The biggie is fork(); it's far better to write scripts and the like so that they stream results rather than iterating and spawning new processes.
So, for example, rather than writing a script that converts Unix paths to Windows paths with iterated calls to cygpath -w, pipe all the paths to a single cygpath -w -f - instead. Rather than using pipe-to-sed (like "$(echo $foo | sed 's|bar|baz|')") for ad-hoc edits, use shell substitutions (like "${foo/bar/baz}").
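To make the fork cost concrete, here's a minimal sketch contrasting the two editing styles (the string and pattern are just illustrative):

```shell
foo="one bar two"

# Fork-free: bash parameter expansion edits the string in-process.
result_fast="${foo/bar/baz}"

# Fork-heavy equivalent: spawns a subshell plus sed for a single edit.
result_slow="$(echo "$foo" | sed 's|bar|baz|')"

# The same principle applies to path conversion on Cygwin: feed many
# paths to one long-running `cygpath -w -f -` instead of one cygpath
# process per path.
echo "$result_fast"
echo "$result_slow"
```

Both produce the same result; the difference only shows up when the pattern runs thousands of times in a loop, where each subshell-plus-sed pair costs two fork/exec round trips.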
Another thing that can be slow in Cygwin is find, when run over very large directory trees. I wrote a wrapper script (I call it rdir) that runs "cmd.exe /c dir /b" and massages the output into a Cygwin-style format. I also have a version of the same script written in terms of find, so that scripts that use it work on Windows, Solaris, Linux and OS X.
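A hypothetical sketch of such a wrapper might look like the following; the function name, dir flags, and branch logic are my illustration of the idea, not the author's actual script:

```shell
# rdir: list everything under a directory tree, one Unix-style path per line.
rdir() {
  if command -v cygpath >/dev/null 2>&1; then
    # On Cygwin: let cmd.exe walk the tree (fast, no fork per entry),
    # strip CRs from cmd.exe's CRLF output, then convert the Windows
    # paths back to Cygwin-style paths in one cygpath process.
    cmd.exe /c dir /s /b "$(cygpath -w "$1")" | tr -d '\r' | cygpath -u -f -
  else
    # On Solaris, Linux, OS X: the same interface in terms of find.
    find "$1"
  fi
}
```

Calling rdir /some/tree then behaves the same to the calling script on every system, which is the point: the scripts built on top of it don't care which branch ran.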
But I have to say, the biggest limiting factor when I'm solving ad-hoc problems is how easily I can compose the available tools, not the raw runtime speed of the tools themselves. Having all the Unix tools available makes my life far easier in this respect. They could be even slower and I wouldn't mind, because I'd still be saving lots of time compared to what Windows provides; and my scripts usually work on all my other systems running different OSes as well.
PowerShell doesn't even support simple fork-join, which bash does trivially:
for x in {1..10}; do (sleep $x; echo $x) & done; wait
I use this idiom a lot when dealing with lots of multi-gigabyte files. PowerShell is mostly useful to me when I need to access Windows-specific stuff that Cygwin doesn't do well, like WMI.
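For the multi-gigabyte-file case, the idiom looks like this in practice; here cksum stands in for whatever the real per-file work is, and the filenames are purely illustrative:

```shell
# Create two small stand-in files (in real use these are huge data files).
printf 'data-%s\n' one two three > big1.dat
printf 'data-%s\n' four five six > big2.dat

# Fork: each file gets its own background job, so they run concurrently.
for f in big1.dat big2.dat; do
  (cksum "$f" > "$f.sum") &
done

# Join: block until every background job has finished.
wait
```

The whole batch takes roughly as long as the slowest single file, rather than the sum of all of them.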