The most interesting part of this to me is using multiple DNS providers to determine which category the site is in. It's both simple and effective.
If they actually go ahead with this plan in the UK and it's implemented similarly (e.g. via DNS rather than IP blocking), somebody should make a list of what's blocked: go through the top N sites and, for each, run a DNS lookup against both a filtering DNS server and a couple of unfiltered ones (e.g. Google DNS[1]), then compare the results[2].
Bonus points if someone builds a way to crowd source the data so that it gets logged from multiple DNS servers round the world.
[1]: https://developers.google.com/speed/public-dns/
[2]: This would need to do more than a plain A == B comparison, as each domain can resolve to multiple IP addresses.
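The comparison in [2] can be sketched as follows. This is a rough outline, not the author's tool: the resolver addresses in the comments are examples, and the helper names are made up. The key point is treating a domain as likely filtered only when the two answer sets share no address at all, since round-robin DNS means two honest resolvers can return different (but overlapping) lists.

```shell
#!/usr/bin/env bash
# Compare the answers from a filtering resolver and an open resolver.

overlap() {
  # Print the addresses common to both space-separated IP lists.
  # ($1/$2 are left unquoted inside printf on purpose, to split on spaces.)
  comm -12 <(printf '%s\n' $1 | sort) <(printf '%s\n' $2 | sort)
}

likely_filtered() {
  # "yes" when the two IP sets are completely disjoint.
  [ -z "$(overlap "$1" "$2")" ] && echo yes || echo no
}

# In practice the two lists would come from something like:
#   dig +short example.com @8.8.8.8           # open resolver
#   dig +short example.com @<isp-resolver>    # filtering resolver
```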
The current UK ISP filter (the one that already blocked Wikipedia) used DNS and HTTP together: DNS lookups for sites that needed filtering returned the IP address of the ISP's HTTP proxy, and that proxy then filtered specific URLs. This allowed them to block individual URLs rather than whole sites. It was initially detected because Wikipedians noticed a large volume of edits (basically much of the UK) coming from a small number of IP addresses: the addresses of the proxies.
http://www.smh.com.au/technology/technology-news/how-asics-a...
Tip of the hat to the author...
What you want is a website that answers: does Country C block Website W? A user gives it a URL; the site has VPN endpoints in lots of different countries, tries them all, and displays in which countries the URL is blocked.
The website also stores and records all blocked/unblocked websites, and allows this data to be downloaded.
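The per-country probe described above could be sketched like this. Everything here is hypothetical (the function names, the proxy argument, the status-code heuristic): the idea is to fetch the URL through a proxy in each country and treat a request that never completes (curl reports HTTP status 000) as a possible block.

```shell
#!/usr/bin/env bash
# Map an HTTP status code from a probe to a rough verdict.
classify() {  # classify <http-status-code>
  case "$1" in
    000)   echo "unreachable (possibly blocked)" ;;
    2*|3*) echo "reachable" ;;
    *)     echo "reachable (HTTP $1)" ;;
  esac
}

# Probe one URL through one proxy (e.g. a VPN exit in some country).
check_url() {  # check_url <url> <proxy>
  status=$(curl -s -o /dev/null -m 10 -w '%{http_code}' --proxy "$2" "$1")
  classify "$status"
}
```

A real service would also need to distinguish "proxy is down" from "site is blocked", e.g. by probing a known-good control URL through the same proxy.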
sudo !!

e.g. Check what you're about to delete first:
$ ls *.backup
a.backup b.backup c.backup
$ ^ls^rm
rm *.backup
Something else that saves a lot of time: incrementally search backwards through your command history using ctrl-r instead of the arrow keys. E.g. to cycle through every "grep": press ctrl-r, type grep, and it jumps to the most recent command containing "grep". Each further press of ctrl-r jumps further back in time. If it's something you expect to search for a lot, you can even tag commands with # comments and then search for the comment. (There's a fine line there, though... if you reuse a command really often you should probably alias or script it.)

Command history uses the 'readline' library, so all(?) the other editing-related emacs chords will work on it: ctrl-a/ctrl-e to jump to the start/end of the line, ctrl-r/ctrl-s to search, alt-f/alt-b to jump by words, etc. Oh, and an emacs kill-ring too, that's pretty useful.
Enjoy.
...
...
But there's one more thing.
This is a feature of GNU Readline, not a feature of bash. Other apps that use readline will also accept these chords.
Things like the ruby and python shells, mysql, etc.
You think you can do a lot in those tools now? Learning to leverage everything that readline gives you will take you to a whole new level.
Have fun exploring :)
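Since readline is configured per-user, its behaviour can be tweaked once for every program that uses it via ~/.inputrc. A small example (these are real readline bindings, but the choice of tweaks is just a suggestion): this makes the arrow keys do prefix-matching history search, similar in spirit to the ctrl-r trick above.

```
# ~/.inputrc -- applies to bash and every other readline program
# Up/Down: search history for commands starting with what's typed so far.
"\e[A": history-search-backward
"\e[B": history-search-forward
# Case-insensitive tab completion is another common tweak.
set completion-ignore-case on
```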
fucking !!
!gre # will run the last command starting with gre (so probably grep)
If you type history, then !<number to the left of the history command>, the shell will execute that command[0].

$ history # shows command history
$ !200 # executes command 200

[0]: https://www.gnu.org/software/bash/manual/bashref.html#Event-...
pns.py
I was cracking up. Brilliant!
sudo ss -lpu 'sport = 53'