Inspired by CloudApp; I wrote this when I found myself wanting a command-line version.
This way we aren't carelessly littering our data all over the "cloud".
$ pagekite.py /path/to/file.blah yourname.pagekite.me
... send people a link to https://yourname.pagekite.me/file.blah and it streams from your disk. CTRL+C and it's offline with no copies stored anywhere in the cloud. Works with entire folders too (append +indexes to generate indexes), which is good for static HTML demos. If you want a harder-to-guess URL, append the +hide flag to the command above.
And yes, it's open source and you can run your own relay/reverse-proxy if you don't want to rely on me. Give it a try! :-)
I'd be curious whether such a company exists, or even any good open-source libraries that can tie into other servers.
Are you serious?
cat file | curl -F 'sprunge=<-' sprunge.us
curl -T /path/to/any/file flag.io
# or
cat some-file | curl -T - flag.io
Basically, pipe anything you want into it. It can also syntax-highlight your text files.

I hadn't come across filepicker.io before, and reading through the geturl code something jumped out at me:
APIKEY = check_output(['curl', '--silent', "%(fpurl)s/getKey?email=%(email)s" % {'fpurl': FPAPIURL, 'email': email}])
From that, it looks like any random person can fill up your filepicker.io space providing they have your API key or know the email address you used to register the account with. Made sense when I read a bit more about what filepicker.io actually does (i.e. a client-side embeddable javascript file uploader) but it's something to be aware of (especially if you link your account up to an S3 backend!).
In general, the API key doesn't actually provide much security as is; it's public by its very nature, since you have to put it client-side and expose it to all your users. We've got HMAC and secret keys in the pipeline for next week :D
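A minimal sketch of what such an HMAC scheme could look like — the policy format, field names, and signing scheme here are my assumptions, not filepicker.io's actual API:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"server-side-secret"  # hypothetical; never shipped to the client

def sign_policy(policy: dict) -> str:
    """Server-side: sign an upload policy so the public API key alone is useless."""
    message = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_policy(policy: dict, signature: str) -> bool:
    """Server-side: reject any request whose policy was tampered with."""
    return hmac.compare_digest(sign_policy(policy), signature)

# Hypothetical policy: what the caller may do, until when, up to what size
policy = {"call": "store", "expiry": 1735689600, "max_size": 10 * 1024 * 1024}
sig = sign_policy(policy)
assert verify_policy(policy, sig)
assert not verify_policy({**policy, "max_size": 10**12}, sig)  # tampering detected
```

The client only ever sees the signed policy, not the secret, so a leaked API key can't be used to mint new permissions.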
Also, isn't it normal to check the referrer when using API keys? That's what Facebook does -- API keys only work from certain domains, which effectively restricts their access. The downside is that you need to maintain separate API keys for every domain (staging, sandbox, etc), but the advantage is that they don't rely on the honor system :P
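A sketch of that kind of per-domain referrer check — the key-to-domain mapping and header handling are illustrative assumptions:

```python
from urllib.parse import urlparse

# Hypothetical mapping: one API key per deployment domain (prod, staging, ...)
ALLOWED_DOMAINS = {
    "key-prod-123": {"example.com", "www.example.com"},
    "key-staging-456": {"staging.example.com"},
}

def referrer_allowed(api_key: str, referer_header: str) -> bool:
    """Server-side: accept the call only if the Referer matches the key's domains."""
    domains = ALLOWED_DOMAINS.get(api_key)
    if not domains or not referer_header:
        return False
    return urlparse(referer_header).hostname in domains

assert referrer_allowed("key-prod-123", "https://example.com/upload")
assert not referrer_allowed("key-prod-123", "https://evil.example.org/upload")
assert not referrer_allowed("key-staging-456", "https://example.com/")
```

Worth noting: the Referer header is client-supplied, so this stops other sites' pages from embedding your key in a browser, but not an attacker scripting requests directly with curl.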
cp file_name ~/Projects/2012/ajf.me/ajf.me/imagedump/ && cd ~/Projects/2012/ajf.me/ajf.me/ && git add imagedump && git commit -m 'new file' && cd .. && ./update.sh
Elaborate, sure, but it does the job. update.sh runs git push and then SSHes into my server and does a git pull. (Because I'm too lazy to actually set up git on my own server.)

But if you have your own server, why don't you just scp it directly there? That's the piece most people are missing.
Something I noticed (https://www.filepicker.io/pricing/): Since filepicker.io charges by number of files and NOT by total size of files, a free account could potentially host a huge amount of data.
E.g. 5,000 files of 1 GB each = 5 TB!!!
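The arithmetic, for the skeptical (decimal units, where 1 TB = 1000 GB):

```python
files = 5000
gb_each = 1
total_gb = files * gb_each          # 5000 GB on a free account
total_tb = total_gb / 1000          # 5.0 TB
assert total_tb == 5
```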
Edit: pull request submitted.
> To make it easier to get started, if you haven't
> put in your S3 credentials we will store them on
> our servers, but as your usage increases we will
> ask you to move to your own storage.

pro: no storage backend needed
con: works only behind a UPnP router (most are ok)
Simply copy your files into ~/Dropbox/Public, right-click and choose Dropbox > Copy Public URL. Voilà!
Just a note:
28: exit("`curl` is requrired. Please install it")