What I saw instead were people spending the vast majority of their time pipetting. All the way up the ladder, up to and including postdocs. I sometimes thought our PI had it worse, since she had to spend most of her time applying for grants.
The AWSification of synbio research would be a game changer. Some labs at Hopkins have tried to build robots, but with limited success. Given how cheap labor is at research institutions, competing on price will be incredibly difficult.
I'm happy to leave someone else to do that. I'd rather be in a job I actually enjoy the day-to-day of.
And that's to say nothing of the problems of PhDs: namely that there are ten times more PhD positions than there are postdoc positions. That ten-to-one crunch when it comes to finding a job sure does sound fun...
If you want to contribute to Firefox or any other non-trivial open source project, you need to spend time setting up a development environment, and it will likely take weeks to months before you can make a substantive contribution.
If anyone is reading the comment I'm responding to or its parent comment, keep in mind that the manual labor is in pursuit of a goal.
To add on: Some words have valuable and specific meanings! We don't have a good substitute word for "exponential" that means the same thing. Please make an effort not to do this!
Are you really saying that going from 10 samples to 11 samples causes a doubling (or 1.2x-ing or tripling or whatever) of the time/work required? That's what exponential means.
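To make the distinction concrete, here's a tiny sketch (the constants are arbitrary; the point is the shape of the growth):

```python
# Exponential growth: each additional sample MULTIPLIES the total
# time/work by a constant factor r. Linear growth just ADDS a
# constant amount per sample.
def exponential_time(n, base=1.0, r=2.0):
    return base * r ** n

def linear_time(n, per_sample=1.0):
    return per_sample * n

# Going from 10 samples to 11 samples:
print(exponential_time(11) / exponential_time(10))  # 2.0 -- the work doubles
print(linear_time(11) - linear_time(10))            # 1.0 -- one more unit of work
```

If adding the 11th sample just means one more unit of pipetting, that's linear, not exponential.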
(I don't work there or anything)
I think there have also been a lot of independent academic attempts at this (see: http://klavinslab.org/ which is CS/BioE at UWash), but they all kind of waded around in the shallow water.
The reason I think this is compelling is that almost every synthetic biologist has an existing workflow. It's basically: design using some sort of CAD software, order from IDT, receive materials the next day, run the test by hand, ship to Genewiz for sequencing, etc. That's just one example of a workflow involving 4-5 specialized 'steps'. As the steps get cheaper/faster/better, consolidating and automating this is just a no-brainer.
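The workflow above could, in principle, be driven end to end by code once each step sits behind an API. A minimal sketch, where every function name is hypothetical (not any vendor's real API), just to show the consolidation idea:

```python
# Hypothetical pipeline mirroring the workflow described above.
# Each specialized step is a stub; in a real cloud-lab setting each
# would be an API call to a vendor (synthesis, assay, sequencing).
def design_construct(spec):
    # Step 1: design in CAD software; here we just fake a sequence.
    return {"spec": spec, "sequence": "ATGC..."}

def order_synthesis(construct):
    # Step 2: order DNA from a synthesis vendor (e.g. IDT).
    return {"order_id": 1, "construct": construct}

def run_assay(materials):
    # Step 3: run the test (by hand today, by robot ideally).
    return {"materials": materials, "readout": "ok"}

def verify_by_sequencing(sample):
    # Step 4: ship out for sequencing (e.g. Genewiz) and confirm.
    return {"sample": sample, "verified": True}

def workflow(spec):
    # Consolidating the 4-5 steps into one automated pipeline.
    construct = design_construct(spec)
    materials = order_synthesis(construct)
    data = run_assay(materials)
    return verify_by_sequencing(data)

result = workflow("reporter construct")
print(result["verified"])  # True
```

The value isn't in any one step; it's that once each step is programmable, the whole loop can be queued, retried, and scaled like any other job.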
Transcriptic, on the other hand, started taking orders six months ago and has customers at Stanford, Caltech, Harvard, and more.
Having the infrastructure 'warehouse' layer that Transcriptic is building (with a real API! wow!) will definitely be valuable. And as you hint at, power users won't need hand-holding, but 99% of the market will. That's where packaging, ease of use, and limited configuration seem to be the difference maker (think Heroku starting exclusively with Rails).
However, a service like Transcriptic may make sense if (a) you're in a company (no free undergrad labor, though summer interns may be a suitable alternative) or (b) you don't already have the equipment and just want to do a one-off collection of a large amount of data. Also, maybe prices will significantly drop as Transcriptic scales up and streamlines their operations. I'll definitely be checking back in the coming years to see if they ever reach the point where it makes sense to use their services.
If anyone in this thread thinks this is an interesting topic I'm easy to reach at max@transcriptic.com.
The first two bullet points there are like the two biggest red flags possible in an ops job post. It reads as a development team that has built a fragile and unreliable system and is looking for a superman to dump it on.
It matters much more that your VP of Engineering can capacity-plan than that your ops hire can code. No amount of ops rockstars can fight a (larger) dev team that won't design with real-world workload capacity and reliability as not just a concern but a focus.
Another Transcriptic employee just Slacked everyone your comment here, which has prompted a discussion about what we're really looking for in an "ops" person. The "exceptional coding skills" bullet is in almost all of our engineering job postings, and we thought such skills would apply to really good "devops" people, too; maybe that's wrong and asks for the wrong skill set. (The SREs I know at Google are all really good developers.)
Being an "on-call position" is a side effect of our volume and the fact that cells don't stop dividing at 8pm. Depending on when projects get started, we often end up running reactions around the clock, so yes, there is a (metaphorical) pager involved. Even minor failures here are very time-sensitive given the biology involved, and lost samples can be extremely costly (and devastating to our reputation with customers). I think this ops role is more about setting up the processes than about being the only one(s) to respond to issues.
We'll be reflecting on that job description and updating the posting.
These are extremely dependent on the question being studied, are often not amenable to automation, and may require very rare, expensive, and difficult-to-handle samples. For example, my collaborators work with transgenic mice that are a model for a particular disease, and these mice have to be bred and then aged to 12 weeks until they exhibit the phenotype before we can even start an experiment. In another model, they have to do brain surgery on each mouse and then wait several weeks for the phenotype.
The 'easy' parts, such as DNA synthesis and sequencing, are already highly standardized and automated, and there is fierce competition to improve the technology and bring costs down.
Any worthwhile work I have ever done has mostly been about grunt work. Along the way there have been cool things (after all, Leno once made fun of our research [1]) and insanely fun times. I may not be in research now, but every day I apply the lessons learned from patiently repeating and iterating.
1. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=723226...
It's a difficult problem to solve, because these pesky researchers are always trying new things you didn't anticipate - who would've thought! Still, for the mundane things that can be automated, something like this is definitely the way to go. Of course, as others here point out, figuring out what to actually test is always the hardest part.
I believe they describe themselves more as a "GitHub of Science" for scientific collaboration. Adding hooks to 'push' tasks and 'check out' findings could be a natural extension of their platform.
There is a great deal we do not know about cellular biology. Any simulation would be a fairly gross approximation. The point of many experiments is to further our understanding of the model of cellular mechanics.