Nature just launched Nature Precedings, a home for pre-publication research and preliminary findings. Within seconds of browsing its very intuitive interface, I immediately got the purpose of this offering from the Nature Publishing Group.
The way it works is simple: you upload content as Word documents, PowerPoint files, or PDFs, and it is released to the community after a preliminary check for appropriateness of content and suitability for the Nature Precedings audience. Signed-up users can then vote on the content (à la Digg), and it moves up or down within its category. All of the content is also searchable, linkable, and citable.
As the help pages suggest, I hope the site serves, at the very least, as a repository of supplementary material and scientific findings related to work published anywhere, which can then be commented on and discussed.
More interestingly, the FAQ page informs us that Nature Publishing Group journals do accept material that has not yet been peer reviewed and that has appeared in preprint form. So if I put together a manuscript, I can first post it on Nature Precedings and then separately send it to Nature for review, and Nature would still consider it (if it meets its other criteria, of course).
So this site could be a great place to establish the provenance of ideas: I have a great new finding, I am gutsy enough to write it up in some form and post it on Nature Precedings, and then a few months later I send the finished work to a print journal, such as a Nature Publishing Group journal, that would accept it.
With all of this, Nature Precedings has great potential to become an online repository of preprint findings, supplementary material, and other content of use to the science community. I really can't wait for the first paper to make it from Nature Precedings to the real thing, Nature itself, with a citation that it first appeared online!
Powered by ScribeFire.
My good friend Deepak had a quote on his blog from Lincoln Stein about making bioinformatics as much an everyday tool for the practicing biologist as a pipettor (a device used by experimental biologists and chemists to dispense liquids).
I totally agree, but I think we are quite far away. For example, this morning I had to obtain the sequences of 772 SwissProt entries, which were part of an alignment, for some downstream analysis. Of course, my first choice was to query the NCBI Entrez database. I soon realized that the NCBI query box did not return any results for the first few queries I tried, all of which were probably new UniProt/SwissProt IDs (e.g., the sequence IDs Q57T52_SALCH and Q325Y4_SHIBS).
Disappointed, I turned to the EBI search engine. Within seconds I realized that the EBI does indeed serve up all of the entries. So there is a subset of UniProt entries that the NCBI does not have in its database.
Out of sheer curiosity, I entered the queries that drew a blank at the NCBI into Google.
Wonder of wonders: Google pulled up all of the hard-to-find UniProt entries as the very first match.
Thanks to the increasing use of publicly accessible web service APIs, Google is becoming more and more aware of a lot of very specific sequence data.
I will be very happy when I can type Q57T52_SALCH calc=MW and get an answer back right inside Google. Maybe that day bioinformatics will move one step closer to becoming just another tool.
Until then, I am stuck with learning about Equery and WSDL and SOAP and so on…
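As an aside, the calc=MW half of that wish is easy to sketch offline. The helper below is my own illustration, not any NCBI or EBI API: a minimal Python function that computes a protein's average molecular weight from its sequence, the kind of answer a hypothetical "Q57T52_SALCH calc=MW" query might return.

```python
# Average residue masses in daltons for the 20 standard amino acids.
AVG_MASS = {
    "A": 71.0788, "R": 156.1875, "N": 114.1038, "D": 115.0886,
    "C": 103.1388, "E": 129.1155, "Q": 128.1307, "G": 57.0519,
    "H": 137.1411, "I": 113.1594, "L": 113.1594, "K": 128.1741,
    "M": 131.1926, "F": 147.1766, "P": 97.1167, "S": 87.0782,
    "T": 101.1051, "W": 186.2132, "Y": 163.1760, "V": 99.1326,
}
WATER = 18.0153  # one water molecule added back for the chain termini


def protein_mw(sequence: str) -> float:
    """Average molecular weight of a protein sequence, in daltons."""
    seq = sequence.strip().upper()
    return sum(AVG_MASS[aa] for aa in seq) + WATER
```

For a real entry one would first fetch the sequence (from the EBI, say) and then hand it to `protein_mw`; the point is only that the calculation itself is a few lines, so wiring it into a search box is not a stretch.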
I will try to keep this post real short.
The Journal of Cell Biology carries a very useful article on error bars in experimental biology.
Sadly, the article is only available with a subscription, but here is a link to the abstract on PubMed, and the full text is available free at this link. The article talks about error bars in different contexts and how they should be used. Targeted at the non-statistics geek, the article is easy to follow and quite useful.
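One distinction that trips people up with error bars is standard deviation (the scatter of the data) versus standard error of the mean (the uncertainty in the mean). The little Python helper below is my own quick illustration of the difference, not something taken from the article:

```python
import math


def sd_and_sem(values):
    """Return the sample standard deviation and the standard error
    of the mean for a list of measurements."""
    n = len(values)
    mean = sum(values) / n
    # Sample SD uses n - 1 in the denominator (Bessel's correction).
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    # SEM shrinks with sample size: sd / sqrt(n).
    sem = sd / math.sqrt(n)
    return sd, sem
```

Because SEM shrinks as n grows while SD does not, the two kinds of bars drawn on the same data can look wildly different, which is exactly why labeling your error bars matters.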
My good friend Deepak, who got me into blogging, recently started podcasting. Like his excellent blog, the bbgm podcast is mostly about technology, computing, and other things biotech. Deepak is extremely well plugged in to the web 2.0 world, and his podcast is a fun medley of the things that catch his attention in the biotech and bio-IT business world. Recently he interviewed me on the fifth edition of his podcast. It was a lot of fun, and I did get to talk a little about screencasting, which is what we have been spending a fair amount of time on. He also got me hooked on the TED talks, which I would recommend very enthusiastically. I was fascinated by a TED talk about Photosynth and Seadragon by Blaise Aguera y Arcas from Microsoft. I am very excited by the “how” of this technology, since it shares many similarities with single-particle image processing and electron cryomicroscopy.
Check out the bbgm podcast here.
I have recently become addicted to the TED talks. I caught the TED talk by Craig Venter on various projects stemming from initiatives undertaken by the Venter Institute and his affiliated companies. One of the exciting things he talked about was the coming field of combinatorial genomics (CG). CG is essentially a marriage between synthetic biology and genomics: it deals with creating “synthetic” life forms with desired properties, obtained by screening a library of such microbes built by combining genes from a multitude of organisms.
This is, of course, made possible by the following technologies.
Knowledge of a minimal subset: work on the “minimal genome project” yielded the minimal set of genes required for a living, reproducing bug or virus.
The ability to synthesize large amounts of large DNA: in his talk, Craig Venter described their work in synthesizing the genome of Phi-X174 in full in two weeks.
The next piece of the puzzle comes from being able to assemble stretches of synthesized DNA quickly and combinatorially. Here the amazing bug Deinococcus radiodurans comes to the rescue. Deinococcus radiodurans is able to reassemble its genome from the thousands of small bits that result from very harsh radiation or severe drying. By exploiting the mechanism behind this amazing feat, it should be entirely possible to fully reconstitute an intact genome from a multitude of pieces.
The final piece of the puzzle, of course, is the genomics toolset itself. It is possible to assemble specialized gene subsets for any desired function by comparing genomes that carry out a particular function with closely related ones that do not.
So, given all this, Craig Venter talks of assembling a million chromosomes per day, transplanting them into cells or synthetic cells, and screening for a desired effect. This he dubs the emerging field of combinatorial genomics. A few of these desired functions are the stuff of biotech's promise since its inception: making hydrogen with photosynthetic bacteria, digesting cellulose to make ethanol, and making small molecules by metabolic pathway engineering.
There is more on the technological aspects of combinatorial genomics at syntheticgenomics.com, one of Craig Venter's companies. The TED talk above is also an excellent listen.
The Nature podcast section on Deinococcus radiodurans, and the mp3 file.