As a web subscriber to JCIM (and a frequent author in said journal) I was reading the recent articles related to PubChem. If bandwidth allows I’ll post some additional comments shortly, but Rich Apodaca has already highlighted them.

In his posting Rich commented “The public is free to download and re-use the entire database of molecules and associated data. <...> witness both eMolecules and ChemSpider, two services that unashamedly exploit the PubChem resource. Expect to see more of this in the months ahead.”

Did we do this? Absolutely. We downloaded the data and used it to test our system, since we couldn’t easily source 10 million chemical structures anywhere else. We were very honest in our first press release on March 24th, where we said “… the launch of their ChemSpider Service, an online resource for chemists to search, aggregate and data mine publicly available chemical data. At time of release over 10 million compounds are indexed in the ChemSpider database including the PubChem collection and data provided by a number of other collaborators. …”

There is no Open Source software that we are aware of that can handle 10 million compounds. So, having constructed our system, we needed to performance-test it, and PubChem was a great resource, so we used it. The quality of the database (and I have talked about Zen and Quality already) was irrelevant at that point. We needed to test the system.

The ChemSpider dataset now contains over 16 million compounds, and the PubChem dataset contributes 10.3 million of those (10,297,250 to be precise), roughly 64%. We continue to add databases (1,2,3) and have put significant effort into curating the data ourselves and with the assistance of our users (4,5,6). We’ve committed to returning curated records to PubChem and are glad to do so.
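For the curious, the PubChem share of the ChemSpider dataset works out as follows; this is a trivial back-of-the-envelope check, and since the 16-million total is the approximate figure quoted above, the percentage is approximate too:

```python
# Back-of-the-envelope check of the PubChem share of ChemSpider records,
# using the figures quoted in the post.
chemspider_total = 16_000_000   # "over 16 million compounds" (approximate)
pubchem_records = 10_297_250    # "10297250 to be precise"

share = pubchem_records / chemspider_total
print(f"PubChem share: {share:.1%}")  # prints "PubChem share: 64.4%"
```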

We presently have more data to add to the ChemSpider database: unique structures that are not contained in PubChem. Where allowable we will deposit these to PubChem in the future. For now, though, we have more important things to do, specifically addressing those Five Things We Don’t Like About ChemSpider.

The work on Quality checking continues, and there will be an abundance of data to return to PubChem. See the recent discussions about Taxol and Thimerosal as examples. There’s a lot more where those came from.

Regarding the comment “… eMolecules and ChemSpider, two services that unashamedly exploit the PubChem resource,” I’ll confess that the PubChem database is on ChemSpider. However, eMolecules is now a database of 7 million chemicals, and they have received data from many chemical vendors, so it appears that they have only utilized the subset of PubChem appropriate to their business model.

PubChem is a great resource. For those of you interested in how the data are being used and examined, I do recommend the papers Rich has pointed out. Also, add Depth-First, Rich’s site, to your blogroll; it’s one of the best reads in the blogosphere, in my opinion.


3 Responses to “Does ChemSpider Unashamedly Exploit the PubChem Resource?”

  1. David Bradley says:

    Use of PubChem for this kind of purpose should come with no shame at all. I thought the whole point of an Open system like PubChem was that everyone is free to use it for whatever purpose they like, so long as attribution is given. Isn’t that right?


  2. Tobias Kind says:

    Hi Tony,
    I also like Rich Apodaca’s articles; they are very well written.

    Now to Rich’s accusations:
    “eMolecules and ChemSpider, two services that
    unashamedly exploit the PubChem resource.
    Expect to see more of this in the months ahead.”

    And you (unashamedly) replied
    “Did we do this? Absolutely.”

    I did not read further because I already knew
    in which direction this discussion would go. Instead I googled for
    1) hack PubChem
    2) exploit PubChem
    3) suck PubChem (suck data)
    4) abuse PubChem

    All these searches lead us to Rich’s blog,
    and of course he doesn’t like you plundering
    his domain :-) (Just teasing Rich a little bit.)

    Back to serious talk. I think this is what Open Data is all about.
    Use it, exploit it, sell it, squeeze it, hack it,
    calculate (things with) it, do science with it, *innovate with it*.
    It is absolutely *important* to unashamedly exploit Open Data.
    And in your case (ChemSpider) it even generates value for
    many people by creating combined data resources and
    properties which may not be easily accessible from other services.

    I think what many Open Data advocates (Open Data will finally
    power the semantic web) are miffed about is that many researchers
    and companies use open data (like PubChem or PubMed) without ever contributing to it. But ChemSpider contributes by making the resource available to everyone (OK, in a more “conservative” way, by not allowing bulk download). Still, it’s a new and free-to-use service.
    If ChemSpider is not innovative enough it will die anyway,
    like ChemFinder (RIP).

    Rich says: “By placing high costs on access to its service and severely
    restricting its use, the ACS has effectively shut out anyone wanting
    to build another service on top of CAS. Clearly this was part of the plan.”

    Rich’s comments are aimed at the Chemical Abstracts Service
    and its parent, the ACS, and their funny way of lobbying against PubChem in Washington, which backfired in such an extreme way.
    We still don’t know why the dinosaurs died out, but we know they went extinct. And we know that if there’s a Brezhnev, there’s always a Gorbachev :-)

    Nothing to add, except some facts: a 5-seat CAS university license
    in Europe costs $80,000 a year, and for commercial operations it’s $200,000. You are allowed to store 5,000 CAS numbers for a specific research project, which need to be deleted after the project is finished. Use of more than 5,000 CAS numbers is not allowed, or requires a one-time fee so high it cannot be disclosed (here). Beware! CAS is a commercial operation, so they can ask for any price the customer is willing to pay; it’s clearly not their fault. (Remember the sticky price?)

    Kind regards
    Tobias Kind

  3. Joerg Kurt Wegner says:

    I personally think that ‘open source’ is required for creating standards and adding innovation to the technology pool. With respect to performance, I would love to see a proper comparison, especially for different aims. But I agree, the key players are all commercial or in-house. The question is what is more important, speed or quality, and which tools perform better on which tasks?
    We now have proper comparisons for docking tasks (the DUD dataset), but what about cheminformatics tasks?

    We need to learn … all sides have to accept that … and I hope the global tendency is moving in this direction … commercial or not …

    Cheers, Joerg
