January 10, 2026
Metadata melee: bookworms vs. Bezos
Show HN: Librario, a book metadata API that aggregates Google Books, ISBNDB, and more
Crowds love it—but demand Amazon data, proof of sources, and alerts for new books
TLDR: Librario combines book info from multiple sources into one API, aiming to fix scattered metadata. The crowd loves the idea but argues Amazon holds key data, demands source tracking to avoid bad overwrites, and asks for alerts on new releases—promising if it tackles those pain points.
A new indie project just dropped on Hacker News: Librario, a book info API that grabs details from Google Books, ISBNDB, and Hardcover, then merges them into one clean response. Built by a sleep-deprived new parent with a 1,800-book home library, it's pre-alpha, open-source (AGPL), and you can try it live via a curl snippet. The dev even hired SourceHut pros to rewrite the database layer after admitting the first version was AI-designed. Cue applause.
But the comments? Pure bookworm drama. The top take: Amazon still hoards crucial data. One user basically said, "cool project, but without Amazon, good luck," and the thread turned into an Amazon-is-the-final-boss debate. Another camp chanted for open data, pointing to BookBrainz like it's the MusicBrainz of books. Feature-hungry readers begged for upcoming-release alerts ("tell me when my favorite author drops a new one!"). Meanwhile, a veteran warned: keep strict provenance (i.e., record where each piece of data came from and when) or watch bad auto-updates stomp on carefully fixed entries. Supporters want to plug it into tools like isbn-info.js, and the dev's merging tricks (penalizing messy titles, scoring covers by quality) got respectful nods.
Mood: excited, skeptical, and very HN. If Librario nails Amazon gaps and provenance, it could become the book nerd’s favorite backstage pass.
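That provenance warning translates neatly into code. Here is a minimal sketch, in Go (the project's language, per the key points below), of what a field-level provenance record could look like; the type, fields, and overwrite rule are assumptions for illustration, not Librario's actual code.

```go
package metadata

import "time"

// FieldValue is a hypothetical per-field record that carries provenance:
// which extractor produced the value and when it was fetched. Tracking this
// per field is what lets a merge step refuse to let a low-quality automatic
// update stomp on a hand-corrected entry.
type FieldValue struct {
	Value     string    // the actual data, e.g. a title or author string
	Source    string    // extractor name, e.g. "googlebooks", "isbndb", "hardcover", or "manual"
	FetchedAt time.Time // when this value was retrieved (or manually edited)
	Locked    bool      // set on hand-fixed entries so auto-updates leave them alone
}

// ShouldOverwrite decides whether an incoming value may replace the stored one:
// locked (manually fixed) values always win; otherwise the fresher fetch does.
func ShouldOverwrite(current, incoming FieldValue) bool {
	if current.Locked {
		return false
	}
	return incoming.FetchedAt.After(current.FetchedAt)
}
```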
Key Points
- Librario aggregates book metadata from Google Books, ISBNDB, and Hardcover into a single API response.
- It uses extractor priorities and field-specific merging strategies, including title scoring and cover quality evaluation (sketched below).
- Merged data is stored in PostgreSQL, with the database strengthening over time as more books are queried.
- A caching layer was added for performance; the project chose Go's net/http over Fiber after evaluation (also sketched below).
- The database layer is being rewritten by SourceHut's consultancy; the project is pre-alpha, AGPL-licensed, and available on SourceHut.
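To make the merging bullet concrete, here is a hedged sketch of field-specific merging with extractor priorities, a title penalty, and a cover-quality score. Everything in it (the Record type, titleScore, the width-based cover heuristic) is an assumption for illustration, not Librario's actual code.

```go
package metadata

import (
	"sort"
	"strings"
)

// Record is a hypothetical per-source extraction result.
type Record struct {
	Source     string // e.g. "googlebooks", "isbndb", "hardcover"
	Priority   int    // higher wins ties; stands in for Librario's extractor priorities
	Title      string
	CoverURL   string
	CoverWidth int // a stand-in for "cover quality"
}

// titleScore penalizes noisy titles: shorter, bracket-free titles score higher,
// mimicking the idea of demoting entries like "Title (Special Anniversary Box Set)".
func titleScore(t string) int {
	score := 100 - len(t)
	if strings.ContainsAny(t, "([") {
		score -= 25
	}
	return score
}

// Merge picks a title and a cover from several source records using
// field-specific strategies rather than a single winner-takes-all source.
func Merge(records []Record) (title, cover string) {
	// Best title: highest title score, extractor priority breaks ties.
	sorted := append([]Record(nil), records...)
	sort.Slice(sorted, func(i, j int) bool {
		si, sj := titleScore(sorted[i].Title), titleScore(sorted[j].Title)
		if si != sj {
			return si > sj
		}
		return sorted[i].Priority > sorted[j].Priority
	})
	if len(sorted) > 0 {
		title = sorted[0].Title
	}
	// Best cover: simply the widest image on offer.
	best := -1
	for _, r := range records {
		if r.CoverURL != "" && r.CoverWidth > best {
			best, cover = r.CoverWidth, r.CoverURL
		}
	}
	return title, cover
}
```

The appeal of the pattern is that each field gets its own winner, so the source with the best cover art doesn't automatically dictate the title.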
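And for the caching plus net/http bullet: a toy handler with an in-memory cache keyed by ISBN. The route, port, and response shape are invented for the example; the only thing it shares with Librario for certain is the choice of the standard library over a framework.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
)

// A toy in-memory cache keyed by ISBN. A real caching layer would add expiry
// and sit alongside the PostgreSQL store; this only shows the shape of
// "serve from cache, otherwise aggregate and remember".
var (
	mu    sync.RWMutex
	cache = map[string]string{}
)

// bookHandler answers GET /book?isbn=... with a cached JSON blob when one
// exists, and otherwise falls back to a stubbed lookup.
func bookHandler(w http.ResponseWriter, r *http.Request) {
	isbn := r.URL.Query().Get("isbn")
	if isbn == "" {
		http.Error(w, "missing isbn", http.StatusBadRequest)
		return
	}

	mu.RLock()
	body, ok := cache[isbn]
	mu.RUnlock()

	if !ok {
		// Stand-in for the real work: query the upstream sources and merge.
		body = fmt.Sprintf(`{"isbn":%q,"title":"unknown"}`, isbn)
		mu.Lock()
		cache[isbn] = body
		mu.Unlock()
	}

	w.Header().Set("Content-Type", "application/json")
	fmt.Fprint(w, body)
}

func main() {
	// Plain net/http, no framework: one route, the standard mux, ListenAndServe.
	http.HandleFunc("/book", bookHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```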