Knowledge Universe scores every result for freshness, decay, and authority before it enters your pipeline. One API. 13+ official sources. Stop gluing connectors together.
500 calls/month free. No credit card. Works with LangChain, LlamaIndex, any LLM stack.
Cosine similarity 0.94 on an 18-month-old doc. Your retriever does its job perfectly. Your user gets a confidently wrong answer. No exception. No warning.
Different schemas. Different rate limits. Different error modes. You spend 6 weeks building the data layer. Your actual product waits.
Every retrieval API (Tavily, Exa, SerpAPI) returns results confidently — even when they don't match your query. You never know until a user complains.
curl -X POST https://vlsiddarth-knowledge-universe.hf.space/v1/discover \
  -H "X-API-Key: ku_test_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "topic": "transformer architecture",
    "difficulty": 3,
    "formats": ["pdf", "github", "stackoverflow"]
  }'
import requests

resp = requests.post(
    "https://vlsiddarth-knowledge-universe.hf.space/v1/discover",
    headers={"X-API-Key": "ku_test_your_key_here"},
    json={
        "topic": "transformer architecture",
        "difficulty": 3,
        "formats": ["pdf", "github"],
    },
).json()

# Every result has a decay_score
for sid, decay in resp["decay_scores"].items():
    print(f"{decay['label']:10} score={decay['decay_score']} {sid}")

# Coverage confidence: did KU find good results?
cov = resp["coverage_intelligence"]
if cov["coverage_warning"]:
    print("Low confidence. Try:", cov["suggested_queries"])
{
  "total_found": 8,
  "credits_used": 1,
  "credits_remaining": 499,
  "decay_scores": {
    "arxiv:1706.03762": {
      "decay_score": 0.847,
      "label": "stale",
      "age_days": 2736
    }
  },
  "coverage_intelligence": {
    "confidence": 0.71,
    "confidence_label": "high",
    "coverage_warning": false,
    "suggested_queries": []
  },
  "sources": [/* normalized Source objects */]
}
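Once you have the response, you can gate what enters your index on the decay scores. A minimal sketch, assuming each normalized Source object carries an `id` matching the keys of `decay_scores`, and using a 0.5 cutoff that is our choice here, not an API guarantee:

```python
# Sketch: drop stale results before they reach your vector store.
# Field names mirror the sample response above; the "sources" shape
# (id + title) and the 0.5 threshold are illustrative assumptions.

SAMPLE_RESPONSE = {
    "decay_scores": {
        "arxiv:1706.03762": {"decay_score": 0.847, "label": "stale", "age_days": 2736},
        "github:hf/transformers": {"decay_score": 0.12, "label": "fresh", "age_days": 40},
    },
    "sources": [
        {"id": "arxiv:1706.03762", "title": "Attention Is All You Need"},
        {"id": "github:hf/transformers", "title": "transformers README"},
    ],
}

def keep_fresh(resp, max_decay=0.5):
    """Return only sources whose decay_score is below max_decay.

    Sources with no decay entry are treated as fully decayed (1.0)
    and dropped, on the conservative side.
    """
    scores = resp["decay_scores"]
    return [
        src for src in resp["sources"]
        if scores.get(src["id"], {}).get("decay_score", 1.0) < max_decay
    ]

fresh = keep_fresh(SAMPLE_RESPONSE)
print([s["id"] for s in fresh])  # → ['github:hf/transformers']
```

Because the filter runs on the response dict, it slots in before any chunking or embedding step in your pipeline, so stale documents never reach the retriever in the first place.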