Cold Start Benchmark
SynapseKit 1.4 vs LangChain 1.2 vs LlamaIndex Core 0.14 — CPU only, no API keys
Transitive Dependency Count
Every package the install pulls in, direct and indirect. This count determines security surface, conflict risk, and audit burden.
Key Insight
LangChain's 267 packages are 133x SynapseKit's 2. Each package is a potential CVE, version conflict, or compliance line item. Lazy loading doesn't reduce this number; it only hides it at import time.
SynapseKit
2 WINNER
httpx + pydantic. Auditable in 30 minutes.
LlamaIndex
89
Focused on RAG; fewer deps than LangChain but still heavyweight.
LangChain
267
Broad integration surface. Every provider, every vector store, every tool.
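The benchmark's exact counting method lives in the linked methodology, but a minimal stdlib-only sketch of counting transitive dependencies might look like this (a simplification: extras and environment-marker-gated requirements are skipped, so totals can differ from what pip actually resolves):

```python
import re
from importlib import metadata

def transitive_deps(dist_name, seen=None):
    """Recursively collect normalized names of all transitive
    dependencies of an installed distribution. Marker-gated
    requirements (anything with ';') are skipped for simplicity."""
    if seen is None:
        seen = set()
    try:
        requires = metadata.requires(dist_name) or []
    except metadata.PackageNotFoundError:
        return seen  # dep not installed in this environment
    for req in requires:
        if ";" in req:  # conditional dep (e.g. platform-specific)
            continue
        # strip version specifiers/extras to get the bare name
        name = re.split(r"[<>=!~\[\s(]", req, maxsplit=1)[0].strip()
        key = name.lower().replace("_", "-")
        if name and key not in seen:
            seen.add(key)
            transitive_deps(name, seen)
    return seen

print(len(transitive_deps("pip")))  # pip itself has no runtime deps
```

Running this against each framework in a fresh environment reproduces the counts above; tools like pipdeptree give a more faithful picture because they honor markers and extras.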
Import Time (median of 7 isolated runs)
Cold import in a fresh Python interpreter subprocess. No cached .pyc files, no shared state. Paid on every serverless cold start and Kaggle session restart.
Key Insight
LangChain wins on import time via lazy loading — most of its 267 packages don't load until you use them. LlamaIndex Core is a monolith: no deferred loading, full cost paid at import. 3.61s is 6.2x slower than LangChain.
LangChain
0.58s WINNER
Lazy loading architecture. Fast despite 267 deps.
SynapseKit
0.82s
No lazy loading needed — 2 deps means full load is fast.
LlamaIndex
3.61s
Monolithic core. Full cost at every import. 6.2x slower than LangChain.
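The cold-import measurement described above can be sketched with the standard library alone. Note this includes interpreter startup overhead in each sample; a stricter methodology would time an empty `-c ""` run as a baseline and subtract it:

```python
import statistics
import subprocess
import sys
import time

def cold_import_time(module, runs=7):
    """Median wall-clock time to import `module` in a fresh
    interpreter subprocess: no cached module state between runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(
            [sys.executable, "-c", f"import {module}"],
            check=True,
        )
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

print(f"json: {cold_import_time('json', runs=3):.3f}s")
```

Swap in `langchain`, `llama_index`, or `synapsekit` for `json` to reproduce the table; `.pyc` caches should be cleared (or the venv rebuilt) first for a true cold start.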
Idle RAM — RSS After Import
Resident Set Size after importing the framework. Measured before any LLM calls, any data loading, any work. This is the floor your process starts from.
Key Insight
LlamaIndex uses 3.5x the idle RAM of SynapseKit before doing any work. For AWS Lambda (512 MB default), LlamaIndex consumes 30% of your function's memory budget just at startup.
SynapseKit
44 MB WINNER
Minimal footprint. Leaves RAM for actual workloads.
LangChain
106 MB
Base modules still show up in RSS even with the lazy-import architecture.
LlamaIndex
154 MB
Heaviest idle footprint. Monolithic import loads everything upfront.
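A minimal way to reproduce the RSS numbers is to spawn a fresh interpreter, import the framework, and read the process's resident set size. The sketch below reads `/proc` and is therefore Linux-only; on macOS or Windows you would use psutil's `Process().memory_info().rss` instead:

```python
import subprocess
import sys

# Child script: import the named module, then report VmRSS in MB.
SNIPPET = """
import os, sys
__import__(sys.argv[1])
with open(f"/proc/{os.getpid()}/status") as f:
    for line in f:
        if line.startswith("VmRSS:"):
            print(int(line.split()[1]) // 1024)  # kB -> MB
"""

def idle_rss_mb(module):
    """RSS (MB) of a fresh interpreter right after importing
    `module`, before any work is done (Linux /proc only)."""
    out = subprocess.run(
        [sys.executable, "-c", SNIPPET, module],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

print(idle_rss_mb("json"))
```

Measuring in a subprocess matters: importing the framework into an already-running benchmark process would mix its footprint with the harness's own.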
pip Install Time (fresh environment)
Wall-clock time for a complete fresh install. What you pay on every CI run, Docker build, Kaggle session, and Lambda layer rebuild.
Key Insight
Lazy loading has zero effect on install time. LangChain still downloads and resolves all 267 packages: 47 seconds in every cold environment. On a team running 50 CI jobs/day, that's 39 minutes of install time daily, just for LangChain.
SynapseKit
8s WINNER
2 packages = near-instant install in any environment.
LlamaIndex
29s
89 deps. Faster than LangChain but still a meaningful CI cost.
LangChain
47s
267 deps. Lazy loading doesn't touch install time.
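A fresh-environment install can be timed by creating a throwaway venv and wall-clocking `pip install` into it. This sketch assumes a POSIX layout (`bin/pip`; Windows uses `Scripts\pip.exe`), and real numbers will vary with network speed and pip's wheel cache:

```python
import os
import subprocess
import sys
import tempfile
import time
import venv

def fresh_install_time(package):
    """Wall-clock seconds for `pip install <package>` into a
    brand-new virtual environment, approximating a CI cold start."""
    with tempfile.TemporaryDirectory() as tmp:
        venv.create(tmp, with_pip=True)
        pip = os.path.join(tmp, "bin", "pip")  # POSIX layout
        start = time.perf_counter()
        subprocess.run([pip, "install", "--quiet", package], check=True)
        return time.perf_counter() - start
```

For an honest cold-start figure, pass `--no-cache-dir` to pip as well, since a warm local wheel cache can flatter the slower frameworks.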
Package Disk Size (framework source only)
Size of the framework's own installed source files, excluding transitive deps. Relevant for Docker layer caching, Lambda deployment packages, and Kaggle dataset quotas.
SynapseKit
3.2 MB WINNER
Small enough to read the entire source in one sitting.
LangChain
27.8 MB
Broad integrations surface means large source footprint.
LlamaIndex
38.4 MB
Largest package source — advanced RAG features add up.
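The "framework source only" size (excluding transitive deps) can be approximated from each distribution's installed file records, again with only the standard library:

```python
import os
from importlib import metadata

def package_disk_mb(dist_name):
    """On-disk size (MB) of a distribution's own installed files,
    excluding transitive dependencies. Relies on the RECORD file
    shipped with the install; if it's missing, files is None."""
    dist = metadata.distribution(dist_name)
    total = 0
    for f in dist.files or []:
        path = dist.locate_file(f)
        if os.path.isfile(path):
            total += os.path.getsize(path)
    return total / (1024 * 1024)

print(f"{package_disk_mb('pip'):.1f} MB")
```

Note this counts one distribution at a time, so a namespace package split across several distributions (as some frameworks are) needs each piece summed separately.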
www.engineersofai.com — AI Letters #10 · Full methodology: kaggle.com/misternautiyal