Welcome to V2.0.0
A deep dive into building a distributed content management system using IPFS and Ethereum, exploring technical decisions, data modeling, and the journey from static sites to dynamic distributed infrastructure.

The Evolution from Static to Distributed
Most developers eventually reach a point where existing tools don't quite fit their vision. For me, that moment came when building the second iteration of Krondor.org—a decision that led me deep into the world of distributed web technologies, custom data modeling, and the fundamental question: what does content management look like when you own your infrastructure?
The journey from v1 to v2.0.0 taught me valuable lessons about technical decision-making, the trade-offs inherent in distributed systems, and why sometimes you need to build your own tools. Here's how I approached creating a distributed CMS that bridges the gap between static site convenience and dynamic content management—all while maintaining full control over my data and infrastructure.
Building on lessons from v1, I had three core insights that guided the architecture:
Single-language focus: If you can only work in one language effectively, lean into that constraint
Static + Dynamic hybrid: Static site development with tools like Leptos shows promise for building SPAs on IPFS
Content management needs: My use case would really benefit from a dynamic content publishing solution
These insights led to a clear technical vision: build a distributed CMS that combines the benefits of version control, decentralized storage, and dynamic content management.
Designing the Data Model
The heart of any CMS is its data model, and building for the distributed web required rethinking traditional approaches. Rather than relying on databases and server-side storage, I designed an object-based data model that treats all content as immutable, content-addressed objects linked through a central manifest.
Each object in the system follows a simple but powerful pattern:
<path> : {
  cid: <object_cid>,
  created_at: <timestamp>,
  updated_at: <timestamp>,
  metadata: <json>
}

This structure enforces several key principles (see the Rust sketch after this list):
Deterministic hashing: Every piece of content has a unique, verifiable identifier
Change tracking: Key-based lookups make it straightforward to detect and manage content changes
Arbitrary metadata: Applications can store and consume custom metadata without schema constraints
IPFS compatibility: Direct linking to content on the distributed web
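To make that shape concrete, here is a minimal Rust sketch of an object record. The field names mirror the schema above, but the serde derives and the String type for CIDs are illustrative assumptions, not the actual implementation:

use serde::{Deserialize, Serialize};

/// One content-addressed object, keyed by its path in the manifest.
/// Field names mirror the schema above; the derives are assumptions.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Object {
    /// IPFS content identifier of the raw data
    pub cid: String,
    /// Unix timestamps for creation and last update
    pub created_at: u64,
    pub updated_at: u64,
    /// Arbitrary, schema-free application metadata
    pub metadata: serde_json::Value,
}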
All objects are coordinated through a central Manifest that serves as the single source of truth for the entire site:
{
  objects: {
    <path> : {
      cid: <object_cid>,
      created_at: <timestamp>,
      updated_at: <timestamp>,
      metadata: <json>
    }
  },
  previous_cid: <previous_manifest_cid>,
  version: <version>
}

This manifest-based approach provides several powerful capabilities. When I publish a new version to IPFS, the entire site becomes addressable by a single Content Identifier (CID). Anyone with that CID can not only access the current content but also trace the entire history through the previous_cid chain.
For more persistent addressing, I can publish the manifest CID to Ethereum, creating a cryptographically signed pointer that's globally addressable and verifiable. This gives me the best of both worlds: the performance and decentralization of IPFS with the permanence and discoverability of blockchain addressing.
Implementation Decisions and Trade-offs
Building a distributed CMS means making trade-offs that don't exist in traditional web development. One of the most significant decisions was choosing not to use UnixFS, despite it seeming like the obvious choice for IPFS-based content storage.
Why I Avoided UnixFS
UnixFS would have made linking to content over IPFS gateways much simpler, but I had three concerns that led me to roll my own approach:
Metadata limitations: I couldn't extend UnixFS to support the arbitrary metadata system I wanted
Non-deterministic hashing: MFS hashes aren't deterministic, which conflicted with my versioning requirements
Complexity overhead: I didn't want to deal with traversing UnixFS data structures when my use case was simpler
This decision created some headaches when building the web application, particularly around gateway linking, but it gave me complete control over the data model and ensured deterministic, verifiable content addressing.
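To illustrate the determinism requirement: the same bytes must always map to the same identifier, no matter when or where they are hashed. A minimal sketch using the sha2 and hex crates (a real CID additionally wraps this digest in multihash/multicodec framing):

use sha2::{Digest, Sha256};

/// Identical content always yields the identical digest; this is the
/// property the custom data model relies on and that MFS, per the
/// concerns above, couldn't guarantee.
fn content_hash(bytes: &[u8]) -> String {
    hex::encode(Sha256::digest(bytes))
}

fn main() {
    let a = content_hash(b"hello distributed web");
    let b = content_hash(b"hello distributed web");
    assert_eq!(a, b); // deterministic: same content, same identifier
}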
Ethereum as a Distributed Database
Using Ethereum as a pointer to IPFS content creates an interesting hybrid architecture. The blockchain serves as a cryptographically signed, globally addressable index, while IPFS handles the actual content delivery. This separates concerns beautifully:
Discoverability: Anyone can find the latest version by checking the smart contract
Authenticity: Updates are cryptographically signed by my known address
Performance: Content is delivered directly from IPFS, not constrained by blockchain throughput
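Here is roughly what the discovery path looks like using the ethers crate. The RootCid name comes from the CLI below, but the read() getter, the RPC endpoint, and the contract address are assumptions for illustration:

use std::sync::Arc;
use ethers::prelude::*;

// Generate typed bindings from a human-readable ABI. The read()
// signature is an assumption about the RootCid contract's interface.
abigen!(
    RootCid,
    r#"[function read() external view returns (string)]"#
);

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Any Ethereum JSON-RPC endpoint works here (placeholder URL).
    let provider = Provider::<Http>::try_from("https://eth.example.com")?;
    let address: Address = "0x0000000000000000000000000000000000000000".parse()?;
    let contract = RootCid::new(address, Arc::new(provider));

    // Discover the latest manifest CID, then fetch content from IPFS.
    let manifest_cid: String = contract.read().call().await?;
    println!("latest manifest: {manifest_cid}");
    Ok(())
}

Publishing is the same path in reverse: signing a transaction that writes the new manifest CID, which is where the --admin-key flag in the CLI below comes in.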
However, this approach also revealed the practical limitations of mainnet Ethereum. Gas fees make frequent updates expensive, leading me to consider alternatives like IPNS or layer 2 solutions for production deployments.
Git-Inspired CLI Design
Creating a user-friendly interface for managing distributed content was crucial to making this system practical. I designed the CLI to feel familiar to developers by borrowing heavily from Git's workflow—after all, both systems deal with content versioning and distributed state management.
The command structure reflects the core operations needed for distributed content management:
# Initialize a new space to pull and stage changes
krondor-org init
# Pull the latest content from IPFS and update local staging
krondor-org pull
# Stage changes from the current directory
krondor-org stage
# Tag files with metadata stored in the manifest
krondor-org tag --name audio --path audio/sample.mp3 \
--value '{"title": "Audio Sample", "project": "demo"}'
# Push staged changes to IPFS and update the RootCid contract
krondor-org --admin-key <PRIVATE_KEY> push

This workflow provides several benefits over traditional CMS approaches:
Offline editing: Content can be created and staged locally without network connectivity
Atomic updates: All changes are published as a single immutable snapshot
Metadata flexibility: Arbitrary metadata can be attached to any content without schema constraints
Version history: Complete audit trail through the previous_cid chain
The beauty of this approach is that it treats content publishing as a versioned deployment process. Each 'push' creates a new immutable state of the entire site that can be referenced, rolled back to, or branched from—giving content management the same powerful primitives that developers expect from version control.
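Internally, a push boils down to three steps, sketched below by continuing the earlier Manifest example. The ipfs_add and update_root_cid parameters are hypothetical helpers standing in for the IPFS client and the contract transaction:

/// Publish staged changes as one immutable snapshot.
fn push(
    mut manifest: Manifest,
    previous_manifest_cid: String,
    ipfs_add: impl Fn(&[u8]) -> String,
    update_root_cid: impl Fn(&str),
) {
    // 1. Link the new snapshot to its predecessor for the audit trail.
    manifest.previous_cid = Some(previous_manifest_cid);

    // 2. Serialize deterministically and publish the whole site at once.
    let bytes = serde_json::to_vec(&manifest).expect("manifest serializes");
    let new_cid = ipfs_add(&bytes);

    // 3. Point the on-chain RootCid at the new snapshot (atomic cutover).
    update_root_cid(&new_cid);
}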
Lessons Learned and Future Directions
Building this distributed CMS taught me several important lessons about the current state of decentralized web technologies and the practical challenges of moving beyond traditional hosting models.
The Economics of Decentralization
One of the most sobering realizations was that gas fees on Ethereum mainnet make frequent content updates prohibitively expensive. For a blog that might publish several posts per week, the cost of updating the root pointer for each publication quickly becomes unsustainable.
This led me to explore alternative addressing solutions:
IPNS (InterPlanetary Name System): Provides mutable pointers to IPFS content without blockchain costs
Layer 2 solutions: Dramatically reduce transaction costs while maintaining blockchain benefits
Batch updates: Accumulate multiple content changes before publishing to reduce per-update costs
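The batching strategy can be sketched in a few lines: accumulate object changes locally and spend gas on the root pointer only once per batch. The threshold and the publish callback are illustrative assumptions:

/// Accumulate staged objects and publish one on-chain update per batch,
/// amortizing gas across many content changes.
struct BatchPublisher {
    staged: Vec<Object>,
    threshold: usize,
}

impl BatchPublisher {
    fn stage(&mut self, object: Object, publish: impl FnOnce(&[Object])) {
        self.staged.push(object);
        if self.staged.len() >= self.threshold {
            // One root-pointer transaction covers the whole batch.
            publish(self.staged.as_slice());
            self.staged.clear();
        }
    }
}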
Infrastructure Dependencies
While the content model achieves true decentralization, I still relied on services like Fleek for hosting the web interface. This created an interesting contradiction: distributed content served through centralized infrastructure.
The experience highlighted the importance of controlling the entire stack when building truly distributed applications. Running my own IPFS node and hosting infrastructure became not just an option, but a necessity for maintaining the system's decentralized principles.
Managing Complexity
The decision to avoid UnixFS, while giving me complete control over the data model, introduced complexity in other areas. Building custom tooling for IPFS gateway integration and content addressing meant more code to maintain and more potential failure points.
This taught me an important lesson about technical trade-offs: sometimes accepting the limitations of existing standards is worth the reduction in complexity, even if it means giving up some control or idealized architecture.
Building for the Distributed Web
Building a distributed CMS from scratch was both a technical challenge and a philosophical statement about data ownership and infrastructure independence. While the experiment revealed practical limitations—from Ethereum's gas costs to the complexity of custom tooling—it also demonstrated the feasibility of creating truly user-owned content management systems.
The architecture I developed for Krondor.org v2.0.0 represents just one approach to distributed content management, but the lessons learned extend far beyond this specific implementation. The tension between decentralization ideals and practical usability constraints will likely define the next generation of web infrastructure.
As I continue building distributed systems—from permissionlessly incentivized storage networks to rapid development frameworks—the principles learned from this distributed CMS continue to inform my technical decisions. Sometimes the best path forward isn't about achieving perfect decentralization, but finding the right balance between user control, technical feasibility, and practical usability.
The distributed web is still evolving, and projects like this help us understand not just what's possible today, but what needs to change for truly decentralized content management to become mainstream. Each experiment brings us closer to a web where users truly own their data and infrastructure.