Getting Started
A JavaScript/TypeScript SDK for interacting with Filecoin Synapse - a smart-contract based marketplace for storage and other services in the Filecoin ecosystem.
⚠️ BREAKING CHANGES in v0.24.0: Major terminology updates have been introduced. Pandora is now Warm Storage, Proof Sets are now Data Sets, Roots are now Pieces and Storage Providers are now Service Providers. Additionally, the storage API has been improved with a new context-based architecture. See the Migration Guide section for migration instructions.
The Synapse SDK provides an interface to Filecoin’s decentralized services ecosystem:
- 🚀 Recommended Usage: Use the high-level Synapse class for a streamlined experience with sensible defaults
- 🔧 Composable Components: Import and use individual components for fine-grained control over specific functionality
The SDK handles all the complexity of blockchain interactions, provider selection, and data management, so you can focus on building your application.
Installation
pnpm install @filoz/synapse-sdk ethers

Note: ethers v6 is a peer dependency and must be installed separately.
Project Structure
This project uses a monorepo structure managed by pnpm workspaces. Core libraries reside in the packages/ directory, examples in examples/, and this documentation site in docs/.
- packages/
  - synapse-sdk/: Core Synapse SDK library
    - …
  - synapse-sdk-react/: React hooks and context
    - …
- examples/: Usage examples and demos
  - …
- docs/: This documentation site
  - …
- pnpm-workspace.yaml: Workspace configuration
- package.json: Root package file
Key Concepts
Before diving into the code, understand these fundamental concepts:
- Service Contracts: Smart contracts that manage specific services (like storage). Currently, Warm Storage is the primary service contract that handles storage operations and payment validation.
- Payment Rails: Automated payment streams between clients and service providers, managed by the Payments contract. When you create a data set in Warm Storage, it automatically creates corresponding payment rails.
- Data Sets: Collections of stored data managed by Warm Storage. Each data set has an associated payment rail that handles the ongoing storage payments.
- Pieces: Individual units of data identified by PieceCID (content-addressed identifiers). Multiple pieces can be added to a data set for storage.
- PDP (Proof of Data Possession): The cryptographic protocol that verifies storage providers are actually storing the data they claim to store. Providers must periodically prove they possess the data.
- Validators: Service contracts (like Warm Storage) act as validators for payment settlements, ensuring services are delivered before payments are released.
The Synapse class provides a complete, easy-to-use interface for interacting with Filecoin storage services.
Quick Start
Get started with storage in just a few lines of code:

import { Synapse, RPC_URLS } from '@filoz/synapse-sdk'

// Initialize SDK
const synapse = await Synapse.create({
  privateKey: '0x...',
  rpcURL: RPC_URLS.calibration.websocket // Use calibration testnet for testing
})

// Upload data; this auto-selects a provider and creates a data set if needed
// (your first upload will take longer than subsequent uploads due to setup)
const uploadResult = await synapse.storage.upload(
  new TextEncoder().encode('🚀 Welcome to decentralized storage on Filecoin! Your data is safe here. 🌍')
)
console.log(`Upload complete! PieceCID: ${uploadResult.pieceCid}`)

// Download data
const data = await synapse.storage.download(uploadResult.pieceCid)
console.log('Retrieved:', new TextDecoder().decode(data))
Connection Management
When using WebSocket connections (recommended for better performance), it's important to properly clean up when your application is done:

// When you're done with the SDK, close the connection
const provider = synapse.getProvider()
if (provider && typeof provider.destroy === 'function') {
  await provider.destroy()
}
Payment Setup
Before uploading data, you'll need to deposit funds and approve the storage service:

import { TOKENS, CONTRACT_ADDRESSES } from '@filoz/synapse-sdk'
import { ethers } from 'ethers'

// 1. Deposit USDFC tokens (one-time setup)
const amount = ethers.parseUnits('100', 18) // 100 USDFC
await synapse.payments.deposit(amount)

// 2. Approve the Warm Storage service contract for automated payments
// Warm Storage acts as both the storage coordinator and payment validator
// The SDK automatically uses the correct service address for your network
const warmStorageAddress = await synapse.getWarmStorageAddress()
await synapse.payments.approveService(
  warmStorageAddress,
  ethers.parseUnits('10', 18),   // Rate allowance: 10 USDFC per epoch
  ethers.parseUnits('1000', 18), // Lockup allowance: 1000 USDFC total
  86400n                         // Max lockup period: 30 days (in epochs)
)

// Now you're ready to use storage!
With MetaMask
import { Synapse } from '@filoz/synapse-sdk'
import { ethers } from 'ethers'

// Connect to MetaMask
const provider = new ethers.BrowserProvider(window.ethereum)
const synapse = await Synapse.create({ provider })

// Start uploading immediately
const data = new TextEncoder().encode('🚀🚀 Hello Filecoin! This is decentralized storage in action.')
const result = await synapse.storage.upload(data)
console.log(`Stored with PieceCID: ${result.pieceCid}`)
Advanced Payment Control
For users who need fine-grained control over token approvals:

import { Synapse, TOKENS } from '@filoz/synapse-sdk'

const synapse = await Synapse.create({ provider })

// Check current allowance
const paymentsAddress = await synapse.getPaymentsAddress()
const currentAllowance = await synapse.payments.allowance(paymentsAddress)

// Approve only if needed
if (currentAllowance < requiredAmount) {
  const approveTx = await synapse.payments.approve(paymentsAddress, requiredAmount)
  console.log(`Approval transaction: ${approveTx.hash}`)
  await approveTx.wait() // Wait for approval before depositing
}

// Now deposit with optional callbacks for visibility
const depositTx = await synapse.payments.deposit(requiredAmount, TOKENS.USDFC, {
  onAllowanceCheck: (current, required) => {
    console.log(`Current allowance: ${current}, Required: ${required}`)
  },
  onApprovalTransaction: (tx) => {
    console.log(`Auto-approval sent: ${tx.hash}`)
  },
  onDepositStarting: () => {
    console.log('Starting deposit transaction...')
  }
})
console.log(`Deposit transaction: ${depositTx.hash}`)
await depositTx.wait()

// Service operator approvals (required before creating data sets)
const warmStorageAddress = await synapse.getWarmStorageAddress()

// Approve service to create payment rails on your behalf
const serviceApproveTx = await synapse.payments.approveService(
  warmStorageAddress,
  // 10 USDFC per epoch rate allowance
  ethers.parseUnits('10', synapse.payments.decimals(TOKENS.USDFC)),
  // 1000 USDFC lockup allowance
  ethers.parseUnits('1000', synapse.payments.decimals(TOKENS.USDFC)),
  // 30 days max lockup period (in epochs)
  86400n
)
console.log(`Service approval transaction: ${serviceApproveTx.hash}`)
await serviceApproveTx.wait()

// Check service approval status
const serviceStatus = await synapse.payments.serviceApproval(warmStorageAddress)
console.log('Service approved:', serviceStatus.isApproved)
console.log('Rate allowance:', serviceStatus.rateAllowance)
console.log('Rate used:', serviceStatus.rateUsed)
console.log('Max lockup period:', serviceStatus.maxLockupPeriod)

// Revoke service if needed
const revokeTx = await synapse.payments.revokeService(warmStorageAddress)
console.log(`Revoke transaction: ${revokeTx.hash}`)
await revokeTx.wait()
Rail Settlement
Payment rails are continuous payment streams between clients and service providers. These rails accumulate payment obligations over time that need to be settled periodically to transfer the actual tokens.
Key Concepts:
- Rails: Unidirectional payment streams with a defined rate per epoch (30 seconds)
- Settlement: The process of executing accumulated payments and transferring tokens
- Network Fee: 0.0013 FIL burned (destroyed) per settlement, accruing value to the network through supply reduction
- Commission: Optional percentage taken by the operator (Warm Storage or other service contract)
For a comprehensive guide, see Rails & Settlement Guide.
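Since epochs are 30 seconds long, a little arithmetic helps when sizing rate allowances and lockup periods. The following sketch uses plain numbers only (no SDK calls); the figures mirror the values used in the payment examples above:

// 30-second epochs
const EPOCHS_PER_DAY = (24 * 60 * 60) / 30 // 2,880 epochs per day

// The 86400-epoch max lockup period used in the payment examples:
const lockupDays = 86400 / EPOCHS_PER_DAY // 30 days

// A rate allowance of 10 USDFC per epoch caps spending at roughly:
const maxUsdfcPerDay = 10 * EPOCHS_PER_DAY // 28,800 USDFC per day
console.log({ EPOCHS_PER_DAY, lockupDays, maxUsdfcPerDay })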
Viewing Rails
// Get rails where you are the payer (client)
const payerRails = await synapse.payments.getRailsAsPayer()
for (const rail of payerRails) {
  console.log(`Rail ${rail.railId}: ${rail.isTerminated ? 'Terminated' : 'Active'}`)
  if (rail.isTerminated) {
    console.log(`  Ended at epoch: ${rail.endEpoch}`)
  }
}

// Get rails where you are the payee (service provider)
const payeeRails = await synapse.payments.getRailsAsPayee()
console.log(`Receiving payments from ${payeeRails.length} rails`)
Settling Rails
Settlement converts accumulated payment obligations into actual token transfers. Each settlement requires a network fee that is burned, compensating the Filecoin network for processing the settlement:

// The settlement fee is burned to the network (not paid to any party)
import { SETTLEMENT_FEE } from '@filoz/synapse-sdk'
console.log(`Settlement fee: ${ethers.formatEther(SETTLEMENT_FEE)} FIL`)
// This fee accrues value to the network by reducing FIL supply

// Preview settlement amounts before executing
const preview = await synapse.payments.getSettlementAmounts(railId)
console.log(`Would settle: ${ethers.formatUnits(preview.totalSettledAmount, 18)} USDFC`)
console.log(`Payee receives: ${ethers.formatUnits(preview.totalNetPayeeAmount, 18)} USDFC`)
console.log(`Commission: ${ethers.formatUnits(preview.totalOperatorCommission, 18)} USDFC`)

// Settle a rail (automatically includes settlement fee)
const settleTx = await synapse.payments.settle(railId)
console.log(`Settlement transaction: ${settleTx.hash}`)
await settleTx.wait()

// For terminated rails, use the specialized method
if (rail.isTerminated) {
  const terminatedTx = await synapse.payments.settleTerminatedRail(railId)
  await terminatedTx.wait()
}
Settlement Example
// Complete example: Find and settle all rails where you're the payer
const rails = await synapse.payments.getRailsAsPayer()

for (const rail of rails) {
  if (!rail.isTerminated) {
    // Check if there's anything to settle
    const preview = await synapse.payments.getSettlementAmounts(rail.railId)

    if (preview.totalSettledAmount > 0n) {
      console.log(`Settling Rail ${rail.railId}...`)
      const tx = await synapse.payments.settle(rail.railId)
      await tx.wait()
      console.log(`✅ Settled ${ethers.formatUnits(preview.totalSettledAmount, 18)} USDFC`)
    }
  }
}
API Reference
Constructor Options
interface SynapseOptions {
  // Wallet Configuration (exactly one required)
  privateKey?: string              // Private key for signing
  provider?: ethers.Provider       // Browser provider (MetaMask, etc.)
  signer?: ethers.Signer           // External signer

  // Network Configuration
  rpcURL?: string                  // RPC endpoint URL
  authorization?: string           // Authorization header (e.g., 'Bearer TOKEN')

  // Advanced Configuration
  withCDN?: boolean                // Enable CDN for retrievals (sets a default for all new storage operations)
  pieceRetriever?: PieceRetriever  // Optional override for a custom retrieval stack
  disableNonceManager?: boolean    // Disable automatic nonce management
  warmStorageAddress?: string      // Override Warm Storage service contract address (all other addresses are discovered from this contract)

  // Subgraph Integration (optional, provide only one of these options)
  subgraphService?: SubgraphRetrievalService // Custom implementation for provider discovery
  subgraphConfig?: SubgraphConfig            // Configuration for the default SubgraphService
}

interface SubgraphConfig {
  endpoint?: string // Subgraph endpoint
  goldsky?: {
    projectId: string
    subgraphName: string
    version: string
  } // Used if endpoint is not provided
  apiKey?: string   // Optional API key for authenticated subgraph access
}
Synapse Methods
Instance Properties:
- payments - PaymentsService instance for token operations (see Payment Methods below)
- storage - StorageManager instance for all storage operations (see Storage Operations below)
Core Operations:
- preflightUpload(dataSize, options?) - Check if an upload is possible before attempting it; returns preflight info with cost estimates and allowance check (with or without CDN)
- getProviderInfo(providerAddress) - Get detailed information about a service provider
- getNetwork() - Get the network this instance is connected to ('mainnet' or 'calibration')
- getChainId() - Get the numeric chain ID (314 for mainnet, 314159 for calibration)
- getProvider() - Get the underlying ethers Provider instance (useful for cleanup - see Connection Management)
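The sketch below ties these methods together. It assumes an already-initialized synapse instance, a placeholder provider address, and that the preflight result has the same shape as the context-level preflightUpload example shown later in this document:

// Network inspection
console.log('Network:', await synapse.getNetwork())  // 'mainnet' or 'calibration'
console.log('Chain ID:', await synapse.getChainId()) // 314 or 314159

// Estimate costs and check allowances for a ~10 MiB upload
const preflight = await synapse.preflightUpload(10 * 1024 * 1024)
console.log('Estimated cost:', preflight.estimatedCost)
console.log('Allowance sufficient:', preflight.allowanceCheck.sufficient)

// Look up a specific service provider
const providerInfo = await synapse.getProviderInfo('0x...')
console.log('Provider PDP URL:', providerInfo.pdpUrl)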
Synapse.storage Methods
Context Management:
- createContext(options?) - Create a storage context for a specific provider + data set (returns StorageContext)
- upload(data, options?) - Upload data using an auto-managed context or route to a specific context
- download(pieceCid, options?) - Download from any available provider (SP-agnostic)
Upload Options:
// Simple upload (auto-creates/reuses context)
await synapse.storage.upload(data)

// Upload with specific provider
await synapse.storage.upload(data, { providerAddress: '0x...' })

// Upload with specific context (current or future multi-context)
await synapse.storage.upload(data, { context: storageContext })
Download Options:
// Download from any available provider
await synapse.storage.download(pieceCid)

// Prefer specific provider (still falls back if unavailable)
await synapse.storage.download(pieceCid, { providerAddress: '0x...' })

// Download through specific context
await synapse.storage.download(pieceCid, { context: storageContext })
Synapse.payments Methods
Balance Operations:
- walletBalance(token?) - Get wallet balance (FIL or USDFC)
- balance() - Get available USDFC balance in the payments contract (accounting for lockups)
- accountInfo() - Get detailed USDFC account info including funds, lockup details, and available balance
- decimals() - Get token decimals (always returns 18)
Note: Currently only the USDFC token is supported for payments contract operations. FIL is also supported for walletBalance().
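A short sketch of the balance operations, assuming an initialized synapse instance; the no-argument walletBalance() call is assumed to return the FIL balance:

import { TOKENS } from '@filoz/synapse-sdk'
import { ethers } from 'ethers'

const filBalance = await synapse.payments.walletBalance()               // assumed: FIL by default
const usdfcWallet = await synapse.payments.walletBalance(TOKENS.USDFC)  // USDFC in the wallet
const usdfcAvailable = await synapse.payments.balance()                 // USDFC available in the payments contract

const decimals = synapse.payments.decimals() // always 18
console.log('FIL in wallet:', ethers.formatEther(filBalance))
console.log('USDFC in wallet:', ethers.formatUnits(usdfcWallet, decimals))
console.log('USDFC available for storage:', ethers.formatUnits(usdfcAvailable, decimals))

// Detailed account info (funds, lockup details, available balance)
const account = await synapse.payments.accountInfo()
console.log(account)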
Token Operations:
- deposit(amount, token?, callbacks?) - Deposit funds to the payments contract (handles approval automatically), returns TransactionResponse
- withdraw(amount, token?) - Withdraw funds from the payments contract, returns TransactionResponse
- approve(spender, amount, token?) - Approve token spending (for manual control), returns TransactionResponse
- allowance(spender, token?) - Check current token allowance
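withdraw is the only token operation not shown in the earlier examples; a minimal sketch, assuming USDFC (the default token) and 18 decimals:

import { ethers } from 'ethers'

// Withdraw 25 USDFC from the payments contract back to the wallet
const withdrawTx = await synapse.payments.withdraw(ethers.parseUnits('25', 18))
console.log(`Withdraw transaction: ${withdrawTx.hash}`)
await withdrawTx.wait()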
Service Approvals:
- approveService(service, rateAllowance, lockupAllowance, maxLockupPeriod, token?) - Approve a service contract as operator, returns TransactionResponse
- revokeService(service, token?) - Revoke service operator approval, returns TransactionResponse
- serviceApproval(service, token?) - Check service approval status and allowances
Rail Settlement:
- getRailsAsPayer(token?) - Get all payment rails where the wallet is the payer, returns RailInfo[] with {railId, isTerminated, endEpoch} (endEpoch is 0 for active rails)
- getRailsAsPayee(token?) - Get all payment rails where the wallet is the payee (recipient), returns RailInfo[]
- getRail(railId) - Get detailed rail information, returns {token, from, to, operator, validator, paymentRate, lockupPeriod, lockupFixed, settledUpTo, endEpoch, commissionRateBps, serviceFeeRecipient}. Throws if the rail doesn't exist.
- settle(railId, untilEpoch?) - Settle a payment rail up to the specified epoch (must be <= current epoch; defaults to current if not specified), automatically includes the settlement fee (0.0013 FIL), returns TransactionResponse
- settleTerminatedRail(railId) - Emergency settlement for terminated rails only - bypasses Warm Storage (or other validator) validation to ensure payment even if the validator contract is buggy (pays in full), returns TransactionResponse
- getSettlementAmounts(railId, untilEpoch?) - Preview settlement amounts without executing (untilEpoch must be <= current epoch; defaults to current), returns SettlementResult with {totalSettledAmount, totalNetPayeeAmount, totalOperatorCommission, finalSettledEpoch, note}
- settleAuto(railId, untilEpoch?) - Automatically detect rail status and settle appropriately (untilEpoch must be <= current epoch for active rails)
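getRail and settleAuto are not covered by the settlement examples above. A hedged sketch follows, assuming railId comes from getRailsAsPayer() and that settleAuto returns a TransactionResponse like the other settlement methods:

// Inspect a single rail in detail (throws if the rail doesn't exist)
const rail = await synapse.payments.getRail(railId)
console.log('Payment rate per epoch:', rail.paymentRate)
console.log('Settled up to epoch:', rail.settledUpTo)
console.log('Validator contract:', rail.validator)

// Detect the rail's status and settle it appropriately
const autoTx = await synapse.payments.settleAuto(railId)
await autoTx.wait()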
Storage Context Methods
A StorageContext (previously StorageService) represents a connection to a specific service provider and data set. Create one with synapse.storage.createContext().
By using a StorageContext directly, you can efficiently work with a specific service provider and data set for both uploads and downloads.
Instance Properties:
- dataSetId - The data set ID being used (string)
- serviceProvider - The service provider address (string)
Core Storage Operations:
- upload(data, callbacks?) - Upload data to this context's service provider, returns UploadResult with pieceCid, size, and pieceId
- download(pieceCid, options?) - Download data from this context's specific provider, returns Uint8Array
- preflightUpload(dataSize) - Check if an upload is possible before attempting it, returns preflight info with cost estimates and allowance check
Information & Status:
- getProviderInfo() - Get detailed information about the selected service provider
- getDataSetPieces() - Get the list of piece CIDs in the data set by querying the provider
- hasPiece(pieceCid) - Check if a piece exists on this service provider (returns boolean)
- pieceStatus(pieceCid) - Get the status of a piece including data set timing information
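getProviderInfo and hasPiece are not used in the walkthroughs below; here is a small sketch, assuming an existing context and a pieceCid from an earlier upload:

// Which provider is this context bound to?
const providerInfo = await context.getProviderInfo()
console.log('Provider address:', context.serviceProvider)
console.log('Provider PDP URL:', providerInfo.pdpUrl)

// Cheap existence check before downloading from this provider
if (await context.hasPiece(pieceCid)) {
  const bytes = await context.download(pieceCid)
  console.log(`Downloaded ${bytes.length} bytes`)
} else {
  console.log('Piece not stored on this provider')
}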
Storage Context Creation
The SDK automatically handles all the complexity of storage setup for you - selecting providers, managing data sets, and coordinating with the blockchain. You have two options:
- Simple mode: Just use synapse.storage.upload() directly - the SDK auto-manages contexts for you.
- Explicit mode: Create a context with synapse.storage.createContext() for more control. Contexts can be used directly or passed in the options to synapse.storage.upload() and synapse.storage.download().
Behind the scenes, the process may be:
- Fast (< 1 second): When reusing existing data sets that match your requirements (including all metadata)
- Slower (2-5 minutes): When setting up new blockchain infrastructure (i.e. creating a brand new data set)
Basic Usage
// Option 1: Auto-managed context (simplest)
await synapse.storage.upload(data) // Context created/reused automatically

// Option 2: Explicit context creation
const context = await synapse.storage.createContext()
await context.upload(data) // Upload to this specific context

// Option 3: Context with metadata requirements
const contextWithMetadata = await synapse.storage.createContext({
  metadata: {
    withIPFSIndexing: '',
    category: 'videos'
  }
})
// This will reuse any existing data set that has both of these metadata entries,
// or create a new one if none match
// Note: the `withCDN: true` option is an alias for { withCDN: '' } in metadata.
Metadata Limits
Metadata is subject to the following contract-enforced limits:
Data Set Metadata:
- Maximum of 10 key-value pairs per data set
- Keys: Maximum 32 characters
- Values: Maximum 128 characters
Piece Metadata:
- Maximum of 5 key-value pairs per piece
- Keys: Maximum 32 characters
- Values: Maximum 128 characters
These limits are enforced by the blockchain contracts. The SDK will validate metadata before submission and throw descriptive errors if limits are exceeded.
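An illustrative sketch that stays within these limits; the keys and values are made-up examples, not required names:

// Data set metadata: at most 10 pairs, keys up to 32 chars, values up to 128 chars
const limitedContext = await synapse.storage.createContext({
  metadata: {
    application: 'photo-backup',
    environment: 'production'
  }
})

// Piece metadata: at most 5 pairs per piece, same key/value length limits
await limitedContext.upload(data, {
  metadata: { originalFilename: 'holiday-2024.zip' }
})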
Advanced Usage with Callbacks
Monitor the creation process with detailed callbacks:

const context = await synapse.storage.createContext({
  providerAddress: '0x...', // Optional: use specific provider address
  withCDN: true,            // Optional: enable CDN for faster downloads
  callbacks: {
    // Called when a provider is selected
    onProviderSelected: (provider) => {
      console.log(`Selected provider: ${provider.owner}`)
      console.log(`  PDP URL: ${provider.pdpUrl}`)
    },

    // Called when data set is found or created
    onDataSetResolved: (info) => {
      if (info.isExisting) {
        console.log(`Using existing data set: ${info.dataSetId}`)
      } else {
        console.log(`Created new data set: ${info.dataSetId}`)
      }
    },

    // Only called when creating a new data set
    onDataSetCreationStarted: (transaction, statusUrl) => {
      console.log(`Creation transaction: ${transaction.hash}`)
      if (statusUrl) {
        console.log(`Monitor status at: ${statusUrl}`)
      }
    },

    // Progress updates during data set creation
    onDataSetCreationProgress: (status) => {
      const elapsed = Math.round(status.elapsedMs / 1000)
      console.log(`[${elapsed}s] Mining: ${status.transactionMined}, Live: ${status.dataSetLive}`)
    }
  }
})
Creation Options
interface StorageServiceOptions {
  providerId?: number                   // Specific provider ID to use
  providerAddress?: string              // Specific provider address to use
  dataSetId?: number                    // Specific data set ID to use
  withCDN?: boolean                     // Enable CDN services
  metadata?: Record<string, string>     // Metadata requirements for data set selection/creation
  callbacks?: StorageCreationCallbacks  // Progress callbacks
  uploadBatchSize?: number              // Max uploads per batch (default: 32, min: 1)
}

// Note: The withCDN option follows an inheritance pattern:
// 1. Synapse instance default (set during creation)
// 2. StorageContext override (set during createContext)
// 3. Per-method override (set during download)

// Data Set Selection: When creating a context, the SDK attempts to reuse existing
// data sets that match ALL your requirements. A data set matches if it contains
// all requested metadata entries with matching values (order doesn't matter).
// The data set may have additional metadata beyond what you request.
Storage Context Properties
Once created, the storage context provides access to:

// The data set ID being used
console.log(`Data set ID: ${context.dataSetId}`)

// The service provider address
console.log(`Service provider: ${context.serviceProvider}`)
Storage Context Methods
Preflight Upload
Check if an upload is possible before attempting it:

const preflight = await context.preflightUpload(dataSize)
console.log('Estimated costs:', preflight.estimatedCost)
console.log('Allowance sufficient:', preflight.allowanceCheck.sufficient)
Upload and Download
Upload and download data with the storage context:

// Upload with optional progress callbacks and piece-specific metadata
const result = await context.upload(data, {
  // Optional: Add metadata specific to this piece (key-value pairs)
  metadata: {
    snapshotVersion: 'v2.1.0',
    generator: 'backup-system'
  },
  onUploadComplete: (pieceCid) => {
    console.log(`Upload complete! PieceCID: ${pieceCid}`)
  },
  onPieceAdded: (transaction) => {
    // For new servers: transaction object with details
    // For old servers: undefined (backward compatible)
    if (transaction) {
      console.log(`Transaction confirmed: ${transaction.hash}`)
    } else {
      console.log('Data added to data set (legacy server)')
    }
  },
  onPieceConfirmed: (pieceIds) => {
    // Only called for new servers with transaction tracking
    console.log(`Piece IDs assigned: ${pieceIds.join(', ')}`)
  }
})

// Download data from this context's specific provider
const downloaded = await context.download(result.pieceCid)

// Get the list of piece CIDs in the current data set by querying the provider
const pieceCids = await context.getDataSetPieces()
console.log(`Piece CIDs: ${pieceCids.map(cid => cid.toString()).join(', ')}`)

// Check the status of a piece on the service provider
const status = await context.pieceStatus(result.pieceCid)
console.log(`Piece exists: ${status.exists}`)
console.log(`Data set last proven: ${status.dataSetLastProven}`)
console.log(`Data set next proof due: ${status.dataSetNextProofDue}`)
Size Constraints
The storage service enforces the following size limits for uploads:
- Minimum: 127 bytes
- Maximum: 200 MiB (209,715,200 bytes)
Attempting to upload data outside these limits will result in an error.
Note: these limits are temporary during the current pre-v1 period and will eventually be extended. You can read more in this issue thread.
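A defensive sketch that validates these documented limits client-side before uploading; the constants come from the limits listed above, not from any SDK export, and data is assumed to be the Uint8Array you are about to upload:

const MIN_UPLOAD_BYTES = 127
const MAX_UPLOAD_BYTES = 200 * 1024 * 1024 // 209,715,200 bytes (200 MiB)

function assertUploadable(data: Uint8Array): void {
  if (data.length < MIN_UPLOAD_BYTES || data.length > MAX_UPLOAD_BYTES) {
    throw new Error(
      `Upload size ${data.length} bytes is outside the allowed range ` +
      `[${MIN_UPLOAD_BYTES}, ${MAX_UPLOAD_BYTES}] bytes`
    )
  }
}

assertUploadable(data)
await synapse.storage.upload(data)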
Efficient Batch Uploads
When uploading multiple files, the SDK automatically batches operations for efficiency. Due to blockchain transaction ordering requirements, uploads are processed sequentially. To maximize efficiency:
// Efficient: Start all uploads without await - they'll be batched automatically
const uploads = []
for (const data of dataArray) {
  uploads.push(context.upload(data)) // No await here
}
const results = await Promise.all(uploads)

// Less efficient: Awaiting each upload forces sequential processing
for (const data of dataArray) {
  await context.upload(data) // Each waits for the previous to complete
}
The SDK batches up to 32 uploads by default (configurable via uploadBatchSize). If you have more than 32 files, they'll be processed in multiple batches automatically.
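If you routinely push large batches, the uploadBatchSize option from Creation Options can be raised when creating a context; the value 64 below is purely illustrative and dataArray is reused from the example above:

// Raise the per-batch limit from the default of 32
const batchContext = await synapse.storage.createContext({ uploadBatchSize: 64 })
const results = await Promise.all(dataArray.map((data) => batchContext.upload(data)))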
Storage Information
Get comprehensive information about the storage service:

// Get storage service info including pricing and providers
const info = await synapse.getStorageInfo()
console.log('Price per TiB/month:', info.pricing.noCDN.perTiBPerMonth)
console.log('Available providers:', info.providers.length)
console.log('Network:', info.serviceParameters.network)

// Get details about a specific provider
const providerInfo = await synapse.getProviderInfo('0x...')
console.log('Provider PDP URL:', providerInfo.pdpUrl)
Download Options
The SDK provides flexible download options with clear semantics:
SP-Agnostic Download (from anywhere)
Download pieces from any available provider using the StorageManager:

// Download from any provider that has the piece
const data = await synapse.storage.download(pieceCid)

// Download with CDN optimization (if available)
const dataWithCDN = await synapse.storage.download(pieceCid, { withCDN: true })

// Prefer a specific provider (falls back to others if unavailable)
const dataFromProvider = await synapse.storage.download(pieceCid, {
  providerAddress: '0x...'
})
Context-Specific Download (from this provider)
When using a StorageContext, downloads are automatically restricted to that specific provider:

// Downloads from the provider associated with this context
const context = await synapse.storage.createContext({ providerAddress: '0x...' })
const data = await context.download(pieceCid)

// The context passes its withCDN setting to the download
const contextWithCDN = await synapse.storage.createContext({ withCDN: true })
const dataWithCDN = await contextWithCDN.download(pieceCid) // Uses CDN if available
CDN Inheritance Pattern
The withCDN option follows a clear inheritance hierarchy:
- Synapse level: Default setting for all operations
- StorageContext level: Can override Synapse's default
- Method level: Can override instance settings
// Example of inheritance
const synapse = await Synapse.create({ withCDN: true })                  // Global default: CDN enabled
const context = await synapse.storage.createContext({ withCDN: false }) // Context override: CDN disabled
await synapse.storage.download(pieceCid)                     // Uses Synapse's withCDN: true
await context.download(pieceCid)                             // Uses context's withCDN: false
await synapse.storage.download(pieceCid, { withCDN: false }) // Method override: CDN disabled
PieceCID
PieceCID is Filecoin's native content address identifier, a variant of CID. When you upload data, the SDK calculates a PieceCID, an identifier that:
- Uniquely identifies your bytes, regardless of size, in a short string form
- Enables retrieval from any provider storing those bytes
- Contains embedded size information
Format Recognition:
- PieceCID: Starts with bafkzcib, 64-65 characters - this is what the Synapse SDK uses
- LegacyPieceCID: Starts with baga6ea4seaq, 64 characters - for compatibility with other Filecoin services
PieceCID is also known as “CommP” or “Piece Commitment” in Filecoin documentation. The SDK exclusively uses PieceCID (v2 format) for all operations—you receive a PieceCID when uploading and use it for downloads.
LegacyPieceCID (v1 format) conversion utilities are provided for interoperability with other Filecoin services that may still use the older format. See PieceCID Utilities for conversion functions.
Technical Reference: See FRC-0069 for the complete specification of PieceCID (“v2 Piece CID”) and its relationship to LegacyPieceCID (“v1 Piece CID”). Most Filecoin tooling currently uses v1, but the ecosystem is transitioning to v2.
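Based on the prefixes listed under Format Recognition, a rough string-level check can tell the two formats apart. This is an illustrative heuristic, not an SDK API; see PieceCID Utilities for proper parsing and conversion:

function classifyPieceCid(cid: string): 'PieceCID (v2)' | 'LegacyPieceCID (v1)' | 'unknown' {
  if (cid.startsWith('bafkzcib')) return 'PieceCID (v2)'           // what the Synapse SDK uses
  if (cid.startsWith('baga6ea4seaq')) return 'LegacyPieceCID (v1)' // legacy CommP form
  return 'unknown'
}

console.log(classifyPieceCid(uploadResult.pieceCid.toString()))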
Browser Integration
The SDK works seamlessly in browsers.
MetaMask Setup
Official documentation for configuring MetaMask with Filecoin (both Mainnet and the Calibration Test network) can be found at: https://docs.filecoin.io/basics/assets/metamask-setup
If you want to add the Filecoin network programmatically, you can use the following code snippet, for Mainnet (change accordingly for Calibration Testnet):
// Add Filecoin network to MetaMask
await window.ethereum.request({
  method: 'wallet_addEthereumChain',
  params: [{
    chainId: '0x13A', // 314 for mainnet
    chainName: 'Filecoin',
    nativeCurrency: { name: 'FIL', symbol: 'FIL', decimals: 18 },
    rpcUrls: ['https://api.node.glif.io/rpc/v1'],
    blockExplorerUrls: ['https://filfox.info/en']
  }]
})
Additional Information
Type Definitions
The SDK is fully typed with TypeScript. Key types include:

- PieceCID - Filecoin Piece Commitment CID (v2)
- LegacyPieceCID - Filecoin Piece Commitment CID (v1)
- TokenAmount - number | bigint for token amounts
- StorageOptions - Options for storage service creation
- AuthSignature - Signature data for authenticated operations
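A brief sketch of these types in use, assuming they are exported from the package root and that upload results are typed with PieceCID:

import type { PieceCID, TokenAmount } from '@filoz/synapse-sdk'

// TokenAmount accepts either number or bigint
const small: TokenAmount = 100
const precise: TokenAmount = 100_000_000_000_000_000_000n // 100 USDFC at 18 decimals

// Upload results carry a PieceCID, which stringifies to the bafkzcib... form
const result = await synapse.storage.upload(new TextEncoder().encode('typed'))
const cid: PieceCID = result.pieceCid
console.log(cid.toString())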
Error Handling
All SDK methods use descriptive error messages with proper error chaining:

try {
  await synapse.payments.deposit(amount)
} catch (error) {
  console.error(error.message) // Clear error description
  console.error(error.cause)   // Underlying error if any
}